Implementing the IBM FlashSystem with IBM Spectrum Virtualize V8.4
IBM Redbooks
February 2021
SG24-8492-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xv.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Chapter 2. Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.2 Planning for availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.3 Physical installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.4 Planning for system management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.4.1 User password creation options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.5 Connectivity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.6 Fibre Channel SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.6.1 Physical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.6.2 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.6.3 N_Port ID Virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.6.4 Inter-node zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.6.5 Back-end storage zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.6.6 Host zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.6.7 Zoning considerations for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . 81
2.6.8 Port designation recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.7 IP SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.7.1 iSCSI and iSER protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.7.2 Priority flow control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.7.3 RDMA clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.7.4 iSCSI back-end storage attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.7.5 IP network host attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.7.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.7.7 Firewall planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.8 Planning topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.8.1 High availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.8.2 Three-site replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.9 Back-end storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.10 Internal storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.11 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.11.1 Child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.11.2 The storage pool and cache relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.12 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.1 Planning for image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.2 Planning for fully allocated volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.3 Planning for thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.4 Planning for compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.12.5 Planning for deduplicated volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.13 Host attachment planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13.1 Queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13.2 Microsoft Offloaded Data Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13.3 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.13.4 Planning for large deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.13.5 Planning for SCSI UNMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.10.5 System menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
4.10.6 Support menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
4.10.7 GUI Preferences menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
4.11 Additional frequent tasks in the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
4.11.1 Renaming components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
4.11.2 Working with enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
4.11.3 Restarting the GUI service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
8.1 Storage migration overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
8.1.1 Interoperability and compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
8.1.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
8.2 Storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.3 Enclosure Upgrade Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
10.2.2 FlashCopy window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
10.2.3 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
10.2.4 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
10.2.5 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
10.2.6 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
10.2.7 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
10.2.8 Creating FlashCopy mappings in a consistency group . . . . . . . . . . . . . . . . . . . 606
10.2.9 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.2.10 Moving FlashCopy mappings across consistency groups . . . . . . . . . . . . . . . 610
10.2.11 Removing FlashCopy mappings from consistency groups . . . . . . . . . . . . . . . 611
10.2.12 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.2.13 Renaming FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
10.2.14 Deleting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
10.2.15 Deleting a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
10.2.16 Starting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
10.2.17 Stopping FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
10.2.18 Memory allocation for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
10.3 Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
10.3.1 Considerations for using Transparent Cloud Tiering. . . . . . . . . . . . . . . . . . . . . 622
10.3.2 Transparent Cloud Tiering as backup solution and data migration. . . . . . . . . . 622
10.3.3 Restoring data by using Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . 623
10.3.4 Transparent Cloud Tiering restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
10.4 Implementing Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
10.4.1 Domain Name System configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
10.4.2 Enabling Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
10.4.3 Creating cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
10.4.4 Managing cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
10.4.5 Restoring cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
10.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
10.6 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
10.6.1 IBM SAN Volume Controller and IBM FlashSystem system layers . . . . . . . . . 637
10.6.2 Multiple IBM Spectrum Virtualize systems replication. . . . . . . . . . . . . . . . . . . . 638
10.6.3 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
10.6.4 Remote Copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
10.6.5 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
10.6.6 Synchronous Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
10.6.7 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
10.6.8 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
10.6.9 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
10.6.10 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
10.6.11 Asynchronous Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
10.6.12 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
10.6.13 Using Global Mirror with Change Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
10.6.14 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
10.6.15 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
10.6.16 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
10.6.17 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
10.6.18 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
10.6.19 IBM Spectrum Virtualize HyperSwap topology . . . . . . . . . . . . . . . . . . . . . . . . 655
10.6.20 Consistency Protection for Global Mirror and Metro Mirror. . . . . . . . . . . . . . . 656
10.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 657
10.6.22 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
10.6.23 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
10.7 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
10.7.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
10.7.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.7.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.7.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
10.7.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 668
10.7.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 669
10.7.7 Changing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 669
10.7.8 Changing a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 669
10.7.9 Starting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 669
10.7.10 Stopping a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 670
10.7.11 Starting a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 670
10.7.12 Stopping a Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . 670
10.7.13 Deleting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 671
10.7.14 Deleting a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 671
10.7.15 Reversing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . 671
10.7.16 Reversing a Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . 672
10.8 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.8.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.8.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.8.3 IP Partnership and data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
10.8.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
10.8.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
10.8.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
10.8.7 Remote Copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
10.8.8 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
10.9 Managing Remote Copy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
10.9.1 Creating a Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
10.9.2 Creating Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
10.9.3 Creating a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
10.9.4 Renaming Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
10.9.5 Renaming a Remote Copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 706
10.9.6 Moving stand-alone Remote Copy relationships to a consistency group . . . . . 707
10.9.7 Removing Remote Copy relationships from a consistency group. . . . . . . . . . . 708
10.9.8 Starting Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
10.9.9 Starting a Remote Copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
10.9.10 Switching a relationship copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
10.9.11 Switching a consistency group direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
10.9.12 Stopping Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
10.9.13 Stopping a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
10.9.14 Deleting Remote Copy relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
10.9.15 Deleting a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
10.10 Remote Copy memory allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
10.11 Troubleshooting Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
10.11.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
10.11.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
Chapter 13. Reliability, availability, and serviceability, monitoring and logging, and
troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
13.1 Reliability, availability, and serviceability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
13.1.1 Node canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
13.1.2 Expansion canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
13.1.3 Dense Drawer Enclosures LED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
13.1.4 Enclosure SAS cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
13.1.5 IBM FlashCore Module drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
13.1.6 Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
13.2 Shutting down the IBM FlashSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
13.2.1 Shutting down and powering on a complete infrastructure . . . . . . . . . . . . . . . . 805
13.3 Removing or adding a node from or to the system . . . . . . . . . . . . . . . . . . . . . . . . . . 805
13.4 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
13.4.1 Backing up by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
13.4.2 Saving the backup by using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
13.5 Software update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
13.5.1 Precautions before the update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
13.5.2 IBM FlashSystem update test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
13.5.3 Updating your IBM FlashSystem to Version 8.4.0. . . . . . . . . . . . . . . . . . . . . . . 814
13.5.4 Updating the IBM FlashSystem drive code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
13.5.5 Manually updating the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
13.6 Health checker feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
13.7 Troubleshooting and fix procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
13.7.1 Managing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
13.7.2 Running a fix procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
13.7.3 Event log details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
13.8 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
13.8.1 Email notifications and the Call Home function. . . . . . . . . . . . . . . . . . . . . . . . . 835
13.8.2 Remote Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
13.8.3 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
13.8.4 Syslog notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
13.9 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
13.10 Collecting support information by using the GUI, CLI, and USB . . . . . . . . . . . . . . . 855
13.10.1 Collecting information by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
13.10.2 Collecting logs by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
13.10.3 Collecting logs by using a USB flash drive . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
13.10.4 Uploading files to the IBM Support Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
13.11 Service Assistant Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
13.12 IBM Storage Insights monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
13.12.1 Capacity monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
13.12.2 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
13.12.3 Logging support tickets by using IBM Storage Insights . . . . . . . . . . . . . . . . . 870
13.12.4 Managing existing support tickets by using IBM Storage Insights and uploading
logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Db2®, DS8000®, Easy Tier®, FICON®, FlashCopy®, HyperSwap®, IBM®, IBM Cloud®, IBM FlashCore®, IBM FlashSystem®, IBM Garage™, IBM Research®, IBM Security™, IBM Spectrum®, Informix®, Insight®, MicroLatency®, PowerHA®, PureSystems®, Real-time Compression Appliance®, Redbooks®, Redbooks (logo)®, Scalable POWERparallel Systems®, Storwize®, XIV®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
OpenShift and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
The solution incorporates some of the top IBM technologies that are typically found only in
enterprise-class storage systems, which raise the standard for storage efficiency in midrange
disk systems. This cutting-edge storage system extends the comprehensive storage portfolio
from IBM and can help change the way organizations address the ongoing information
explosion.
This IBM Redbooks® publication introduces the features and functions of an IBM Spectrum
Virtualize V8.4 system through several examples. This book is intended for pre-sales and
post-sales technical support, marketing, and storage administrators. It helps you
understand the architecture, how to implement it, and how to take advantage of its
industry-leading functions and features.
Authors
This book was produced by a team of specialists from around the world.
Ibrahim Alade Rufai has expertise with designing, building,
and implementing enterprise cloud and artificial intelligence
(AI) projects, storage, and software-defined infrastructure
systems for cognitive products. He helps clients across the
Middle East and Africa design for cognitive business, build with
collaborative innovation, and deliver through a cloud platform
(private, public, hybrid, and multicloud).
Konrad Trojok has been the technical team lead for the
IBM Storage team at System Vertrieb Alexander GmbH for the
last 9 years. His role includes being an active part in the daily
IBM storage business, such as design, implementation, and
taking care of storage solutions. He acts as a strategic advisor
for storage solutions. He has worked on IBM Power Systems
solutions for IBM Scalable POWERparallel Systems®, and
Serial Storage Architecture storage before switching his
technical focus to SAN and SAN storage.
Rodrigo Jungi Suzuki is a SAN Storage specialist at
IBM Brazil Global Delivery Center in Hortolandia. Currently,
Rodrigo is a SME account focal point, and works with projects,
implementations, and support for international clients. He has
20 years of IT industry experience, with the last five years in the
SAN storage area. He has a background in UNIX and
IBM Informix® databases. He holds a bachelor’s degree in
computer science from Universidade Paulista in Sao Paulo,
Brazil, and is an IBM Certified IT Specialist. Rodrigo also is
certified in NetApp NCDA, IBM Storwize V7000 Technical
Solutions V2, and the Information Technology Infrastructure
Library (ITIL).
Thanks to the following for their contributions that made this book possible:
Bill Scales, Evelyn Perez, Jamie Pryde, Jon Tate, Greg Shepherd, Liam P Moyna, Lucy
Harris, Matthew Smith, Suri Polisetti
IBM Hursley, UK
John Bernatz, Joe Consorti, Karen Brown, Mary Connell, Matt Key, Meagan M Miller, Richard
Heffel
IBM US
Markus Oscheka
IBM Germany
Wade Wallace
ITSO Austin, IBM Garage for Systems, US
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
For more information: See the IBM Spectrum Storage portfolio website.
With the introduction of the IBM Spectrum Storage family, the software that runs on
IBM SAN Volume Controller (SVC) and on IBM FlashSystem products is called
IBM Spectrum Virtualize. The name of the underlying hardware platform remains intact.
IBM FlashSystem storage systems are built with award-winning IBM Spectrum Virtualize
software that simplifies infrastructure and eliminates differences in management, function, and
even hybrid multicloud support.
IBM Spectrum Virtualize is an offering that has been available for years for the SVC and
IBM FlashSystem family of storage solutions. It provides an ideal way to manage and protect
huge volumes of data from mobile and social applications, enable rapid and flexible cloud
services deployments, and deliver the performance and scalability that is needed to gain
insights from the latest analytics technologies.
Note: The benefits that are listed are not a complete list of features and functions that are
available with IBM Spectrum Virtualize software.
Note: At the time of writing, this capability can be used only for deduplicated volumes that have no mirroring relationships and that reside within the same pool and I/O group. The mode (RoW or CoW) is selected automatically based on these conditions.
– Comprestimator is always on, which allows the systems to sample each volume at
regular intervals and display the compressibility of the data in the GUI and IBM Storage
Insights at any time.
– Redundant array of independent disks (RAID) Reconstruct Read, which increases
reliability and availability by reducing the chances of DRP going offline because of
fixable array issues. By using RAID capabilities, DRP asks for a specific data block
reconstruction when detecting a potential corruption.
Distributed redundant array of independent disks 1 (DRAID 1) support extends DRAID
advantages to smaller pools of drives, which improves performance over traditional RAID
(TRAID) 1 implementations, allowing a better use of flash technology. These DRAIDs can
support as few as two drives with no rebuild area, and 3 - 16 drives with a single rebuild
area.
With Version 8.4, IBM FlashSystem 5100, IBM FlashSystem 7200, and IBM FlashSystem
9200 systems can support up to 12 storage-class memory (SCM) devices per enclosure
with no slot restriction. Previously, the limit was four SCM drives per enclosure, and they were restricted to the rightmost slots.
Note: With Version 8.3, IBM FlashSystem 5100, IBM FlashSystem 7200, and IBM
FlashSystem 9200 systems can support up to 12 Z-SSD SCM drives or up to four
Optane SCM drives.
The expansion of mirrored virtual disks (VDisks), also known as volumes, allows the
VDisk capacity to be expanded or reduced online without requiring an offline format and
sync. This function improves the availability of the volume for use because the new
capacity is available immediately.
Three-site replication with IBM HyperSwap® support providing improved availability for
data in three-site implementations. This function expands on the DR capabilities that are
inherent in this topology.
Important: Three-site replication that uses Metro Mirror (MM) was previously
supported on Version 8.3.1 only in limited installations through the RPQ process. With
Version 8.4.0, this implementation is generally available.
Host attachment support with Non-Volatile Memory Express over Fibre Channel
(FC-NVMe) in HyperSwap systems.
Domain name server (DNS) support for Lightweight Directory Access Protocol (LDAP)
and Network Time Protocol (NTP) with full DNS length (256 characters).
Updates to maximum configuration limits, which double the FlashCopy mapping limit from 5,000 to
10,000 and increase the HyperSwap volume limit from 1,250 to 2,000.
IBM FlashSystem 5010, IBM FlashSystem 5030, and IBM FlashSystem 5100 deliver entry
enterprise solutions. IBM FlashSystem 7200 provides a midrange enterprise solution.
IBM FlashSystem 9200 and the rack-based IBM FlashSystem 9200R provide two high-end
enterprise solutions.
Even though all the IBM FlashSystem family systems are running the same IBM Spectrum
Virtualize software, the feature set that is available with each of the models is different.
Figure 1-1 on page 5 shows the feature set that is provided by the IBM FlashSystem systems.
Each of the features is described in more detail in further sections of this book.
Note: For an analyst report about the IBM FlashSystem family, see IBM FlashSystem
Family: Ease of Use for All Environments.
As shown in Figure 1-2, the IBM FlashSystem 9200 enclosure consists of redundant PSUs,
node canisters, and fan modules to provide redundancy and HA.
Figure 1-4 shows a picture of the internal hardware components of a node canister. At the left
of the picture is the front of the canister, where the fan modules and battery backup are,
followed by two Cascade Lake CPUs and memory DIMM slots and Peripheral Component
Interconnect Express (PCIe) risers for the adapters on the right.
An IBM FlashSystem 9200 clustered system can contain up to four IBM FlashSystem 9200
systems and up to 3,040 drives in expansion enclosures. IBM FlashSystem 9200 systems
can be clustered with existing Storwize V7000 systems models 524, 624, or 724.
IBM FlashSystem 9200 system > IBM FlashSystem 9100 system > IBM FlashSystem 7200
system > Storwize V7000 system
IBM Storage Insights is responsible for monitoring the system and reporting the capacity that
was used beyond the base 35%, which is then billed on a capacity-used basis. You can
grow or shrink usage, and pay only for the configured capacity.
The IBM FlashSystem Utility Model is provided for customers who can benefit from a variable
capacity system, where billing is based only on actual provisioned space. The hardware is
leased through IBM Global Finance on a three-year lease, which entitles the customer to use
approximately 30 - 40% of the total system capacity at no additional cost (depending on the
individual customer contract). If storage needs increase beyond that initial capacity, usage is
billed based on the average daily provisioned capacity per terabyte per month, on a quarterly
basis.
The system monitors daily provisioned capacity and averages those daily usage rates over
the month term. The result is the average daily usage for the month.
If a customer uses 45 TB, 42.5 TB, and 50 TB in three consecutive months, IBM Storage
Insights calculates the overage as shown in Table 1-1, rounding to the nearest terabyte.
Average daily usage    Base capacity    Overage    Billed (rounded)
45 TB                  40.25 TB         4.75 TB    5 TB
42.5 TB                40.25 TB         2.25 TB    2 TB
50 TB                  40.25 TB         9.75 TB    10 TB
The total capacity that is billed at the end of the quarter in this example is 17 TB, charged at the per-terabyte-per-month rate.
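To make the overage arithmetic concrete, the following minimal Python sketch (not IBM code; the function name and structure are illustrative assumptions) reproduces the calculation described above: the base is 35% of the total system capacity, each month's overage is rounded to the nearest terabyte, and the rounded overages are summed for the quarter.

    def quarterly_overage_tb(total_capacity_tb, monthly_avg_used_tb):
        """Return the total overage (in TB) that is billed for a quarter."""
        base_tb = 0.35 * total_capacity_tb        # for example, 35% of 115 TB = 40.25 TB
        billed = 0
        for used in monthly_avg_used_tb:
            overage = max(0.0, used - base_tb)    # only capacity above the base is billable
            billed += round(overage)              # rounded to the nearest terabyte
        return billed

    # Example from the text: 45 TB, 42.5 TB, and 50 TB in three consecutive months
    print(quarterly_overage_tb(115, [45, 42.5, 50]))  # 5 + 2 + 10 = 17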
Flash drive expansions may be ordered with the system in all supported configurations.
Table 1-2 shows the feature codes that are associated with the IBM FlashSystem 9200 Utility
Model UG8 billing.
Table 1-2 IBM FlashSystem 9200 Utility Model UG8 billing feature codes
1.4.3 IBM FlashSystem 9000 Expansion Enclosure Models AFF and A9F
IBM FlashSystem 9000 Expansion Enclosures Models AFF and A9F can be attached to an
IBM FlashSystem 9200 Control Enclosure to increase the available capacity. It communicates
with the Control Enclosure through a dual pair of 12 Gbps SAS connections. These
Expansion Enclosures can house many flash (solid-state drive (SSD)) SAS type drives.
Figure 1-5 shows the front view of the IBM FlashSystem 9000 Expansion Enclosure Model
AFF.
Figure 1-6 on page 11 shows the front view of the IBM FlashSystem 9000 Expansion
Enclosure Model A9F.
For example, you can combine seven IBM FlashSystem 9000 Expansion Enclosure Model
AFF and one IBM FlashSystem 9000 Expansion Enclosure Model A9F expansions (7 x 1 + 1
x 2.5 = 9.5 chain weight) or two IBM FlashSystem 9000 Expansion Enclosure Model A9F
enclosures and five IBM FlashSystem 9000 Expansion Enclosure Model AFF expansions (2 x
2.5 + 5 x 1 = 10 chain weight).
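As a hedged illustration of this rule, the following minimal Python sketch (not IBM code; the names are illustrative, and it assumes Model AFF counts as chain weight 1, Model A9F as 2.5, and a maximum chain weight of 10 per SAS chain, consistent with the examples above) validates a proposed mix of expansion enclosures on one chain.

    WEIGHTS = {"AFF": 1.0, "A9F": 2.5}      # assumed chain weights per enclosure model
    MAX_CHAIN_WEIGHT = 10.0                 # assumed maximum weight per SAS chain

    def chain_weight(enclosures):
        """Sum the chain weight of the expansion enclosures on one SAS chain."""
        return sum(WEIGHTS[model] for model in enclosures)

    def chain_is_valid(enclosures):
        return chain_weight(enclosures) <= MAX_CHAIN_WEIGHT

    # Examples from the text
    print(chain_weight(["AFF"] * 7 + ["A9F"]))           # 7 x 1 + 1 x 2.5 = 9.5
    print(chain_is_valid(["A9F", "A9F"] + ["AFF"] * 5))  # 2 x 2.5 + 5 x 1 = 10 -> True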
Figure 1-7 IBM FlashSystem 9200 system that is connected to expansion enclosures
The IBM FlashSystem 9200R Rack Solution system has a dedicated FC network for
clustering and optional expansion enclosures, which are delivered assembled in a rack.
Available with two, three, or four clustered IBM FlashSystem 9200 systems and up to four
expansion enclosures, it can be ordered as an IBM FlashSystem 9202R, IBM FlashSystem
9203R, or IBM FlashSystem 9204R system, with the last number denoting the number of
AG8 controller enclosures in the rack.
The final configuration occurs on site following the delivery of the systems. More components
can be added to the rack after delivery to meet the growing needs of the business.
Note: Other than the IBM FlashSystem 9200 control enclosure and its expansion
enclosures, the additional components of this solution are not covered under Enterprise
Class Support (ECS). Instead, they have their own warranty, maintenance terms, and
conditions.
Following the initial order, each 9848 Model AG8 Control Enclosures can be upgraded
through a miscellaneous equipment specification (MES).
More components can be ordered separately and added to the rack within the configuration
limitations of the IBM FlashSystem 9200 system. Clients must ensure that the space, power,
and cooling requirements are met. If assistance is needed with the installation of these
additional components beyond the service that is provided by your IBM System Services
Representative (IBM SSR), IBM Lab Services are available.
Table 1-3 shows the IBM FlashSystem 9200R Rack Solution combinations, the MTMs, and
their associated feature codes.
Key to figures
The key to the symbols that are used in the figures in this section is shown in Table 1-4.
Table 1-4 Key to the symbols that are used in the figures
Label Description
FC SWn FC switch n of 2.
These switches are either both 8977-T32 or they are both
8960-F24.
PDU A, PDU B PDUs. Both have the same rack feature code: #ECJJ, #ECJL,
#ECJN, or #ECJQ.
Figure 1-8 shows the legend that is used to denote the component placement and mandatory
gaps for the figures that show the configurations.
Figure 1-9 Minimum IBM FlashSystem 9200R Rack Solution configuration in the rack
Figure 1-10 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model A9F
Expansion Enclosures
Figure 1-11 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model AFF
Expansion Enclosures
Figure 1-12 shows the FC cabling at the rear of the IBM FlashSystem 9200R Control
Enclosure.
Figure 1-12 FC cabling at the rear of the IBM FlashSystem 9200R Control Enclosure
Note: If there are multiple adapters, install the 32 G FC adapter first, then the 16 G FC
adapter, and then the 25 G Ethernet adapter.
From the top image, this “block diagram” depicts the rear composition of the
IBM FlashSystem 9200 system. It shows a simple composition to draw attention to the
ports for cabling.
– The upper canister (for example, node1) is numbered right to left.
– The lower canister (for example, node2) is numbered left to right.
– Numbers 1, 2, 3, and 4 are used to denote inter-cluster cabling. The CE, IBM SSR,
and LBS use these items to refer to the cabling for the cluster switches.
– H depicts host-facing ports, which are a customer responsibility and a required
selection (otherwise, the hosts cannot use the storage).
– s/h is for attaching optional SAS expansion enclosures or more SAS hosts. The ones
with the lowercase h are an optional choice.
– Where a SAS adapter is not installed, use slot 3 for optional extra host-facing ports.
The h means that they are optional.
Figure 1-13 through Figure 1-17 on page 24 show the numeric cabling for clustering:
CTL1 - CTL4 represent the relative rack position of 1 - 4 (min - max) IBM FlashSystem
9200 Control Enclosures within the rack.
To denote the cable ports:
– N1P1 represents Node1 port 1, which is the farthest right port of the upper node
canister.
– N2P1 represents Node2 port 1, which is the lower node canister, farthest left port.
Figure 1-15 IBM FlashSystem 9200R Rack Solution with IBM SAN24B-6 switches
Figure 1-16 IBM FlashSystem 9200R Rack Solution Model AFF and Model A9F SAS Expansion Enclosure ports
Ports A - H refer to the connections that are made to the IBM FlashSystem 9200 Control
Enclosures.
If required, use the same pattern to connect CTL3 to EXP3 and CTL4 to EXP4.
Because you can choose whether the EXP1 is either the Model A9F or the Model AFF, the
cable patterns are relatively the same, with the diagrams on the left showing the Model A9F
and the diagrams on the right showing the Model AFF.
Figure 1-18 shows the front and rear views of the IBM FlashSystem 7200 system.
Note: The IBM FlashSystem 7200 is also available with the optional purchase of the
ECS, which gives enhanced customer service response times, the services of an
IBM Technical Advisor, and code updates that are applied by IBM through the Remote
Code Load process.
As shown in Figure 1-18, the IBM FlashSystem 7200 enclosure consists of redundant PSUs,
node canisters, and fan modules to provide redundancy and HA.
Figure 1-20 shows a picture of the internal hardware components of a node canister. To the
left of the picture is the front of the canister where fan modules and battery backup are,
followed by two Cascade Lake CPUs and Dual Inline Memory Module (DIMM) slots and PCIe
risers for adapters on the right.
For more information about the drive types that are supported, see 1.14, “IBM FlashCore
Module drives, NVMe SSDs, and SCM drives” on page 57.
IBM FlashSystem 9200 > IBM FlashSystem 9100 > IBM FlashSystem 7200 > Storwize V7000
1.6.2 IBM FlashSystem 7200 Expansion Enclosures 12G, 24G, and 92G
The following types of expansion enclosures are available:
IBM FlashSystem 7200 LFF Expansion Enclosure Model 12G
IBM FlashSystem 7200 SFF Expansion Enclosure Model 24G
IBM FlashSystem 7200 LFF Expansion Enclosure Model 92G
Figure 1-21 IBM FlashSystem 7200 LFF Expansion Enclosure Model 12G
IBM FlashSystem 7200 SFF Expansion Enclosure Model 24G includes the following
components:
Two expansion canisters
12 Gb SAS ports for control enclosure and expansion enclosure attachment
A total of 24 slots for 2.5-inch SAS drives
2U 19-inch rack mount enclosure with AC power supplies
The SFF Expansion Enclosure is a 2U enclosure that includes the following components:
A total of twenty-four 2.5-inch drives (hard disk drives (HDDs) or SSDs).
Two Storage Bridge Bay (SBB)-compliant Enclosure Services Manager (ESM) canisters.
Two fan assemblies, which mount between the drive midplane and the node canisters.
Each fan module is removable when the node canister is removed.
Two power supplies.
An RS232 port on the back panel (3.5 mm stereo jack), which is used for configuration
during manufacturing.
Figure 1-22 Front view of an IBM FlashSystem 7200 SFF Expansion Enclosure
Figure 1-23 on page 29 shows the rear view of the expansion enclosure.
Each dense drawer can hold up to 92 drives that are positioned in four rows of 14 and another
three rows of 12 mounted drive assemblies. Two Secondary Expander Modules (SEMs) are
centrally located in the chassis. One Secondary Expander Module (SEM) addresses 54 drive
ports, and the other addresses 38 drive ports.
The drive slots are numbered 1 - 14, starting from the left rear slot and working from left to
right, back to front.
Each canister in the dense drawer chassis features two SAS ports numbered 1 and 2. The
use of SAS port 1 is mandatory because the expansion enclosure must be attached to an
IBM FlashSystem 7200 node or another expansion enclosure. SAS connector 2 is optional
because it is used to attach to more expansion enclosures.
Each IBM FlashSystem 7200 system can support up to four dense drawers per SAS chain.
For example, you can combine seven 24G and one 92G expansions (7x1 + 1x2.5 = 9.5 chain
weight), or two 92G enclosures, one 12G, and four 24G (2x2.5 + 1x1 + 4x1 = 10 chain
weight).
An example of chain weight 4.5 with one 24G, one 12G, and one 92G enclosures, all correctly
cabled, is shown in Figure 1-25.
Figure 1-25 Connecting SAS cables while complying with the maximum chain weight
IBM Storage Insights is responsible for monitoring the system and reporting the capacity that
was used beyond the base 35%, which is then billed on a capacity-used basis. You can
grow or shrink usage, and pay only for the configured capacity.
For an example of Utility Model billing, see “Example: Total system capacity of 115 TB” on
page 9.
The innovative IBM FlashSystem family is based on a common storage software platform,
IBM Spectrum Virtualize, that provides powerful all-flash and hybrid-flash solutions that offer
feature-rich, cost-effective, and enterprise-grade storage solutions. Its industry-leading
capabilities include a wide range of data services that can be extended to more than 500
heterogeneous storage systems: automated data movement, synchronous and
asynchronous copy services either on-premises or to the public cloud, HA configurations,
storage automated tiering, and data reduction technologies, including deduplication, among
many others.
Available on IBM Cloud® and Amazon Web Services (AWS), IBM Spectrum Virtualize for
Public Cloud works together with IBM FlashSystem 5200 to deliver consistent data
management between on-premises storage and public cloud. You can move data and
applications between on-premises and public cloud, implement new DevOps strategies, use
public cloud for DR without the cost of a second data center, or improve cyberresiliency with
“air gap” cloud snapshots.
IBM FlashSystem 5200 offers world-class customer support, product upgrades, and other
programs:
IBM Storage Expert Care service and support is simple. You can easily select the level of
support and period that best fits your needs with predictable and upfront pricing that is a
fixed percentage of the system cost.
The IBM Data Reduction Guarantee helps reduce planning risks and lower storage costs
with baseline levels of data compression effectiveness in IBM Spectrum Virtualize based
offerings.
The IBM Controller Upgrade Program enables customers of designated all-flash IBM
storage systems to reduce costs while maintaining leading-edge controller technology for
essentially the cost of ongoing system maintenance.
The IBM FlashSystem 5200 control enclosure supports up to twelve 2.5” NVMe-capable flash
drives in a 1U high form factor.
Figure 1-26 shows the IBM FlashSystem 5200 control enclosure front view with 12 NVMe
drives and a 3/4 ISO view as well.
Figure 1-26 IBM FlashSystem 5200 control enclosure front and 3/4 ISO view
Table 1-5 gives a summary of the host connections, drive capacities, features, and standard
options with IBM Spectrum Virtualize that are available on IBM FlashSystem 5200.
Table 1-5 IBM FlashSystem 5200 host, drive capacity, and functions summary
Feature / Function Description
For more information, see V8.4.0.x Configuration Limits and Restrictions for IBM
FlashSystem 5x00.
The IBM FlashSystem 5100 SFF Control Enclosure Models 4H4 and UHB feature the
following components:
Two node canisters, each with an 8-core processor and integrated hardware-assisted
compression acceleration
64 GB cache (32 GB per canister) standard with the option of 192 GB - 576 GB (per
system)
Eight 10 GbE ports standard for iSCSI connectivity or IP replication
16 Gb or 32 Gb FC connectivity options with FC-NVMe support
25 GbE connectivity options with iSCSI or iSER (iSCSI Extensions for RDMA) through
either RoCE v2 or iWARP
Support for up to twenty-four 2.5-inch NVMe flash drives
2U 19-inch rack-mounted enclosure
Figure 1-27 shows the front view of the IBM FlashSystem 5100 Control Enclosure.
Figure 1-27 Front view of an IBM FlashSystem 5100 Control Enclosure with 24 SSD drives
IBM 2078 Model UHB is the IBM FlashSystem 5100 hardware component that is used in the
Storage Utility Offering space. It is physically and functionally identical to the
IBM FlashSystem 5100 Model 4H4, except for target configurations and variable capacity
billing. The variable capacity billing uses IBM Storage Insights to monitor the system usage,
enabling allocated storage usage above a base subscription rate to be billed per terabyte per
month.
Allocated storage is identified as storage that is allocated to a specific host (and unusable to
other hosts), whether data is written or not. For thin provisioning, the data that is written is
considered used. For thick provisioning, total allocated volume space is considered used.
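A minimal Python sketch, assuming hypothetical names, of how this counting rule could be expressed (thin-provisioned volumes count only written data; fully allocated volumes count their full provisioned size):

    def billable_used_gib(provisioned_gib, written_gib, thin_provisioned):
        """Capacity counted as used for utility billing of one volume."""
        return written_gib if thin_provisioned else provisioned_gib

    print(billable_used_gib(1024, 200, thin_provisioned=True))   # thin: 200 GiB counted
    print(billable_used_gib(1024, 200, thin_provisioned=False))  # thick: 1024 GiB counted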
FCM drives integrate IBM MicroLatency® technology, advanced flash management, and
reliability into a 2.5-inch SFF NVMe drive, with built-in, performance-neutral hardware compression
and encryption.
The following 2.5-inch SFF NVMe SCM industry-standard drives are supported in
IBM FlashSystem 5100 4H4 and UHB control enclosures:
375 GB NVMe SCM drive
750 GB NVMe SCM drive
800 GB NVMe SCM drive
1.6 TB NVMe SCM drive
The following 2.5-inch SFF NVMe FCM drives are supported in the IBM FlashSystem 5100
4H4 and UHB Control Enclosures:
4.8 TB NVMe FCM
9.6 TB NVMe FCM
19.2 TB NVMe FCM
38.4 TB NVMe FCM
The following 2.5-inch SFF NVMe industry-standard drives are supported in the
IBM FlashSystem 5100 4H4 and UHB Control Enclosures:
800 GB 2.5-inch 3 Drive Write Per Day (DWPD) NVMe flash drive
1.92 TB 2.5-inch NVMe flash drive
3.84 TB 2.5-inch NVMe flash drive
7.68 TB 2.5-inch NVMe flash drive
15.36 TB 2.5-inch NVMe flash drive
For more information about the drive types, see 1.14, “IBM FlashCore Module drives, NVMe
SSDs, and SCM drives” on page 57.
All drives are dual-port and hot-swappable. Drives can be intermixed where applicable.
Expansion enclosures can be intermixed behind the SFF control enclosure.
Note: Attachment and intermixing of existing IBM Storwize V5100 / V5000 expansion
enclosure models 12F, 24F, and 92F with IBM FlashSystem 5100 expansion enclosure
models 12G, 24G, and 92G is supported by IBM FlashSystem 5100 Model 4H4 and with
Storwize V5000 models 112, 124, 212, 224, 312, and 324 and Storwize V5100 Model 424.
Attachment and intermixing of existing IBM Storwize V5000 / V5100 expansion enclosure
models AFF and A9F with IBM FlashSystem 5100 expansion enclosure models 24G and
92G is supported by Storwize V5000 Model AF3 and Storwize V5100 Model AF4.
The following 2.5-inch SFF flash drives are supported in the expansion enclosures:
400 GB, 800 GB, 1.6 TB, 1.92 TB, 3.2 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB
The following 3.5-inch LFF flash drives are supported in the expansion enclosures:
1.6 TB, 1.92 TB, 3.2 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB
3.5-inch SAS disk drives (Model 12G):
– 900 GB, 1.2 TB, 1.8 TB, and 2.4 TB 10,000 rpm
– 4 TB, 6 TB, 8 TB, 10 TB, 12 TB, 14 TB, and 16 TB 7,200 rpm
3.5-inch SAS drives (Model 92G):
– 1.6 TB, 1.92 TB, 3.2 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB flash drives
– 1.2 TB, 1.8 TB, and 2.4 TB 10,000 rpm
– 6 TB, 8 TB, 10 TB, 12 TB, 14 TB, and 16 TB 7,200 rpm
2.5-inch SAS disk drives (Model 24G):
– 900 GB, 1.2 TB, 1.8 TB, and 2.4 TB 10,000 rpm
– 2 TB 7,200 rpm
2.5-inch SAS flash drives (Model 24G):
400 GB, 800 GB, 1.6 TB, 1.92 TB, 3.2 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB
The following optional adapters are supported: 0 - 1 four-port 16 Gb FC adapter, and 0 - 1 four-port 32 Gb FC adapter.
Onboard ports
Table 1-7 shows the onboard ports. Onboard Ethernet port 1 is a 10 GbE port that is used for the Management IP, Service IP, and Host I/O (iSCSI only).
Figure 1-29 shows all of the connectors of the bottom canister of an IBM FlashSystem 5100 control enclosure.
IBM FlashSystem 5000 is a member of the IBM FlashSystem family of storage solutions.
IBM FlashSystem 5000 delivers increased performance and new levels of storage efficiency
with superior ease of use. This entry storage solution enables organizations to overcome their
storage challenges.
The solution includes technologies to complement and enhance virtual environments, which
deliver a simpler, more scalable, and cost-efficient IT infrastructure. IBM FlashSystem 5000
features two node canisters in a compact, 2U 19-inch rack mount enclosure.
Important: At the time of writing, IBM FlashSystem 5010 and IBM FlashSystem 5030 are
End of Marketing (EOM) and were replaced by the IBM FlashSystem 5015 and
IBM FlashSystem 5035. IBM FlashSystem 5015 and IBM FlashSystem 5035 offer superior
CPU power and memory options, but the features and functions remain the same. We
include the IBM FlashSystem 5015 and IBM FlashSystem 5035 charts only as a reference.
Figure 1-30 shows the IBM FlashSystem 5015 and IBM FlashSystem 5035 SFF control
enclosure front view.
Figure 1-30 IBM FlashSystem 5015 and IBM FlashSystem 5035 SFF control enclosure front view
Figure 1-31 IBM FlashSystem 5015 and 5035 LFF control enclosure front view
Table 1-8 shows the model comparison chart for the IBM FlashSystem 5000 family.
Table 1-8 Machine type and model comparison for the IBM FlashSystem 5000
MTM Full name
Table 1-9 shows a summary of the host connections, drive capacities, features, and standard
options with IBM Spectrum Virtualize that are available on IBM FlashSystem 5015.
Table 1-9 IBM FlashSystem 5015 host, drive capacity, and functions summary
Feature / Function Description
Control enclosure and SAS expansion enclosures supported drives: For SFF enclosures, see Table 1-10; for LFF enclosures, see Table 1-11.
Table 1-10 shows the 2.5-inch supported drives for IBM FlashSystem 5000 family.
Table 1-10 2.5-inch supported drives for the IBM FlashSystem 5000 family
2.5-inch (SFF) Capacity
Table 1-11 shows the 3.5-inch supported drives for IBM FlashSystem 5000 family.
Table 1-11 3.5-inch supported drives for the IBM FlashSystem 5000 family
3.5-inch (LFF) Speed Capacity
Available with the IBM FlashSystem 5035 model, DRPs help transform the economics of data
storage. When applied to new or existing storage, they can increase usable capacity while
maintaining consistent application performance. DRPs can help eliminate or drastically
reduce costs for storage acquisition, rack space, power, and cooling, and can extend the
useful life of existing storage assets. Their capabilities include the following:
Block deduplication that works across all the storage in a DRP to minimize the number of
identical blocks.
New compression technology that ensures consistent 2:1 or better reduction performance
across a wide range of application workload patterns.
SCSI UNMAP support that de-allocates physical storage when operating systems delete
logical storage constructs such as files in a file system.
Table 1-12 IBM FlashSystem 5035 host, drive capacity, and functions summary
Feature / Function Description
Control enclosure and SAS expansion enclosures supported drives: For SFF enclosures, see Table 1-10; for LFF enclosures, see Table 1-11.
For more information, see V8.4.0.x Configuration Limits and Restrictions for IBM
FlashSystem 5015 and IBM FlashSystem 5035.
This next section provides hardware information about the IBM FlashSystem 5010 and 5030
models.
Note: The IBM FlashSystem 5010 solution supports only one SAS expansion chain.
The IBM FlashSystem 5010 control enclosure features the following components:
Two node canisters, each with a two-core processor
16 GB cache (8 GB per canister) with optional 32 GB cache (16 GB per canister) or 64 GB
cache (32 GB per canister)
1 Gb iSCSI connectivity standard with optional 16 Gb FC, 12 Gb SAS, 10 Gb iSCSI
(optical), or 25 Gb iSCSI (optical) connectivity
12 Gb SAS port for expansion enclosure attachment
The LFF enclosure models support up to twelve 3.5-inch drives, and the SFF enclosure
models support up to twenty-four 2.5-inch drives. High-performance disk drives, high-capacity
nearline (NL) disk drives, and flash drives (SSDs) are also supported. Drives of the same form factor
can be intermixed within an enclosure, which provides the flexibility to address performance
and capacity needs in a single enclosure. You can also intermix LFF and SFF expansion
enclosures behind any control enclosure.
Table 1-13 lists the supported 2.5-inch drives for IBM FlashSystem 5000.
Table 1-14 shows the supported 3.5-inch (LFF) drives for IBM FlashSystem 5000.
Figure 1-32 shows the IBM FlashSystem 5010 SFF Control Enclosure with 24 drives.
Figure 1-34 shows the available connectors and light-emitting diodes (LEDs) on a single
IBM FlashSystem 5010 canister.
Figure 1-34 View of available connectors and LEDs on an IBM FlashSystem 5010 single canister
The IBM FlashSystem 5030 control enclosure models offer the highest level of performance,
scalability, and functions and include the following features:
Support for 760 drives per system with the attachment of eight IBM FlashSystem 5000
High-Density LFF Expansion Enclosures and 1,520 drives with a two-way clustered
configuration
DRPs with deduplication, compression, and thin provisioning for improved storage
efficiency
Figure 1-35 shows the IBM FlashSystem 5030 SFF Control Enclosure with 24 drives.
Figure 1-36 shows the rear view of an IBM FlashSystem 5030 Control Enclosure.
Figure 1-37 shows the available connectors and LEDs on a single IBM FlashSystem 5030
canister.
Figure 1-37 View of available connectors and LEDs on an IBM FlashSystem 5030 single canister
In a tiered storage pool, IBM Easy Tier acts to identify this skew and automatically place data
in the appropriate tier to take advantage of it. By moving the hottest data onto the fastest tier
of storage, the workload on the remainder of the storage is reduced. By servicing most of the
application workload from the fastest storage, Easy Tier acts to accelerate application
performance.
Easy Tier is a performance optimization function that automatically migrates (moves) extents
that belong to a volume among different storage tiers based on their I/O load. The movement
of the extents is online and unnoticed from a host perspective.
As a result of extent movement, the volume no longer has all its data in one tier, but rather in
two or three tiers. Each tier provides optimal performance for the extent, as shown in
Figure 1-38.
Easy Tier monitors the I/O activity and latency of the extents on all Easy Tier enabled storage
pools to create heat maps. Based on them, Easy Tier creates an extent migration plan and
promotes (moves) high activity or hot extents to a higher disk tier within the same storage
pool. It also demotes extents whose activity dropped off, or cooled, by moving them from a
higher disk tier managed disk (MDisk) back to a lower tier MDisk.
Storage pools that contain only one tier of storage can also benefit from Easy Tier if they have
multiple disk arrays (or MDisks). Easy Tier has a balancing mode: It moves extents from busy
disk arrays to less busy arrays of the same tier, balancing I/O load.
A common use case is a host application, such as VMware, freeing
storage in a file system. The storage controller can then perform functions to optimize the
space, such as reorganizing the data on the volume so that space is better used.
When a host allocates storage, the data is placed in a volume. To free the allocated space
back to the storage pools, the SCSI UNMAP feature is used. UNMAP enables host OSs to
deprovision storage on the storage controller so that the resources can automatically be freed
in the storage pools and used for other purposes.
A DRP increases infrastructure capacity usage by using new efficiency functions and
reducing storage costs. By using the end-to-end SCSI UNMAP function, a DRP can
automatically de-allocate and reclaim the capacity of thin-provisioned volumes that contain
deleted data so that this reclaimed capacity can be reused by other volumes.
At its core, a DRP uses a Log Structured Array (LSA) to allocate capacity. An LSA enables a
tree-like directory to be used to define the physical placement of data blocks independent of
size and logical location. Each logical block device has a range of logical block addresses
(LBAs), starting from 0 and ending with the block address that fills the capacity.
When data is written, the LSA allocates it sequentially and maintains a directory that
maps each LBA to a physical address within the array. Therefore, the
volume that you create from the pool to present to a host application consists of a directory
that stores the allocation of blocks within the capacity of the pool.
In DRPs, the maintenance of the metadata results in I/O amplification. I/O amplification
occurs when a single host-generated read or write I/O results in more than one back-end
storage I/O request because of advanced functions. A read request from the host results in
two I/O requests: a directory lookup and a data read. A write request from the host results in
three I/O requests: a directory lookup, a directory update, and a data write. This aspect must
be considered when sizing and planning your data-reducing solution.
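As an illustrative example only: a workload of 10,000 host IOPS with a 70/30 read/write split would generate roughly (7,000 x 2) + (3,000 x 3) = 23,000 back-end I/O requests in a DRP, ignoring any caching of directory metadata.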
Standard pools, which make up a classic solution that is also supported by the
IBM FlashSystem storage systems, do not use LSA. A standard pool works as a container
that receives its capacity from MDisks (disk arrays), splits it into extents of the same fixed
size, and allocates extents to volumes.
Standard pools do not cause I/O amplification and require less processing resource usage
compared to DRPs. In exchange, DRPs provide more flexibility and storage efficiency.
Table 1-15 provides an overview of volume capacity saving types that are available with
standard pools and DRPs.
Best practice: If you want to use deduplication, create thin-provisioned compressed and
deduplicated volumes.
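As a minimal illustration of this best practice (the pool and volume names are hypothetical, and the available parameters depend on your code level), a volume of this type can be created from the CLI approximately as follows:

mkvolume -pool DRPool0 -size 2 -unit tb -compressed -deduplicated -name app_vol01

Because compressed volumes in a DRP are thin-provisioned, this command creates a thin-provisioned, compressed, and deduplicated 2 TB volume in the hypothetical pool DRPool0.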
This book provides only an overview of DRP aspects. For more information, see Introduction
and Implementation of Data Reduction Pools and Deduplication, SG24-8430.
In IBM FlashSystem family systems, each volume has virtual capacity and real capacity
parameters. Virtual capacity is the volume storage capacity that is available to a host, and it is
used by the host to create a file system. Real capacity is the storage capacity that is allocated to a
volume from a pool. It shows the amount of space that is used on a physical storage volume.
Fully allocated volumes are created with the same amount of real capacity and virtual
capacity. This type uses no storage efficiency features.
When a fully allocated volume is created on a DRP, it bypasses the LSA structure and works
in the same manner as in a standard pool, so it has no processing impact and provides no
data reduction options at the pool level.
When using fully allocated volumes on the IBM FlashSystem storage systems with FCM
drives, whether a DRP or standard pool is used, capacity savings are achieved by
compressing data with hardware compression that runs on the FCM drives. Hardware
compression on FCM drives is always on and cannot be turned off. This configuration
provides maximum performance in combination with outstanding storage efficiency.
A thin-provisioned volume presents a different capacity to mapped hosts than the capacity
that the volume uses in the storage pool. Therefore, real and virtual capacities might not be
equal. The virtual capacity of a thin-provisioned volume is typically significantly larger than its
real capacity. As more information is written by the host to the volume, more of the real
capacity is used. The system identifies read operations to unwritten parts of the virtual
capacity, and returns zeros to the server without using any real capacity.
In a shared storage environment, thin provisioning is a method for optimizing the use of
available storage. Thin provisioning relies on the allocation of blocks of data on demand,
versus the traditional method of allocating all of the blocks up front. This method eliminates
almost all white space, which helps avoid the poor usage rates that occur in the traditional
storage allocation method where large pools of storage capacity are allocated to individual
servers but remain unused (not written to).
A thin-provisioned volume in a standard pool will not return unused capacity back to the pool
with SCSI UNMAP.
The IBM FlashSystem family DRP compression is based on the Lempel-Ziv lossless data
compression algorithm that operates by using a real-time method. When a host sends a write
request, the request is acknowledged by the write cache of the system, and then staged to
the DRP.
As part of its staging, the write request passes through the compression engine and is stored
in a compressed format. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage. This process occurs transparently to host systems, which makes
them unaware of the compression.
The IBM Comprestimator tool is available to check whether your data is compressible. It
estimates the space savings that are achieved when using compressed volumes. This utility
provides a quick and easy view of the expected benefits of using compression.
IBM Comprestimator can be run from the system GUI or command-line interface (CLI), and it
checks data that is already stored on the system. In DRPs, IBM Comprestimator is always on
starting at code level 8.4, so you can display the compressibility of the data in the GUI and
IBM Storage Insights at any time. It is also available as a stand-alone, host-based utility that
can analyze data on IBM or third-party storage devices. For more information, see
Comprestimator Utility Version 1.5.3.1.
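For example, assuming a volume with ID 0, the on-system estimation can be started and its results displayed from the CLI approximately as follows (the exact commands and output depend on your code level):

analyzevdisk 0
lsvdiskanalysis 0

The first command starts the analysis of the volume, and the second command displays the estimated compression and thin-provisioning savings after the analysis completes.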
Deduplication can be configured with thin-provisioned and compressed volumes in DRPs for
added capacity savings. The deduplication process identifies unique chunks of data, or byte
patterns, and stores a signature of the chunk for reference when writing new data chunks.
If the new chunk’s signature matches an existing signature, the new chunk is replaced with a
small reference that points to the stored chunk. The matches are detected when the data is
written. The same byte pattern might occur many times, which greatly reduces the amount of
data that must be stored.
Compression and deduplication are not mutually exclusive: one, both, or neither feature
can be enabled. If the volume is deduplicated and compressed, data is deduplicated first, and
then compressed. Therefore, deduplication references are created on the compressed data
that is stored on the physical domain.
Encryption is performed by the IBM FlashSystem controllers for data that is stored within the
entire system, the IBM FlashSystem Control Enclosure, all attached expansion enclosures,
and for data that is stored as externally virtualized by the IBM FlashSystem storage systems.
Encryption is the process of encoding data so that only authorized parties can read it. Data
encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses a
256-bit symmetric encryption key in XTS mode, as defined in the IEEE 1619-2007 standard
and NIST Special Publication 800-38E as XTS-AES-256.
There are two types of encryption on devices running IBM Spectrum Virtualize: hardware
encryption and software encryption. Which method is used for encryption is chosen
automatically by the system based on the placement of the data:
Hardware encryption: Data is encrypted by using SAS hardware. It is used only for internal
storage (drives).
Software encryption: Data is encrypted by using the nodes’ CPU (the encryption code
uses the AES-NI CPU instruction set). It is used only for external storage that is virtualized
by the IBM FlashSystem storage systems.
Both methods of encryption use the same encryption algorithm, key management
infrastructure, and license.
Note: Only data-at-rest is encrypted. Host to storage communication and data that is sent
over links that are used for remote mirroring are not encrypted.
The IBM FlashSystem also supports self-encrypting drives, where data encryption is
completed in the drive itself.
Before encryption can be enabled, ensure that a license was purchased and activated.
VVOLs simplify operations through policy-driven automation that enables more agile storage
consumption for VMs and dynamic adjustments in real time when they are needed. It
simplifies the delivery of storage service levels to individual applications by providing finer
control of hardware resources and native array-based data services that can be instantiated
with VM granularity.
With VVOLs, VMware offers a paradigm in which an individual VM and its disks, rather than a
logical unit number (LUN), becomes a unit of storage management for a storage system. It
encapsulates VDisks and other VM files, and natively stores the files on the storage system.
For more information about VVOLs and the actions that are required to implement this feature
on the host side, see the VMware website.
IBM support for VASA is provided by IBM Spectrum Connect, which enables communication
between the VMware vSphere infrastructure and the IBM FlashSystem system. The
IBM FlashSystem administrator can assign ownership of VVOLs to IBM Spectrum Connect by
creating a user with the VASA Provider security role.
Although the system administrator can complete certain actions on volumes and pools that
are owned by the VASA Provider security role, IBM Spectrum Connect retains management
responsibility for VVOLs. For more information about IBM Spectrum Connect, see the
IBM Spectrum Connect documentation.
IBM FlashSystem systems use a GUI with the same look and feel across all platforms for a consistent
management experience. The GUI has an improved overview dashboard that provides all
information in an easy-to-understand format and enables visualization of effective capacity.
With the GUI, you can quickly deploy storage and manage it efficiently.
Figure 1-39 on page 51 shows the IBM FlashSystem GUI dashboard view. This view is the
default that is displayed after the user logs on to the system.
The IBM FlashSystem storage systems also provide a CLI, which is useful for advanced
configuration and scripting.
The systems support SNMP, email notifications that use Simple Mail Transfer Protocol
(SMTP), and syslog redirection for complete enterprise management access.
If the system is entitled for support, a Problem Management Record (PMR) is automatically
created and assigned to the appropriate IBM Support team. The information that is provided
to IBM is an excerpt from the event log containing the details of the error, and client contact
information from the system. IBM Service Personnel contact the client and arrange service on
the system, which can greatly improve the speed of resolution by removing the need for the
client to detect the error and raise a support call themselves.
The system supports two methods to transmit notifications to the support center:
Call Home with cloud services
Call Home with cloud services sends notifications directly to a centralized file repository
that contains troubleshooting information that is gathered from customers. Support
personnel can access this repository and automatically be assigned issues as problem
reports.
This method of transmitting notifications from the system to support removes the need for
customers to create problem reports manually. Call Home with cloud services also
eliminates email filters dropping notifications to and from support, which can delay
resolution of problems on the system.
This method sends notifications only to the predefined support center.
IBM highly encourages all clients to take advantage of the Call Home feature so that you and
IBM can collaborate for your success.
When you order any IBM FlashSystem storage system, IBM Storage Insights is available at
no additional cost. With this version, you can monitor the basic health, status, and
performance of various storage resources.
IBM Storage Insights is a part of the monitoring and helps to ensure continued availability of
the IBM FlashSystem storage systems.
The tool provides a single dashboard that gives you a clear view of all your IBM block and file
storage and some other storage vendors (the IBM Storage Insights Pro version is required to
view other storage vendors’ storage). You can make better decisions by seeing trends in
performance and capacity. With storage health information, you can focus on areas that need
attention. When IBM Support is needed, IBM Storage Insights simplifies uploading logs,
speeds resolution with online configuration data, and provides an overview of open tickets, all
in one place.
The following features are some of the ones that are available with IBM Storage Insights:
A unified view of IBM systems:
– Provides a single view of all of your systems’ characteristics.
– Shows all of your IBM storage inventory.
– Provides a live event feed so that you know in real time what is going on with your
storage so that you can act fast.
IBM Storage Insights collects telemetry data and Call Home data and provides real-time
system reporting of capacity and performance.
In order for IBM Storage Insights to operate, a lightweight data collector must be deployed in
your data center to stream only system metadata to your IBM Cloud instance. The metadata
flows in one direction: from your data center to IBM Cloud over HTTPS. The actual
application data that is stored on the storage systems cannot be accessed by the data
collector. In
IBM Cloud, your metadata is AES256-encrypted and protected by physical, organizational,
access, and security controls.
For more information about IBM Storage Insights, see the following websites:
IBM Storage Insights Fact Sheet
Functional demonstration environment
IBM Storage Insights security information
IBM Storage Insights registration
The RESTful apiserver does not consider transport security (such as Secure Sockets Layer
(SSL)), but instead assumes that requests are initiated from a local, secured server. The
HTTPS protocol provides privacy through data encryption. The RESTful API provides more
security by requiring command authentication, which persists for 2 hours of activity or 30
minutes of inactivity, whichever occurs first.
Uniform Resource Locators (URLs) target different node objects on the system. The HTTPS
POST method acts on command targets that are specified in the URL. To make changes or
view information about different objects on the system, you must create and send a request to
the system. You must provide certain elements for the RESTful apiserver to receive and
transform the request into a command.
To interact with the system by using the RESTful API, make an HTTPS command request
with a valid configuration node URL destination. Open TCP port 7443 and include the
keyword rest, and then use the following URL format for all requests:
https://system_node_ip:7443/rest/command
Where:
system_node_ip is the system IP address, which is the address that is taken by the
configuration node of the system.
The port number is always 7443 for the IBM Spectrum Virtualize RESTful API.
rest is a keyword.
command is the target command object (such as auth or lseventlog with any
parameters). The command specification follows this format:
command_name,method="POST",headers={'parameter_name': 'parameter_value',
'parameter_name': 'parameter_value',...}
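For example, a request can be authenticated and a command run with an HTTPS client such as curl. The following sketch assumes the superuser account and an illustrative password; the X-Auth-Username, X-Auth-Password, and X-Auth-Token header names are the ones that are used by recent IBM Spectrum Virtualize code levels, and the -k option skips verification of the system’s self-signed certificate:

curl -k -X POST -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: passw0rd' https://system_node_ip:7443/rest/auth

curl -k -X POST -H 'X-Auth-Token: <token_from_auth_response>' https://system_node_ip:7443/rest/lseventlog

The first request returns an authentication token, which is then passed in the X-Auth-Token header of subsequent command requests, such as lseventlog.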
Volume mirroring
By using volume mirroring, a volume can have two physical copies in one IBM FlashSystem
system. Each volume copy can belong to a different pool and use a different set of capacity
saving features.
When a host writes to a mirrored volume, the system writes the data to both copies. When a
host reads a mirrored volume, the system picks one of the copies to read. If one of the
mirrored volume copies is temporarily unavailable, the volume remains accessible to servers.
The system remembers which areas of the volume are written, and resynchronizes these
areas when both copies are available.
Volume mirroring can be used to migrate data to or from an IBM FlashSystem family system.
For example, you can start with a non-mirrored image mode volume in the migration pool, and
then add a copy to that volume in the destination pool on internal storage. After the volume is
synchronized, you can delete the original copy that is in the source pool. During the
synchronization process, the volume remains available.
Volume mirroring is also used to convert fully allocated volumes to use data reduction
technologies, such as thin-provisioning, compression, or deduplication, or to migrate volumes
between storage pools.
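As a minimal sketch of the migration use case (the volume and pool names are hypothetical), the following CLI sequence adds a copy in the destination pool, monitors synchronization, and then removes the original copy:

addvdiskcopy -mdiskgrp DestPool0 migr_vol01
lsvdisksyncprogress migr_vol01
rmvdiskcopy -copy 0 migr_vol01

The -copy 0 parameter assumes that the original copy has ID 0; verify the copy IDs with lsvdiskcopy before removing a copy.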
FlashCopy
The FlashCopy or snapshot function creates a point-in-time (PiT) copy of data that is stored
on a source volume to a target volume. FlashCopy is sometimes described as an instance of
a time-zero (T0) copy. Although the copy operation takes some time to complete, the resulting
data on the target volume is presented so that the copy appears to have occurred
immediately, and all data is available immediately. Advanced functions of FlashCopy allow
operations to occur on multiple source and target volumes.
Management operations are coordinated to provide a common, single PiT for copying target
volumes from their respective source volumes to create a consistent copy of data that spans
multiple volumes.
The function also supports multiple target volumes to be copied from each source volume,
which can be used to create images from different PiTs for each source volume.
FlashCopy is used to create consistent backups of dynamic data and test applications, and to
create copies for auditing purposes and for data mining. It can be used to capture the data at
a particular time to create consistent backups of dynamic data. The resulting image of the
data can be backed up, for example, to a tape device. When the copied data is on tape, the
data on the FlashCopy target disks becomes redundant and can be discarded.
FlashCopy can perform a restore from any existing FlashCopy mapping. Therefore, you can
restore (or copy) from the target to the source of your regular FlashCopy relationships. When
restoring data from FlashCopy, this method can be qualified as reversing the direction of the
FlashCopy mappings. This approach can be used for various applications, such as recovering
a production database application after an errant batch process that caused extensive
damage.
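For example (the volume and mapping names are hypothetical, and the target volume must exist and be the same size as the source), a basic FlashCopy mapping can be created and started from the CLI approximately as follows:

mkfcmap -source prod_vol01 -target prod_vol01_bkp -copyrate 50 -name fcmap01
startfcmap -prep fcmap01

The -prep option flushes the cache of the source volume before the point-in-time copy is started.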
For an RC relationship, one volume is designated as the primary and the other volume is
designated as the secondary. Host applications write data to the primary volume, and
updates to the primary volume are copied to the secondary volume. Normally, host
applications do not run I/O operations to the secondary volume.
Note: All three types of RC are supported to work over an IP link, but the recommended
type is GMCV.
1.13.1 HyperSwap
The IBM HyperSwap function is a HA feature that provides dual-site, active-active access to a
volume and is available on systems that can support more than one I/O group.
With HyperSwap, a fully independent copy of the data is maintained at each site. When data
is written by hosts at either site, both copies are synchronously updated before the write
operation is completed. The HyperSwap function automatically optimizes itself to minimize
data that is transmitted between two sites, and to minimize host read and write latency.
If the system or the storage at either site goes offline and an online and accessible up-to-date
copy is left, the HyperSwap function can automatically fail over access to the online copy. The
HyperSwap function also automatically resynchronizes the two copies when possible.
The HyperSwap function works with the standard multipathing drivers that are available on a
wide variety of host types, with no additional host support required to access the highly
available (HA) volume. Where multipathing drivers support Asymmetric Logical Unit Access
(ALUA), the storage system tells the multipathing driver which nodes are closest to it and
should be used to minimize I/O latency. You tell the storage system which site a host is
connected to, and it configures host pathing optimally.
The following IBM FlashSystem products support all three versions of these drives:
IBM FlashSystem 9200 system
IBM FlashSystem 9200R Rack Solution system
IBM FlashSystem 7200 system
IBM FlashSystem 5100 system
Figure 1-41 shows an FCM (NVMe) with a capacity of 19.2 TB that is built by using 64-layer
Triple Level Cell (TLC) flash memory and an Everspin MRAM cache into a U.2 form factor.
FCM drives are designed for high parallelism and optimized for 3D TLC and updated FPGAs.
IBM also enhanced the FCM drives by adding read cache to reduce latency on highly
compressed pages, and added four-plane programming to lower the overall power during
writes. FCM drives offer hardware-assisted compression up to 3:1 and are FIPS 140-2
compliant.
FCM drives carry IBM Variable Stripe RAID (VSR) at the FCM level and use DRAID to protect
data at the system level. VSR and DRAID together optimize RAID rebuilds by offloading
rebuilds to DRAID, and they offer protection against FCM failures.
Storage-class memory
SCM drives use persistent memory technologies that improve endurance and reduce the
latency of flash storage device technologies. All SCM drives use the NVMe architecture.
IBM Research® is actively engaged in researching these new technologies.
For more information about nanoscale devices, see Storage Class Memory at Almaden.
For a comprehensive overview of the flash drive technology see the SNIA Educational
Library.
Easy Tier supports the SCM drives with a new tier that is called tier_scm.
Note: The SCM drive type supports only DRAID 6, DRAID 5, DRAID 1, and TRAID 1 or 10.
Applications typically read and write data as vectors of bytes or records. However, storage
presents data as vectors of blocks of a constant size (512 bytes or, in newer devices, 4096 bytes
per block).
The file, record, and namespace virtualization and file and record subsystem layers convert
records or files that are required by applications to vectors of blocks, which are the language
of the block virtualization layer. The block virtualization layer maps requests of the higher
layers to physical storage blocks, which are provided by storage devices in the block
subsystem.
Each of the layers in the storage domain abstracts away complexities of the lower layers and
hides them behind an easy-to-use, standard interface that is presented to upper layers. The
resultant decoupling of logical storage space representation and its characteristics that are
visible to servers (storage consumers) from underlying complexities and intricacies of storage
devices is a key concept of storage virtualization.
The focus of this publication is block-level virtualization at the block virtualization layer,
which is implemented by IBM as IBM Spectrum Virtualize software that is running on an SVC
and the IBM FlashSystem family. The SVC is implemented as a clustered appliance in the
storage network layer. The IBM FlashSystem storage systems are deployed as modular
systems that can virtualize their internally and externally attached storage.
IBM Spectrum Virtualize uses the SCSI protocol to communicate with its clients and presents
storage space as SCSI logical units (LUs), which are identified by SCSI LUNs.
Note: Although LUs and LUNs are different entities, the term LUN in practice is often used
to refer to a logical disk, that is, an LU.
Although most applications do not directly access storage but work with files or records, the
operating system (OS) of a host must convert these abstractions to the language of storage,
that is, vectors of storage blocks that are identified by LBAs within an LU.
With storage virtualization, you can manage the mapping between logical blocks within an LU
that is presented to a host, and blocks on physical drives. This mapping can be as simple or
as complicated as required. A logical block can be mapped to one physical block, or for
increased availability, multiple blocks that are physically stored on different physical storage
systems, and in different geographical locations.
Importantly, the mapping can be dynamic: With Easy Tier, IBM Spectrum Virtualize can
automatically change underlying storage to which groups of blocks (extent) are mapped to
better match a host’s performance requirements with the capabilities of the underlying
storage systems.
IBM Spectrum Virtualize gives a storage administrator a wide range of options to modify
volume characteristics, from volume resize to mirroring, creating a point-in-time (PiT) copy
with FlashCopy, and migrating data across physical storage systems. Importantly, all the
functions that are presented to the storage users are independent from the characteristics of
the physical devices that are used to store data. This decoupling of the storage feature set
from the underlying hardware and ability to present a single, uniform interface to storage
users that masks underlying system complexity is a powerful argument for adopting storage
virtualization with IBM Spectrum Virtualize.
You can use IBM FlashSystem to preserve your existing investments in storage, centralize
management, and make storage migrations easier with storage virtualization and Easy Tier.
Virtualization helps insulate applications from changes that are made to the physical storage
infrastructure.
To verify whether your storage can be virtualized by IBM FlashSystem, see the IBM System
Storage Interoperation Center (SSIC).
All the IBM FlashSystem family models can migrate data from external storage controllers,
including migrating from any other IBM or third-party storage systems. IBM FlashSystem
uses the functions that are provided by its external virtualization capability to perform the
migration. This capability places external LUs under the control of an IBM FlashSystem
system. Then, hosts continue to access them through the IBM FlashSystem system, which
acts as a proxy.
The GUI of the IBM FlashSystem family provides a storage migration wizard, which simplifies
the migration task. The wizard features intuitive steps that guide users through the entire
process.
Note: The IBM FlashSystem 5010 and IBM FlashSystem 5030 systems do not support
external virtualization for any purpose other than data migration.
IBM Spectrum Virtualize running on the IBM FlashSystem family is a mature, 10th-generation
virtualization solution that uses open standards and complies with the SNIA storage model.
All the products are appliance-based storage, and use in-band block virtualization engines
that move the control logic (including advanced storage functions) from a multitude of
individual storage devices to a centralized entity in the storage network.
IBM Spectrum Virtualize can improve the usage of your storage resources, simplify storage
management, and improve the availability of business applications.
The HyperSwap feature provides HA volumes that are accessible through two sites at up to
300 km apart. A fully independent copy of the data is maintained at each site. When data is
written by hosts at either site, both copies are synchronously updated before the write
operation is completed. The HyperSwap feature automatically optimizes itself to minimize
data that is transmitted between sites and to minimize host read and write latency.
IBM Spectrum Virtualize V8.4 expands the three-site replication model to include HyperSwap,
which improves data availability options in three-site implementations. Systems that are
configured in a three-site topology have high DR capabilities, but a disaster might take the
data offline until the system can be failed over to an alternative site. HyperSwap allows
active-active configurations to maintain data availability, eliminating the need to fail over if
communications are disrupted. This solution provides a more robust environment, allowing up
to 100% uptime for data, and recovery options inherent to DR solutions.
To better assist with three-site replication solutions, IBM Spectrum Virtualize 3-Site
Orchestrator coordinates replication of data for DR and HA scenarios between systems.
IBM Spectrum Virtualize 3-Site Orchestrator is a command-line based application that runs
on a separate Linux host that configures and manages supported replication configurations
on IBM Spectrum Virtualize products.
For more information about this type of implementation, see Spectrum Virtualize 3-Site
Replication, SG24-8474.
1.17 Licensing
All IBM FlashSystem functional capabilities are provided through IBM Spectrum Virtualize
software, and each platform is licensed as described in the following sections.
1.17.1 Licensing IBM FlashSystem 9200, IBM FlashSystem 9200R, and IBM
FlashSystem 7200
The IBM FlashSystem 9200 system has the same licensing scheme as the IBM FlashSystem
9200R system and the IBM FlashSystem 7200 system. They have all-inclusive licensing for
all functions except encryption, which is a country-limited feature code, and external
virtualization.
Any externally virtualized storage requires the External Virtualization license per storage
capacity unit (SCU) that is based on the tier of storage that is available on the external
storage system. In addition, if you use FlashCopy and Remote Mirroring on an external
storage system, you must purchase a per-tebibyte license to use these functions.
The SCU is defined in terms of the category of the storage capacity, as listed in Table 1-19 on
page 67.
Flash: All flash devices, other than SCM drives. One SCU equates to 1.18 TiB usable of
Category 2 storage.
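For example (illustrative values only), virtualizing 118 TiB of usable external flash capacity would require 100 SCUs, because each SCU covers 1.18 TiB of capacity in that category.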
In addition to these enclosure-based licensed functions, the system also supports encryption
through a key-based license.
If you use a trial license, the system warns you when the trial is about to expire at regular
intervals. If you do not purchase and activate the license on the system before the trial license
expires, all configurations that use the trial licenses are suspended.
The encryption feature uses a key-based license that is activated by an authorization code.
The authorization code is sent with the IBM FlashSystem 5100 Licensed Function
Authorization documents that you receive after purchasing the license.
The Encryption USB Flash Drives (Four Pack) feature or an external key manager such as
IBM Security Key Lifecycle Manager is required for encryption key management.
Each function is licensed to an IBM FlashSystem 5000 control enclosure. It covers the entire
system (control enclosure and all attached expansion enclosures) if it consists of one I/O
group. If the IBM FlashSystem 5030 system consists of two I/O groups, two keys are required.
To help evaluate the benefits of these new capabilities, Easy Tier and RC licensed functions
can be enabled at no additional charge for a 90-day trial. Trials are started from the
IBM FlashSystem management GUI and do not require any IBM intervention. When the trial
expires, the function is automatically disabled unless a license key for that function is installed
onto the machine.
If you use a trial license, the system warns you at regular intervals when the trial is about to
expire. If you do not purchase and activate the license on the system before the trial license
expires, all configurations that use the trial licenses are suspended.
Note: The encryption hardware feature is available only on the IBM FlashSystem 5030 (not on
the IBM FlashSystem 5010).
This encryption feature uses a key-based license and is activated with an authorization code.
The authorization code is sent with the IBM FlashSystem 5000 Licensed Function
Authorization documents that you receive after purchasing the license.
The Encryption USB flash drives (Four Pack) feature or IBM Security Key Lifecycle Manager
is required for encryption key management.
Chapter 2. Planning
This chapter describes the steps that are required to plan the installation and configuration of
IBM FlashSystem systems in your storage network. Not all features that are described in this
chapter are available and supported on all IBM FlashSystem systems. To learn which product
features are supported on your IBM FlashSystem system, see 1.3, “IBM
FlashSystem family” on page 4.
This chapter is not intended to provide in-depth information about the described topics; it
provides only general guidelines. For an enhanced analysis, see IBM FlashSystem 9200 and
9100 Best Practices and Performance Guidelines, SG24-8448, IBM System Storage SAN
Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and
Performance Guidelines, SG24-7521, and IBM FlashSystem 9100 Architecture,
Performance, and Implementation, SG24-8425.
Note: Make sure that the planned configuration is reviewed by IBM or an IBM Business
Partner before implementation. Such a review can both increase the quality of the final
solution and prevent configuration errors that might impact the solution delivery.
The general rule of planning is to define your goals, and then plan a solution that enables you
to reach those goals.
Note: Contact your IBM sales representative or IBM Business Partner to perform
these calculations.
Assess your recovery point objective (RPO) / recovery time objective (RTO) requirements
and plan for high availability (HA) and Remote Copy (RC) functions. Decide whether you
require a dual-site or three-site deployment, and decide whether you must implement RC
and determine its type (synchronous or asynchronous). Review the extra configuration
requirements that are imposed.
Define the number of input/output (I/O) groups (control enclosures) and expansion
enclosures. The number of necessary enclosures depends on the solution type, overall
performance, and capacity requirements.
Plan for host attachment interfaces, protocols, and storage area network (SAN). Consider
the number of ports, bandwidth requirements, and HA.
Perform configuration planning by defining the number of internal storage arrays and
external storage arrays that will be virtualized. Define the number and type of pools, the
number of volumes, and the capacity of each of the volumes.
Define a naming convention for the system nodes, volumes, and other storage objects.
Plan a management IP network and management users’ authentication system.
Plan for the physical location of the equipment in the rack.
Verify that your planned environment is a supported configuration.
Verify that your planned environment does not exceed system configuration limits.
Note: For more information about your platform and code version, see Configuration
Limits and Restrictions.
Review the planning aspects that are described in the following sections of this chapter.
For more information about power and environmental requirements, see the IBM
Documentation information that is relevant to your IBM FlashSystem platform. For example,
to see the IBM FlashSystem 9200 related information, go to IBM FlashSystem 9200
documentation and expand Planning → Planning for hardware → Physical installation
planning, and then select Connections for control enclosures and SAS expansion
enclosure requirements.
Your system order includes a printed copy of the Quick Installation Guide, which also provides
information about environmental and power requirements.
Create a cable connection table that follows your environment’s documentation procedure to
track the following connections that are required for the setup:
Power
Serial-attached Small Computer System Interface (SCSI) (SAS)
Ethernet
Fibre Channel (FC)
When planning for power, plan for a separate independent power source for each of the two
redundant power supplies of a system enclosure.
Distribute your expansion enclosures between control enclosures and SAS chains, as
described in 13.1.4, “Enclosure SAS cabling” on page 801. For more information, see the IBM
Documentation information that is relevant to your IBM FlashSystem platform. For example,
to see the IBM FlashSystem 9200 related information, go to IBM FlashSystem 9200
documentation and expand Installing → Connecting the components → Connecting 2U
expansion enclosures to the control enclosure.
When planning SAN cabling, make sure that your physical topology adheres to zoning rules
and recommendations.
The physical installation and initial setup of IBM FlashSystem 9100 and IBM FlashSystem
9200 is performed by an IBM System Services Representative (IBM SSR).
IBM FlashSystem 7200, IBM FlashSystem 5100, IBM FlashSystem 5030, and IBM
FlashSystem 5010 are classified as Customer Setup Units (CSUs), and the physical
installation and initial setup is the responsibility of the customer. IBM can be contracted to
perform these services for a fee.
On the IBM FlashSystem 5010, as opposed to other platforms, the technician port is not
dedicated. On that system, after the initial configuration, it is converted to a regular
Ethernet port that can be connected to the network and used for management tasks and to
serve I/O to hosts with internet Small Computer Systems Interface (iSCSI).
For management, each system node requires at least one Ethernet connection. The cable
must be connected to port 1, which is a 10 Gbps Ethernet port (it does not negotiate speeds
below 1 Gbps). For increased availability, an optional management connection may be
configured over Ethernet port 2.
For configuration and management, you must allocate an IP address to each node canister,
which is referred to as the service IP address. Both IPv4 and IPv6 are supported.
In addition to a service IP address on each node, each system has a cluster management IP
address. The cluster management IP address cannot be the same as any of the defined
service IP addresses. The cluster management IP can automatically fail over between cluster
nodes if there are maintenance actions or a node failure.
Ethernet ports 1 and 2 are not reserved only for management. They may be also used for
iSCSI or IP replication traffic if they are configured to do so. However, management and
service IP addresses cannot be used for host or back-end storage communication.
System management is performed by using an embedded GUI that is running on the nodes;
the command-line interface (CLI) is also available. To access the management GUI, point a
web browser to the cluster management IP address. To access the management CLI, point a
Secure Shell (SSH) client to a cluster management IP and use the default SSH protocol port
(22/TCP).
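For example, assuming a cluster management IP address of 192.168.1.100, the CLI can be reached and a basic query run as follows:

ssh superuser@192.168.1.100
lssystem

The lssystem command returns a summary of the system configuration, capacity, and code level.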
By connecting to a service IP address with a browser or SSH client, you can access the
Service Assistant Interface, which may be used for maintenance and service tasks.
When you plan your management network, note that the IP Quorum applications and
Transparent Cloud Tiering (TCT) are communicating with a system through the management
ports. For more information about cloud backup requirements, see 10.3, “Transparent Cloud
Tiering” on page 621.
Password policy support allows administrators to set security rules that are based on your
organization's security guidelines and restrictions. The system supports the password and
security-related rules that are described in the following subsections.
Set a password to expire immediately.
Set number of failed login attempts before the account is locked.
Set a period for locked accounts.
Automatic log out for inactivity.
Locking superuser account access.
Note: Systems that support a dedicated technician port can lock the superuser account.
The superuser account is the default user that can complete installation, initial
configuration, and other service-related actions on the system. If the superuser account is
locked, service tasks cannot be completed.
For more information about implementing these features, see Chapter 4, “IBM Spectrum
Virtualize GUI” on page 155.
Table 2-1 lists the communication types that can be used for communicating between system
nodes, hosts, and back-end storage systems. All types can be used concurrently.
For example, FC-NVMe can be used only for host attachment; it is not used for node-to-node, back-end storage, or system-to-system communication.
In an environment where you have a fabric with mixed port speeds (8 Gb, 16 Gb, and 32 Gb),
the best practice is to connect the system to the switch operating at the highest speed.
The connections between the system’s enclosures (node-to-node traffic) and between a
system and the virtualized back-end storage require the best available bandwidth. For optimal
performance and reliability, ensure that paths between the system nodes and storage
systems do not cross inter-switch links (ISLs). If you use ISLs on these paths, make sure that
sufficient bandwidth is available. SAN monitoring is required to identify faulty ISLs.
No more than three ISL hops are permitted among nodes that are in the same system but in
different I/O groups. If your configuration requires more than three ISL hops for nodes that are
in the same system but in different I/O groups, contact your IBM Support Center.
Direct connection of the system FC ports to host systems or between nodes in the system
without using an FC switch is supported. For more information, see the IBM Documentation
information that is relevant to your IBM FlashSystem platform. For example, for
IBM FlashSystem 9200 related information, go to IBM FlashSystem 9200 documentation and
expand Planning → Planning your network and storage network → Planning for a
direct-attached configuration.
For the planning and topology requirements for HyperSwap configurations, see IBM
Spectrum Virtualize HyperSwap SAN Implementation and Design Best Practices,
REDP-5597 and IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware
Implementation, SG24-8317.
For the planning and topology requirements for three-site replication configurations, see
Spectrum Virtualize 3-Site Replication, SG24-8474.
2.6.2 Zoning
A SAN fabric must have four distinct zone classes:
Inter-node zones: For communication between nodes in the same system
Storage zones: For communication between the system and back-end storage
Host zones: For communication between the system and hosts
Inter-system zones: For remote replication
Figure 2-1 shows the system zoning classes.
The fundamental rules of system zoning are described in the rest of this section. However,
you must review the latest zoning guidelines and requirements when designing zoning for the
planned solution by reviewing the IBM Documentation information that is relevant to your IBM
FlashSystem platform. For example, for the IBM FlashSystem 9200 related information, go to
IBM FlashSystem 9200 documentation and expand Configuring → Configuration
details → SAN configuration and zoning rules summary.
NPIV mode creates a virtual worldwide port name (WWPN) for every system physical FC
port. This WWPN is available only for host connection. During node maintenance, restart, or
failure, the virtual WWPN from that node is transferred to the same port of the other node in
the I/O group.
For more information about NPIV mode and how it works, see Chapter 7, “Hosts” on
page 405.
Ensure that the FC switches give each physically connected system port the ability to create
four more NPIV ports.
When performing zoning configuration, virtual WWPNs are used only for host communication,
that is, “system to host” zones must include virtual WWPNs, and internode, intersystem, and
back-end storage zones must use the WWPNs of physical ports. Ensure that equivalent ports
(with the same port ID) are on the same fabric and in the same zone.
Traffic between nodes in one control enclosure is sent over a Peripheral Component
Interconnect Express (PCIe) connection over an enclosure backplane. However, for
redundancy, you must configure an inter-node SAN zone even if you have a single I/O group
system. For a system with multiple I/O groups, all traffic between control enclosures must
pass through a SAN.
A system node cannot have more than 16 fabric paths to another node in the same system.
All nodes in a system must connect to the same set of back-end storage system ports on
each device.
If the edge devices contain more stringent zoning requirements, follow the storage system
rules to further restrict the system zoning rules.
Note: Cisco Smart Zoning and Brocade Peer Zoning are supported. These features let you put
target ports and multiple initiator ports in a single zone for ease of management, but act the
same as though each initiator and target were configured in isolated zones. Using these
zoning techniques is supported for both host attachment and for storage virtualization. As
a best practice, use normal zones when configuring ports for clustering or for replication
because these functions require the port to be both an initiator and a target.
For more information about connecting back-end storage systems, see the IBM Documentation
information that is relevant to your IBM FlashSystem platform. For example, for
IBM FlashSystem 9200 related information, go to IBM FlashSystem 9200 documentation and
expand Configuring → Configuration details → External storage system configuration
details (Fibre Channel) and Configuring → Configuring and servicing storage
systems → External storage system configuration with Fibre Channel connections.
2.6.6 Host zones
A host must be zoned to an I/O group to access volumes that are presented by this I/O group.
The preferred zoning policy is single initiator zoning. To implement it, create a separate zone
for each host bus adapter (HBA) port, and place exactly one port from each node in each I/O
group that the host accesses in this zone. For deployments with more than 64 hosts that are
defined in the system, this host zoning scheme is mandatory.
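As an illustration only (the alias names are hypothetical, and the exact syntax depends on your switch vendor), a single initiator zone for one host HBA port on a Brocade fabric might be defined as follows:

zonecreate "host01_p0_iogrp0", "host01_hba_p0; fs_node1_p1_npiv; fs_node2_p1_npiv"

The zone contains one host HBA port and one virtual (NPIV) WWPN from each node canister of the I/O group that the host accesses.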
Note: Cisco Smart Zoning and Brocade Peer Zoning are supported. These features let you put
target ports and multiple initiator ports in a single zone for ease of management, but act the
same as though each initiator and target were configured in isolated zones. Using these
zoning techniques is supported for both host attachment and for storage virtualization. As
a best practice, use normal zones when configuring ports for clustering or for replication
because these functions require the port to be both an initiator and a target.
For smaller installations, you may have up to 40 FC ports (including both host HBA ports and
the system’s virtual WWPNs) in a host zone if the zone contains similar HBAs and operating
systems (OSs). A valid zone can be 32 host ports plus eight system ports.
Consider the following rules for zoning hosts over either SCSI or FC-NVMe:
For any volume, the number of paths through the SAN from the host to a system must not
exceed eight. For most configurations, four paths to an I/O group are sufficient.
In addition to zoning, you can use a port mask to control the number of host paths. For
more information, see 3.4.5, “Configuring the local Fibre Channel port masking” on
page 134.
Balance the host load across the system’s ports. For example, zone the first host with
ports 1 and 3 of each node in the I/O group, zone the second host with ports 2 and 4, and so
on. To obtain the best overall performance of the system, the load of each port should be
equal. Assuming that a similar load is generated by each host, you can achieve this
balance by zoning approximately the same number of host ports to each port.
Spread the load across all system ports. Use all ports that are available on your machine.
Balance the host load across HBA ports. If the host has more than one HBA port per
fabric, zone each host port with a separate group of system ports.
All paths must be managed by the multipath driver on the host side. Make sure that the
multipath driver on each server can handle the number of paths that is required to access all
volumes that are mapped to the host.
When designing zoning for a geographically dispersed solution, consider the effect of the
cross-site links on the performance of the local system.
Using mixed port speeds for intercluster communication can lead to port congestion, which
can negatively affect the performance and resiliency of the SAN. Therefore, it is not
supported.
Note: If you limit the number of ports that are used for remote replication to two ports on
each node, you can limit the effect of a severe and abrupt overload of the intercluster link
on system operations.
If all node ports (N_Ports) are zoned for intercluster communication and the intercluster
link becomes severely and abruptly overloaded, the local FC fabric can become congested
so that no FC ports on the local system can perform local intracluster communication,
which can result in cluster consistency disruption.
For more information about how to avoid such situations, see 2.6.8, “Port designation
recommendations” on page 81.
For more information about zoning best practices, see IBM FlashSystem 9200 and 9100 Best
Practices and Performance Guidelines, SG24-8448, and IBM System Storage SAN Volume
Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and Performance
Guidelines, SG24-7521.
Intra-cluster communication must be protected because it is used for heartbeat and metadata
exchange between all nodes of all I/O groups of the cluster.
In solutions with multiple I/O groups, upgrade nodes beyond the standard four FC port
configuration. This upgrade provides an opportunity to dedicate ports to local node traffic,
which separates them from other cluster traffic on the remaining ports.
Isolating remote replication traffic to dedicated ports is beneficial because it ensures that any
problems that affect the cluster-to-cluster interconnect do not affect all ports on the local
cluster.
To isolate both node-to-node and system-to-system traffic, use the port designations that are
shown in Figure 2-2.
To achieve traffic isolation, use a combination of SAN zoning and local and partner port
masking. For more information about how to set port masks, see Chapter 3, “Initial
configuration” on page 107.
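Port masks are binary strings in which the rightmost bit represents port 1 and a 1 allows the port to carry that type of traffic. As an illustration only (the mask values are placeholders for a node with at least six FC ports, not a recommendation for your hardware), masks of the following form might be set from the CLI:
IBM_IBM FlashSystem:superuser>chsystem -localfcportmask 0000000000001100
IBM_IBM FlashSystem:superuser>chsystem -partnerfcportmask 0000000000110000
In this hypothetical layout, ports 3 and 4 are reserved for node-to-node traffic, ports 5 and 6 for replication traffic, and the remaining ports are left for host and back-end storage traffic, which is controlled through zoning.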
Alternative port mappings that spread traffic across HBAs might allow adapters to come back
online after a failure. However, they do not prevent a node from going offline temporarily to
restart and attempt to isolate the failed adapter and then rejoin the cluster. Also, the mean
time between failures (MTBF) of the adapter is not significantly shorter than that of the
non-redundant node components. The approach that is presented here accounts for all these
considerations with the idea that increased complexity can lead to migration challenges in the
future, so a simpler approach is better.
Each node may also be configured with one, two, or three 2-port 25 Gbps RDMA-capable
Ethernet adapters. The maximum number of adapters depends on the system hardware type.
Adapters can auto-negotiate link speeds 1 - 25 Gbps. All their ports may be used for host I/O
with iSCSI or iSER, external storage virtualization with iSCSI, node-to-node traffic, and for IP
replication.
With IBM Spectrum Virtualize V8.4, support for 10 Gbps Finisar small form factor pluggable
(SFP) (Finisar FTLX8574D3BCL) on the Mellanox and Chelsio 25 Gbps Ethernet adapters is
introduced.
Note: At the time of writing, only the 10 Gbps Finisar SFP is supported on the 25 GbE
adapters. In all other instances, connecting a 10 Gbps switch to a 25 Gbps interface is
supported only through a SCORE request. For more information, contact your IBM
representative.
You can set virtual local area network (VLAN) settings to separate network traffic for Ethernet
transport. The system supports VLAN configurations for the system, host attachment, storage
virtualization, and IP replication traffic. VLANs can be used with priority flow control (PFC)
(IEEE 802.1Qbb).
All ports may be configured with an IPv4 address, an IPv6 address, or both. Each function
that uses a port needs a separate IP address. For example, port 1 of every node can be used
for management, iSCSI, and IP replication, but three unique IP addresses are required.
If node Ethernet ports are connected to different isolated networks, then a different subnet
must be used for each network.
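As a sketch only (the node name, port number, IP addresses, and VLAN ID are placeholders), an IP address and a VLAN can be assigned to a node Ethernet port by using the cfgportip command:
IBM_IBM FlashSystem:superuser>cfgportip -node node1 -ip 192.168.100.11 -mask 255.255.255.0 -gw 192.168.100.1 -vlan 100 2
This command configures port 2 of node1 for iSCSI traffic on VLAN 100. Repeat the command with different addresses for the partner node and for any other ports and functions that you plan to use.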
The iSER is a network protocol that extends iSCSI to use RDMA. RDMA is provided by either
the internet Wide Area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE).
It permits data to be transferred directly into and out of SCSI buffers, providing faster
connection and processing time than traditional iSCSI connections.
iSER requires optional 25 Gbps RDMA-capable Ethernet cards. RDMA links work only
between RoCE ports or between iWARP ports: from a RoCE node canister port to a RoCE
port on a host, or from an iWARP node canister port to an iWARP port on a host. So, there
are two types of 25 Gbps adapters that are available for a system, and they cannot be
interchanged without a similar RDMA type change on the host side.
Either type of 25 Gbps adapter works for standard iSCSI communications, that is,
connections that do not use RDMA.
The 25 Gbps adapters come with SFP28 fitted, which can be used to connect to switches that
use OM3 optical cables.
For more information about the Ethernet switches and adapters that are supported by iSER
adapters, see SSIC.
With IBM Spectrum Virtualize V8.4, support for 10 Gbps Finisar SFP (Finisar
FTLX8574D3BCL) on the Mellanox and Chelsio 25 Gbps Ethernet adapters is introduced.
Note: At the time of writing, only the 10 Gbps Finisar SFP is supported on the 25 Gbps
Ethernet adapters. In all other instances, connecting a 10 Gbps switch to a 25 Gbps
interface is supported only through a SCORE request. For more information, contact your
IBM representative.
You can configure a priority tag for each of these traffic classes. The priority tag can be any
value from 0 to 7. You can set identical or different priority tag values for these traffic classes.
You can also set bandwidth limits to ensure quality of service (QoS) for these traffic classes by
using the Enhanced Transmission Selection (ETS) setting on the network.
To use PFC and ETS, ensure that the following tasks are completed:
Configure a VLAN on the system to use PFC capabilities for the configured IP version.
Ensure that the same VLAN settings are configured on all entities, including all
switches between the communicating end points.
On the switch, enable Data Center Bridging Exchange (DCBx). DCBx enables switch and
adapter ports to exchange parameters that describe traffic classes and PFC capabilities.
For these steps, check your switch documentation for details.
For each supported traffic class, configure the same priority tag on the switch. For
example, if you plan to have a priority tag setting of 3 for storage traffic, ensure that the
priority is also set to 3 on the switch for that traffic type.
If you are planning on using the same port for different types of traffic, ensure that ETS
settings are configured on the network.
For more information, see the IBM Documentation information that is relevant to your
IBM FlashSystem platform. For example, for the IBM FlashSystem 9200 related information,
see IBM FlashSystem 9200 documentation and expand Configuring → Configuring
priority flow control.
A minimum of two dedicated RDMA-capable ports are required for node-to-node RDMA
communications to ensure best performance and reliability. These ports must be configured
for inter-node traffic only and cannot be used for host attachment, virtualization of
Ethernet-attached external storage, or IP replication traffic.
Note: RDMA clustering is not supported on IBM FlashSystem 5010 or IBM FlashSystem
5030.
The following limitations apply to a configuration of ports that are used for RDMA-clustering:
Only IPv4 addresses are supported.
Only the default value of 1500 is supported for the maximum transmission unit (MTU).
Port masking is not supported on RDMA-capable Ethernet ports. Due to this limitation, do
not exceed the maximum of four ports for node-to-node communications.
Node-to-node communications that use RDMA-capable Ethernet ports are not supported
in a network configuration that contains more than two hops in the fabric of switches.
Some environments might not include a stretched layer 2 subnet. In such scenarios, a
layer 3 network such as in standard topologies or long-distance RDMA node-to-node
HyperSwap configurations is applicable. To support the layer 3 Ethernet network, the
unicast discovery method can be employed for RDMA node-to-node communication. This
method relies on unicast-based fabric discovery rather than multicast discovery. To
configure unicast discovery, see the man pages for the addnodediscoverysubnet,
rmnodediscoverysubnet, or lsnodediscoverysubnet commands.
For more information, see the IBM Documentation information that is relevant to your
IBM FlashSystem platform. For example, for the IBM FlashSystem 9200 related information,
go to IBM FlashSystem 9200 documentation and expand Configuring → Configuration
details → Configuration details for using RDMA-capable Ethernet ports for
node-to-node communications.
Note: Before you configure a system that uses RDMA-capable Ethernet ports for
node-to-node communications in a standard or HyperSwap topology system, contact your
IBM representative.
To avoid a SPOF, a dual-switch configuration is recommended. For full redundancy, a
minimum of two paths between each initiator node and target node must be configured
with each path going through a separate switch.
Extra paths can be configured to increase throughput if both initiator and target nodes
support more ports.
All planning and implementation aspects of external storage virtualization with iSCSI are
described in detail in iSCSI Implementation and Best Practices on IBM Storwize Storage
Systems, SG24-8327.
For each Ethernet port on a node, a maximum of one IPv4 address and one IPv6 address can
be designated for iSCSI or iSER I/O. You can configure the internet Storage Name Service
(iSNS) to facilitate a scalable configuration and management of iSCSI storage devices.
The same ports can be used for iSCSI and iSER host attachment concurrently; however, a
single host can establish either an iSCSI or an iSER session, but not both.
iSCSI or iSER hosts connect to the system through IP addresses, which are assigned to the
Ethernet ports of the node. If the node fails, the address becomes unavailable and the host
loses communication with the system through that node. To allow hosts to maintain access to
data, the node-port IP addresses for the failed node are transferred to the partner node in the
I/O group. The partner node handles requests for both its own node-port IP addresses and
also for node-port IP addresses on the failed node. This process is known as node-port IP
failover. In addition to node-port IP addresses, the iSCSI name and iSCSI alias for the failed
node are also transferred to the partner node. After the failed node recovers, the node-port IP
address and the iSCSI name and alias are returned to the original node.
Note: The cluster name and node name form parts of the iSCSI name. Changing either of
them requires reconfiguration of all iSCSI hosts that communicate with the system.
iSER supports only one-way authentication through the Challenge Handshake Authentication
Protocol (CHAP). iSCSI supports two types of CHAP authentication: one-way authentication
(iSCSI target authenticating iSCSI initiators) and two-way (mutual) authentication (iSCSI
target authenticating iSCSI initiators, and vice versa).
For more information about iSCSI host attachment, see iSCSI Implementation and Best
Practices on IBM Storwize Storage Systems, SG24-8327.
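For illustration (the host name and secrets are placeholders, and the available parameters can vary by code level), one-way CHAP is typically configured by assigning a CHAP secret to the host object, and a system-wide secret can be set for mutual authentication:
IBM_IBM FlashSystem:superuser>chhost -chapsecret host1secret ESX_Host01
IBM_IBM FlashSystem:superuser>chsystem -iscsiauthmethod chap -chapsecret systemsecret
The same secrets must also be configured on the host's iSCSI initiator for the sessions to be established.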
Make sure that iSCSI initiators, host iSER adapters, and Ethernet switches that are attached
to the system are supported by using SSIC.
IP replication is supported on both onboard 10 Gbps Ethernet ports and optional 25 Gbps
Ethernet ports. However, when configured over 25 Gbps ports, it does not use RDMA
capabilities, and it does not provide a performance improvement compared to 10 Gbps ports.
Specific intersite link requirements must be met when you are planning to use IP partnership
for RC. These requirements are described in the IBM Documentation information that is
relevant to your IBM FlashSystem platform. For example, for the IBM FlashSystem 9200
related information, go to IBM FlashSystem 9200 documentation and select Configuring →
Configuring IP partnerships → Intersite link planning. Also, see Chapter 10, “Advanced
Copy Services” on page 553.
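As a sketch (the partner cluster IP address and bandwidth values are placeholders; verify the parameters against the command reference for your code level), an IP partnership is created on both systems with the mkippartnership command:
IBM_IBM FlashSystem:superuser>mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 1000 -backgroundcopyrate 50
The -linkbandwidthmbits value should reflect the real capacity of the intersite link, and the background copy rate defines the percentage of that bandwidth that initial synchronization is allowed to consume.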
For a list of mandatory and optional network flows that are required for operating the system,
see the IBM Documentation information relevant to your IBM FlashSystem platform. For example, for the
IBM FlashSystem 9200 related information, go to IBM FlashSystem 9200 documentation and
expand Planning → Planning for hardware → Physical installation planning → IP
address allocation and usage.
The HyperSwap topology uses extra system resources to support a full independent cache
on each site, enabling full performance even if one site is lost.
With the three-site replication topology, data is replicated from the primary site or production
site to two alternative sites. This feature ensures that if a disaster situation occurs at any one
of the sites, the remaining two sites can establish a consistent replication operation with
minimal data transfer. The RC relationships are synchronous or asynchronous, depending on
which site failed.
The three-site replication topology places three I/O groups at three different sites. It can
ensure that a minimum of two copies of the data are always available.
Note: Make sure that the planned configuration is reviewed by IBM or an IBM Business
Partner before implementation. Such a review can both increase the quality of the final
solution and prevent configuration errors that might impact solution delivery.
Note: IBM FlashSystem 5010 and IBM FlashSystem 5030 support external virtualization
for migration purposes only.
The back-end storage subsystem configuration must be planned for all external storage
systems that are attached. Apply the following general guidelines:
Most of the supported FC-attached storage controllers must be connected through an FC
SAN switch. However, a limited number of systems (including IBM FlashSystem 900 and
members of the Storwize, IBM FlashSystem 5000, IBM FlashSystem 7000, and
IBM FlashSystem 9000 family) can be direct-attached by using FC.
Connect all back-end storage ports to the SAN switch (up to a maximum of 16 ports) and zone
them to all of the system's ports to maximize bandwidth. The system is designed to handle many
paths to the back-end storage.
In general, configure back-end controllers as though they are used as stand-alone systems.
However, there might be specific requirements or limitations as to the features that are usable
in the specific back-end storage system. For more information about the requirements that
are specific to your back-end controller, see the IBM Documentation information that is
relevant to your IBM FlashSystem platform. For example, for the IBM FlashSystem 9200
related information, go to IBM FlashSystem 9200 documentation and expand Configuring →
Configuring and servicing storage systems.
The system’s large cache and advanced cache management algorithms also allow it to
improve the performance of many types of underlying disk technologies. Because hits to the
cache can occur in the upper (the system itself) and the lower (back-end controller) level of
the overall solution, the solution as a whole can use the larger amount of cache wherever it is.
Therefore, the system’s cache also provides more performance benefits for back-end storage
systems with extensive cache banks.
However, the system cannot increase the throughput potential of the underlying disks in all
cases. The performance benefits depend on the underlying back-end storage technology and
the workload characteristics, including the degree to which the workload exhibits hotspots or
sensitivity to cache size or cache algorithms.
2.10 Internal storage configuration
For general-purpose storage pools with various I/O applications, follow the storage
configuration wizard recommendations in the GUI. For specific applications with known I/O
patterns, use the CLI to create arrays that suit your needs.
An array-level recommendation for all types of internal storage except storage-class memory
(SCM) is DRAID 6, which outperforms other available RAID levels in most applications while
providing fault tolerance and high rebuild speeds.
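For example, a DRAID 6 array might be created from the CLI as follows (the drive class ID, drive count, stripe width, and pool name are placeholders that must match your configuration; run lsdriveclass first to identify the correct drive class):
IBM_IBM FlashSystem:superuser>mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -stripewidth 10 -rebuildareas 1 Pool0
This command builds a 12-drive DRAID 6 array with one distributed rebuild area and adds it to storage pool Pool0.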
In specific IBM FlashSystem configurations, for example, small SCM or flash arrays, the
newly introduced DRAID 1 feature is suggested because it allows for high I/O performance
due to all member drives participating in the I/O and the optimized I/O path for multi-core
CPUs. It also provides fast rebuild times on smaller arrays due to the distributed rebuild area.
With IBM Spectrum Virtualize V8.4, up to 12 SCM drives are supported in IBM FlashSystem
enclosures. DRAID 1 is recommended for best performance.
DRAID 1 is the only DRAID level that can be configured without a rebuild area. It supports
arrays with a minimum of two member drives and is limited to 16 member drives (after
expansion). Initially, start with six or fewer member drives. Based on the anticipated capacity
(current and future), consider whether to start with a DRAID 1 array or plan for a DRAID 6
array (which can expand even further).
Important:
DRAID 1 is not recommended with two member drives (and no rebuild area) for HDDs
of any size.
DRAID 1 is not recommended with two member drives (and no rebuild area) for SSDs
(either SAS, FCM, or NVMe) larger than 20 TB of physical capacity.
DRAID 1 is not recommended with two member drives (and no rebuild area) for SCMs
larger than 8 TB of physical capacity.
DRAID 1 is not recommended with three to six member drives (and one rebuild area)
for HDDs larger than 8 TB of physical capacity.
DRAID 1 supports only a single rebuild area per 3 - 16 member drives.
Due to their mirrored nature, DRAID 1 arrays can use only half of the array's capacity for data.
DRAID 6 can achieve better capacity utilization ratios.
At the time of writing, DRAID 1 is supported only on the existing IBM FlashSystem 9200
(AG8/UG8) and IBM FlashSystem 7200 (824/U7C) platforms. Traditional RAID (TRAID) 1 is
still supported on IBM FlashSystem 5100, IBM FlashSystem 5030, and IBM FlashSystem
5010.
Figure 2-3 provides some planning guidance for the recommended DRAID configuration
based on the number of array member drives.
For more information about internal storage configuration, see IBM FlashSystem 9200 and
9100 Best Practices and Performance Guidelines, SG24-8448 and IBM System Storage SAN
Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and
Performance Guidelines, SG24-7521.
Summary of supported array types and RAID levels
IBM FlashSystem systems support FCM NVMe drives, industry standard NVMe drives, SCM
drives with NVMe architecture, and SAS drives that are within expansion enclosures. The
type and level of arrays vary depending on the type of drives in the I/O group.
Table 2-3 summarizes the supported levels. For storage arrays with fewer than seven drives,
DRAID 1 is recommended because it offers enhanced resiliency over DRAID 6 arrays. DRAID
6 is recommended for storage arrays with seven or more drives because it can handle two
concurrent drive failures.
Table 2-3 Summary of supported drives, array types, and RAID levels
The system supports two types of pools: standard pools and Data Reduction Pools (DRP).
The type is configured when a pool is created and it cannot be changed later. The type of the
pool determines the set of features that is available on the system:
A feature that can be implemented only with standard pools is VMware vSphere
integration with VMware vSphere Virtual Volumes (VVOLs).
Features that can be implemented only with DRPs are:
– Automatic capacity reclamation with SCSI UNMAP (This feature returns capacity that
is marked as no longer used by a host back to the storage pool.)
– DRP compression (in-flight data compression)
– DRP deduplication
– FlashCopy with redirect-on-write (RoW)
Note: FlashCopy with RoW is usable only for volumes with supported deduplication
without mirroring relationships and within the same pool and I/O group. Automatic mode
selection (RoW/copy-on-write (CoW)) is based on these conditions.
In addition to providing data reduction options, DRP amplifies the I/O and CPU workload,
which should be accounted for during performance sizing and planning.
Also, self-compressing drives (FCM drives) still perform compression independently of the
pool type.
Another base storage pool parameter is the extent size. There are two implications of a
storage pool extent size:
The maximum volume size, MDisk size, and managed storage capacity depend on the extent size.
The bigger the extent that is defined for the specific pool, the larger the maximum size of
this pool, the maximum MDisk size in the pool, and the maximum size of a volume that is
created in the pool.
The volume sizes must be a multiple of the extent size of the pool in which the volume is
defined. Therefore, the smaller the extent size, the better control that you have over the
volume size.
The system supports extent sizes of 16 - 8192 mebibytes (MiB). The extent size is a property
of the storage pool, and it is set when the storage pool is created.
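As an illustrative sketch (the pool names and extent sizes are placeholders), both the pool type and the extent size are specified when the pool is created with the mkmdiskgrp command:
IBM_IBM FlashSystem:superuser>mkmdiskgrp -name Pool0 -ext 1024
IBM_IBM FlashSystem:superuser>mkmdiskgrp -name DRP0 -ext 4096 -datareduction yes
The first command creates a standard pool with a 1024 MiB extent size; the second creates a DRP with a 4096 MiB extent size.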
Note: The base pool parameters, pool type, and extent size are set during pool creation
and cannot be changed later. If you need to change the extent size or pool type, all
volumes must be migrated from a storage pool and then the pool itself must be deleted and
re-created.
When planning pools, note that encryption is defined at the pool level and the encryption setting
cannot be changed after a pool is created. If you create an unencrypted pool, there is no way
to encrypt it later. Your only option is to delete it and re-create it as encrypted.
2.11.1 Child pools
Instead of being created directly from MDisks, child pools are created from existing usable
capacity that is assigned to a parent pool. As with parent pools, volumes can be created that
specifically use the usable capacity that is assigned to the child pool. Child pools are similar
to parent pools, have similar properties, and can be used for volume copy operations.
When a standard child pool is created, the usable capacity for a child pool is reserved from
the usable capacity of the parent pool. The usable capacity for the child pool must be smaller
than the usable capacity in the parent pool. After the child pool is created, the amount of
usable capacity that is specified for the child pool is no longer reported as usable capacity of
its parent pool.
When a data reduction child pool is created, the usable capacity for the child pool is the entire
usable capacity of the data reduction parent pool without limit. After a data reduction child
pool is created, the usable capacity of the child pool and the usable capacity of the parent
pool are reported as the same.
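For illustration (the pool names and capacity are placeholders), a standard child pool is created by specifying the parent pool and the capacity to reserve:
IBM_IBM FlashSystem:superuser>mkmdiskgrp -name ChildPool0 -parentmdiskgrp Pool0 -size 500 -unit gb
After the command completes, the reserved 500 GB is no longer reported as usable capacity of Pool0.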
A number of administration tasks benefit from being able to define and work with a part of a
pool. For example, the system supports VVOLs, which are used in VMware vCenter and
vSphere APIs for Storage Awareness (VASA) applications. Before a child pool can be used
for virtual volumes for these applications, the system must be enabled for virtual volumes.
Consider the following general guidelines when you create or work with a child pool:
The management GUI displays only the capacity details for child and migration pools.
Child pools can be created and changed with the CLI or GUI.
When using child pools with standard pools, you can specify a warning threshold that
alerts you when the used capacity of the child pool is reaching its upper limit. Use this
threshold to ensure that access is not lost when the used capacity of the child pool is close
to its usable capacity.
On systems with encryption enabled, standard child pools can be created to migrate
existing volumes in a non-encrypted pool to encrypted child pools. When you create a
standard child pool after encryption is enabled, an encryption key is created for the child
pool even when the parent pool is not encrypted. You can then use volume mirroring to
migrate the volumes from the non-encrypted parent pool to the encrypted child pool.
Encrypted data reduction child pools can be created only if the parent pool is encrypted.
The data reduction child pool inherits an encryption key from the parent pool.
Ensure that any child pools that are associated with a parent pool have enough usable
capacity for the volumes that are in the child pool before removing MDisks from a parent
pool. The system automatically migrates all extents that are used by volumes to other
MDisks in the parent pool to ensure that data is not lost.
You cannot shrink the usable capacity of a child pool below its used capacity. The system
also resets the warning level when the child pool is shrunk and issues a warning if the
level is reached when the usable capacity is shrunk.
The system supports migrating a copy of volumes between child pools within the same
parent pool or migrating a copy of a volume between a child pool and its parent pool.
Migrations between a source and target child pool with different parent pools are not
supported. However, you can migrate a copy of the volume from the source child pool to its
parent pool. The volume copy can then be migrated from the parent pool to the parent pool
of the target child pool. Finally, the volume copy can be migrated from the target parent
pool to the target child pool.
Ownership can be defined explicitly or it can be inherited from the user, user group, or from
other parent resources, depending on the type of resource. Ownership of child pools must be
assigned explicitly, and they do not inherit ownership from other parent resources. New or
existing volumes that are defined in the child pool inherit the ownership group that is assigned
for the child pool.
For more information about ownership groups, see Chapter 11, “Ownership groups” on
page 723.
Number of storage pools    Upper limit of write-cache capacity per storage pool
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%
No single partition can occupy more than its upper limit of write cache capacity. When the
maximum cache size is allocated to the pool, the system starts to limit incoming write I/Os for
volumes that are created from the storage pool. The host writes are limited to the destage
rate on a one-out-one-in basis.
Only writes that target the affected storage pool are limited. The read I/O requests for the
throttled pool continue to be serviced normally. However, because the system is offloading
cache data at the maximum rate that the back-end storage can sustain, read response times
are expected to be affected.
All I/O that is destined for other (non-throttled) storage pools continues as normal.
2.12 Volume configuration
When planning a volume, consider the required performance, availability, and capacity. Every
volume is assigned to an I/O group that defines which pair of system nodes services I/O
requests to the volume.
Note: No fixed relationship exists between I/O groups and storage pools.
When a host sends I/O to a volume, it can access the volume with either of the nodes in the
I/O group, but each volume has a preferred node. Many of the multipathing driver
implementations that the system supports use this information to direct I/O to the preferred
node. The other node in the I/O group is used only if the preferred node is not accessible.
During volume creation, the system selects the node in the I/O group that has the fewest
volumes to be the preferred node. After the preferred node is chosen, it can be changed
manually, if required.
Strive to distribute volumes evenly across available I/O groups and nodes within the system.
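As a sketch only (the volume name, pool, size, and node names are placeholders), a volume can be created in a specific I/O group, and its preferred node can be changed later if the automatic selection does not match your intended load distribution:
IBM_IBM FlashSystem:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name vol01
IBM_IBM FlashSystem:superuser>movevdisk -node node2 vol01
The movevdisk command changes only the preferred node within the same I/O group; hosts pick up the change through their multipath drivers.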
For more information about volume types, see Chapter 6, “Volumes” on page 299.
Image mode volumes are a useful tool for storage migration and for introducing the system
into an existing working environment.
Fully allocated volumes provide the best performance because they do not cause I/O
amplification, and they require less CPU time compared to other volume types.
Using the thin-provisioned volume feature that is called zero detect, you can reclaim unused
allocated disk space (zeros) when you convert a fully allocated volume to a thin-provisioned
volume by using volume mirroring.
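A hedged sketch of this conversion (the volume name, pool, and real-size value are placeholders):
IBM_IBM FlashSystem:superuser>addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -autodelete vol01
This command adds a thin-provisioned copy of vol01 in which zero detect prevents allocated but zero-filled space from consuming capacity, and -autodelete removes the original fully allocated copy after the two copies are synchronized.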
Compression is available through data reduction support as part of the system. If you want
volumes to use compression as part of data reduction support, compressed volumes must
belong to DRPs.
If you use compressed volumes over a pool with self-compressing drives, the drive still
attempts compression because it cannot be disabled on the drive level. However, there is no
performance impact due to the algorithms that FCM uses to manage compression.
Before implementing compressed volumes, perform data analysis to discover your average
compression ratio and ensure that performance sizing was done for compression.
IBM Spectrum Virtualize V8.4 introduces the Comprestimation Always On feature, which
continuously estimates the compressibility of all VDisks so that compressibility
estimations are always available. This feature is on by default.
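If you want to trigger or review an estimate manually, the following commands can be used (vol01 is a placeholder name, and the output columns vary by code level):
IBM_IBM FlashSystem:superuser>analyzevdiskbysystem
IBM_IBM FlashSystem:superuser>lsvdiskanalysis vol01
The analyzevdiskbysystem command queues a compressibility analysis of every volume in the system, and lsvdiskanalysis reports the estimated compression savings per volume.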
Note: If you use compressed volumes over FCM drives, the compression ratio on a drive
level must be assumed to be 1:1 to avoid array overprovisioning and running out of space.
With deduplication, the system identifies unique chunks of data, called signatures, to
determine whether new data is already stored on the system. Deduplication is a hash-based
solution, which means chunks of data are compared to their signatures rather than to the data itself. If
the signature of the new data matches an existing signature that is stored on the system, then
the new data is replaced with a reference. The reference points to the stored data instead of
writing the data to storage. This process saves the capacity of the back-end storage by not
writing new data to storage, and it might improve the performance of read operations to data
that has an existing signature.
The same data pattern can occur many times, and deduplication decreases the amount of
data that must be stored on the system. A part of every hash-based deduplication solution is
a repository that supports looking up matches for incoming data. The system contains a
database that maps the signature of the data to the volume and its virtual address. If an
incoming write operation does not have a signature that is stored in the database, then a
duplicate is not detected and the incoming data is stored on back-end storage.
To maximize the space that is available for the database, the system distributes this
repository between all nodes in the I/O groups that contain deduplicated volumes. Each node
carries a distinct portion of the records that are stored in the database. If nodes are removed
or added to the system, the database is redistributed between the nodes to ensure full use of
the available memory.
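As an illustrative sketch (the volume name, pool, and size are placeholders, and the pool must be a DRP), a deduplicated, thin-provisioned volume might be created as follows:
IBM_IBM FlashSystem:superuser>mkvolume -name dedupvol01 -pool DRP0 -size 500 -unit gb -thin -deduplicated
To also compress the volume, specify -compressed in place of -thin.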
Note: With IBM Spectrum Virtualize V8.4, FC-NVMe host attachment with HyperSwap
configurations is supported.
For more information about how to calculate the correct host queue depth for your
environment, see the IBM Documentation information that is relevant to your
IBM FlashSystem platform. For example, for the IBM FlashSystem 9200 related information,
go to IBM FlashSystem 9200 documentation and expand Configuring → Host attachment.
For best performance, split each host group into two sets. For each set, configure the
preferred access node for volumes that are presented to the host set to one of the I/O group
nodes. This approach helps to evenly distribute load between the I/O group nodes.
Note: A volume can be mapped only to a host that is associated with the I/O group to
which the volume belongs.
The IBM FlashSystem supports end-to-end UNMAP compatibility, which means that an
UNMAP command that is issued by a host is processed and passed down to the back-end
storage device or drive.
Host UNMAP support is enabled by default on FlashSystem 9100 and 9200 and disabled by
default on all other IBM FlashSystem family systems.
Thorough planning is required if you want to switch host UNMAP support on. Enabling it
allows you to benefit fully from the capacity reclamation features in Data Reduction Pools, but
host UNMAP requests might overload the IBM FlashSystem back end if it contains spinning
disks, especially NL-SAS drives, causing serious performance problems.
Back-end UNMAP is enabled by default on all IBM FlashSystem platforms, and it is a best
practice to keep it turned on for most use cases.
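If host UNMAP is appropriate for your configuration, the setting can be changed from the CLI. The following line is a sketch only; the -hostunmap parameter name and values are assumed here, so confirm them against the chsystem command reference for your code level, and use lssystem to verify the setting before and after the change:
IBM_IBM FlashSystem:superuser>chsystem -hostunmap on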
2.14 Planning copy services
IBM FlashSystem systems offer a set of copy services, such as IBM FlashCopy (snapshots)
and RC, in synchronous and asynchronous modes. For more information about copy
services, see Chapter 10, “Advanced Copy Services” on page 553.
While the FlashCopy operation is performed, the source volume is stopped briefly to initialize
the FlashCopy bitmap, and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately.
The FlashCopy function operates at the block level below the host OS and cache, so those
levels must be flushed by the OS for a FlashCopy copy to be consistent.
When you use the FlashCopy function, observe the following guidelines:
Both the FlashCopy source and target volumes should use the same preferred node.
If possible, keep the FlashCopy source and target volumes on separate storage pools.
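A minimal sketch of creating and starting a FlashCopy mapping from the CLI (the volume names and copy rate are placeholders; the target volume must exist and match the source size):
IBM_IBM FlashSystem:superuser>mkfcmap -source vol01 -target vol01_copy -copyrate 50
IBM_IBM FlashSystem:superuser>startfcmap -prep 0
The -prep option prepares the mapping by flushing the system cache for the source volume before mapping 0 is triggered; the host-level cache must still be flushed by the OS, as noted earlier.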
With IBM Spectrum Virtualize V8.4, a FlashCopy with RoW mechanism is available with
DRPs. FlashCopy with RoW uses the DRP internal deduplication referencing capabilities to
reduce overheads by creating references instead of copying the data. It provides for better
performance and reduces back-end I/O amplification for FlashCopies and snapshots.
Note: FlashCopy with RoW is usable only for volumes with supported deduplication
without mirroring relationships and within the same pool and I/O group. Automatic mode
selection (RoW/CoW) is based on these conditions.
For more information about planning for the FlashCopy function, see IBM FlashSystem 9200
and 9100 Best Practices and Performance Guidelines, SG24-8448 and IBM System Storage
SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and
Performance Guidelines, SG24-7521.
GM is a copy service that is similar to MM, but copies data asynchronously. You do not have
to wait for the write to the secondary system to complete. For long distances, performance is
improved compared to MM. However, if a failure occurs, you might lose data.
GM uses one of two methods to replicate data. Multicycling GM is designed to replicate data
while adjusting for bandwidth constraints. It is appropriate for environments where it is
acceptable to lose a few minutes of data if a failure occurs.
For environments with higher bandwidth, non-cycling GM can be used so that less than a
second of data is lost if a failure occurs. GM also works well when sites are more than 300
kilometers (186.4 miles) apart.
2.15 Data migration
Data migration is an important part of an implementation, so you must prepare a detailed data
migration plan. You might need to migrate your data for one of the following reasons:
Redistribute a workload within a clustered system across back-end storage subsystems.
Move a workload on to newly installed storage.
Move a workload off old or failing storage ahead of decommissioning it.
Move a workload to rebalance a changed load pattern.
Migrate data from an older disk subsystem.
Migrate data from one disk subsystem to another one.
Because multiple data migration methods are available, choose the method that best fits your
environment, OS platform, type of data, and the application’s service-level agreement (SLA).
For more information about system data migration tools, see Chapter 8, “Storage migration”
on page 485 and Chapter 10, “Advanced Copy Services” on page 553.
Available at no additional charge, the cloud-based IBM Storage Insights product provides a
single dashboard that gives a clear view of all your IBM block storage. You can make
better decisions by seeing trends in performance and capacity.
With storage health information, you can focus on areas that need attention. When IBM
support is needed, IBM Storage Insights simplifies uploading logs, speeds resolution with
online configuration data, and provides an overview of open tickets all in one place.
IBM Storage Insights provides a unified view of IBM systems. By using it, you can see all of
your IBM storage inventory as a live event feed so that you know what is going on with your
storage.
IBM Storage Insights provides advanced customer service with an event filter that provides
the following functions:
The ability for you and support to view support tickets and open and close them, and to
track trends.
With the auto log collection capability, you can collect the logs and send them to IBM
before IBM Support starts looking into the problem. This feature can reduce the time to
solve the case by as much as 50%.
Figure 2-4 shows the architecture of the IBM Storage Insights application, the supported
products, and the three main teams who can benefit from the use of the tool.
IBM Storage Insights provides a lightweight data collector that is deployed on a Linux,
Windows, or AIX server or a guest in a virtual machine (VM) (for example, a VMware guest).
The data collector streams performance, capacity, asset, and configuration metadata to your
IBM Cloud instance.
The metadata flows in one direction, that is, from your data center to IBM Cloud over HTTPS.
In the IBM Cloud, your metadata is protected by physical, organizational, access, and security
controls. IBM Storage Insights is ISO/IEC 27001 Information Security Management certified.
To monitor storage systems, you must provide a username and password to log in to the
storage systems. The role or user group that is assigned to the username must have the
appropriate privileges to monitor the data that is collected. As of IBM Spectrum Virtualize
V8.3.1.2 with IBM Storage Insights or IBM Spectrum Control V5.3.7 or later, data collection
can be done with the Monitor (least privileged) role.
Figure 2-5 shows the data flow from systems to the IBM Storage Insights cloud.
Figure 2-5 Data flow from the storage systems to the IBM Storage Insights cloud
Metadata about the configuration and operations of storage resources is collected, such as:
Name, model, firmware, and type of storage system
Inventory and configuration metadata for the storage system's resources, such as
volumes, pools, disks, and ports
Capacity values, such as capacity, unassigned space, used space, and the compression
ratio
Performance metrics, such as read and write data rates, I/O rates, and response times
The application data that is stored on the storage systems cannot be accessed by the data
collector.
For more information, see 13.12, “IBM Storage Insights monitoring” on page 865.
Chapter 3. Initial configuration
Note: IBM FlashSystem 9100 and IBM FlashSystem 9200 are installed by an IBM System
Services Representative (IBM SSR). You must provide all the necessary information to the
IBM SSR by filling out the planning worksheets, which can be found in IBM FlashSystem
9200 documentation by selecting Planning → Planning worksheets (customer task).
After the IBM SSR completes their task, continue the setup by following the instructions in
3.3, “System setup” on page 113.
Before initializing and setting up the system, ensure that the following prerequisites are met:
The physical components fulfill all the requirements and are correctly installed, including:
– The control enclosures are physically installed in the racks.
– The Ethernet and Fibre Channel (FC) cables are connected.
– The expansion enclosures, if available, are physically installed and attached to the
control enclosures that will use them.
– The system control enclosures and optional expansion enclosures are powered on.
The web browser that is used for managing the system is supported by the management
GUI. For the list of supported browsers, see Supported Browsers.
You have the required information, which can be found in IBM Documentation, including:
– The IPv4 (or IPv6) addresses that are assigned for the system’s management
interfaces:
• The unique cluster IP address, which is the address that is used for the
management of the system.
• Unique service IP addresses, which are used to access node service interfaces.
You need one address for each node (two per control enclosure).
• The IP subnet mask for each subnet that is used.
• The IP gateway for each subnet that is used.
– The licenses that might be required to use particular functions (depending on the
system type):
• Remote Copy (RC).
• External virtualization.
• IBM FlashCopy.
• Compression.
• Encryption.
– Information that is used by a system when performing Call Home functions, such as:
• The company name and system installation address.
• The name, email address, and phone number of the storage administrator whom
IBM can contact if necessary.
– (optional) The Network Time Protocol (NTP) server IP address.
– (optional) The Simple Mail Transfer Protocol (SMTP) server IP address, which is
necessary only if you want to enable Call Home or want to be notified about system
events through email.
– (optional) The IP addresses for Remote Support Proxy Servers, which are required
only if you want to use them with the Remote Support Assistance feature.
On IBM FlashSystem 5010, the technician port is enabled initially, but after the setup wizard is
complete, the port is switched to internet Small Computer Systems Interface (iSCSI) host
attachment mode. However, to re-enable the onboard Ethernet port 2 on a system to be used
as the technician port, run the command that is shown in Example 3-1.
Example 3-1 Reenabling the onboard Ethernet port 2 as the technician port
IBM_IBM FlashSystem 5010:superuser>satask chserviceip -techport enable -force
The location of the technician port on Storwize V7000 Gen2 is shown in Figure 3-2.
The location of the technician port on IBM FlashSystem 5010 is shown in Figure 3-4.
The technician port runs an IPv4 DHCP server, and it can assign an address to any device
that is connected to this port. Ensure that your PC or Notebook Ethernet adapter is
configured to use a DHCP client if you want the IP to be assigned automatically. If you prefer
not to use DHCP, you can set a static IP on the Ethernet port from the 192.168.0.x/24
subnet, for example, 192.168.0.2 with the netmask 255.255.255.0.
The default IP address of a technician port on a node canister is 192.168.0.1. Do not use this
IP address for your PC or Notebook.
Note: Ensure that the technician port is not connected to the organization’s network. No
Ethernet switches or hubs are supported on this port.
You must specify IPv4 or IPv6 system management addresses, which are assigned to
Ethernet port 1 on each node and are used to access the management GUI and CLI. After the
system is initialized, you can specify other IP addresses.
Note: Do not perform the system initialization procedure on more than one node canister
of one control enclosure. After initialization completes, use the management GUI or CLI to
add control enclosures to the system.
To do the initialization of a new system, complete the following steps:
1. Connect your PC or Notebook to a technician port of any canister of the control enclosure.
Ensure that you obtained a valid IPv4 address with DHCP.
2. Open a supported web browser and go to http://install. The browser is automatically
redirected to the System Initialization wizard. You can also use the IP address
http://192.168.0.1 if you are not automatically redirected.
Note: During the system initialization, you are prompted to accept untrusted certificates
because the system certificates are self-signed. If you are directly connected to the
service interface, there is no doubt about the identity of the certificate issuer, so you can
safely accept the certificates.
If the system is not in a state that allows initialization, you are redirected to the Service
Assistant interface. Use the displayed error codes to troubleshoot the problem.
3. The Welcome dialog box opens, as shown in Figure 3-5. Click Next to start the procedure.
Figure 3-6 System Initialization: Create a system or expand the existing one
6. Click Next.
7. A window with a restart timer opens. When the timeout is reached, you can click Next to see
the final initialization window, as shown in Figure 3-8. Follow the instructions; after you click
Finish, the browser is redirected to the management IP address so that you can access the
system GUI.
If you cannot connect to a network that has access to the management IP, you can
continue the system setup from any other workstation that can reach it.
The first time that you connect to the management GUI, you are prompted to accept
untrusted certificates because the system certificates are self-signed. If your company policy
requests certificates that are signed by a trusted certificate authority (CA), you can install
them after you complete the system setup. For more information about how to perform this
task, see 3.5.1, “Configuring secure communications” on page 139.
Note: The default password for the superuser account is passw0rd (with the number
zero and not the capital letter O). The default password must be changed by using the
system setup wizard or after the first CLI login. The new password cannot be set to the
default one.
2. The welcome window opens, as shown in Figure 3-10. Verify the prerequisites and click
Next.
3. Carefully read the license agreement, select I agree with the terms in the license
agreement if you want to continue the setup, as shown in Figure 3-11, and click Next.
Figure 3-12 System Setup: Changing the password for the superuser
Note: All configuration changes that are done with the system setup wizard are applied
immediately, including the password change.
5. Enter the name that you want to give the new system, as shown in Figure 3-13. Click
Apply and then Next.
Avoid using an underscore (_) in a system name. While permitted here, it is not allowed in
Domain Name System (DNS) short names and fully qualified domain names (FQDNs), so
such naming might cause confusion and access issues. The following characters can be
used: A - Z, a - z, 0 - 9, and - (hyphen).
Note: In a 3-Site replication solution, to prepare the IBM Spectrum Virtualize clusters at
Master, AuxNear, and AuxFar sites to work, make sure that the system name is unique
for all three clusters. The system names must remain different through the life of the
3-Site configuration.
If required, the system name can be changed by running the chsystem -name
<new_system_name> command.
6. Enter the number of licensed enclosures or licensed capacity for each function, as shown
in Figure 3-14.
Note: IBM FlashSystem 5010 and IBM FlashSystem 5030 work with Licensed Internal
Code (LIC). All licenses are controller-based. There is no capacity or enclosure
licensing.
The IBM FlashSystem 5100 system follows an enclosure-based licensing scheme that
allows the use of certain licensed functions on the number of enclosures (control and
expansion) that is indicated in the license.
IBM FlashSystem 7200, IBM FlashSystem 9100, and IBM FlashSystem 9200 systems use
differential and capacity-based licensing. For external virtualization, differential licensing
offers different pricing rates for different types of storage and is based on the number of
storage capacity units (SCUs) that are purchased. For other licensed functions, the
system supports capacity-based licensing.
Make sure that the numbers you enter here match the numbers in your license
authorization papers. For more information, see 1.17, “Licensing” on page 66.
When done, click Apply and then Next.
Note: Encryption uses a key-based licensing scheme, and it is activated later in the
wizard.
If you choose to manually enter these settings, you are prompted to input the date, time,
and time zone, or you can take those settings from your web browser. You cannot use a
24-hour clock system here, but you can switch to it later by using the system GUI.
When the data is set, click Apply and then Next.
8. Select whether the encryption feature was purchased for this system, as shown in
Figure 3-16.
If encryption is not planned at this moment, select No and click Next. You can enable this
feature later, as described in Chapter 12, “Encryption” on page 735.
If you purchased the encryption feature, you are prompted to activate your license
manually or automatically. The encryption license is key-based and required for each
control enclosure.
You can use automatic activation if the PC or Notebook that you use to connect to the GUI
and run the system setup wizard has internet access. If no internet connection is available,
use manual activation and follow the instructions. For more information, see Chapter 12,
“Encryption” on page 735.
After the encryption license is activated, you see a green check mark for each enclosure,
as shown in Figure 3-17 on page 119. After all the control enclosures show that encryption
is licensed, click Next.
Figure 3-17 System Setup: Encryption licensed
9. Set up the Call Home functions, as shown in Figure 3-18. With Call Home enabled, IBM
automatically opens problem reports and contacts you to verify whether replacement parts
are required.
Note: It is a best practice to configure Call Home and keep it enabled if your system is
under warranty or if you have a hardware maintenance agreement.
On IBM FlashSystem 9100 and IBM FlashSystem 9200 systems, an IBM SSR configures
Call Home during installation. You need to check only whether all the entered data is
correct.
The system supports two methods of sending Call Home notifications to IBM:
– Cloud Call Home
– Call Home with email notifications
Cloud Call Home is the default and preferred option for a system to report event
notifications to IBM Support. With this method, the system uses RESTful application
programming interfaces (APIs) to connect to an IBM centralized file repository that
contains troubleshooting information that is gathered from customers. This method
requires no extra configuration.
The system may also be configured to use email notifications for this purpose. If this
method is selected, you are prompted to enter the SMTP server IP address.
After clicking Next, you can provide business-to-business contact information that IBM
Support uses to contact a person who manages this machine if it is necessary, as shown
in Figure 3-20.
If the Email notifications option was selected, you are prompted to enter the details for
the email servers to be used for Call Home. Figure 3-21 on page 121 shows an example.
You can click Ping to verify that the email server is reachable over the network. Click
Apply and then Next.
Figure 3-21 System Setup: Email servers
10. IBM FlashSystem family systems may be used with IBM Storage Insights, which is an IBM
cloud storage monitoring and management tool. During this setup phase, the system tries
to contact the IBM Storage Insights web service. If it is available, you are prompted to sign
up, as shown in Figure 3-22.
If a connection cannot be established, you are prompted to add the system that you are
currently working on to the IBM Storage Insights setup manually, as shown in Figure 3-23.
For more information about IBM Storage Insights, see Chapter 13, “Reliability, availability,
and serviceability, monitoring and logging, and troubleshooting” on page 793.
With the Support Assistance feature, you allow IBM Support to perform maintenance
tasks on your system while an IBM SSR is onsite. The IBM SSR can log in locally with
your permission and a special user ID and password so that a superuser password does
not need to be shared with the IBM SSR.
You can also enable Support Assistance with remote support to allow IBM Support
personnel to log in remotely to the machine with your permission through a secure tunnel
over the internet.
For more information about the Support Assistance feature, see Chapter 13, “Reliability,
availability, and serviceability, monitoring and logging, and troubleshooting” on page 793.
If you allow remote support, you are given the IP addresses and ports of the remote
support centers and an opportunity to provide proxy server details (if required) to allow the
connectivity, as shown in Figure 3-25. Also, you can allow remote connectivity at any time
or only after obtaining permission from the storage administrator.
12. As the last initial system setup step, you are prompted to perform automatic configuration
for the system that you will use as FC-attached back-end storage for IBM SAN Volume
Controller (SVC).
If you plan to use the system in stand-alone mode (not behind an SVC), leave Automatic
Configuration turned off, as shown in Figure 3-26. If your solution design later changes
and the system becomes an SVC back end, you can run automatic configuration later by
using the GUI.
If you turn on automatic configuration, after the system setup completes, the system
redirects you to the Automatic Configuration for Virtualization wizard, which is described in
3.4.6, “Automatic configuration for IBM SAN Volume Controller back-end storage” on
page 136.
Figure 3-26 System Setup: Automatic configuration for IBM SAN Volume Controller
13. On the Summary page, the settings that were set by the system setup wizard are shown.
If corrections are needed, you may return to a previous step by clicking Back. Otherwise,
click Finish to be redirected to a system GUI.
After the wizard completes, your system consists only of the control enclosure that includes
the node canister that you used to initialize the system and its partner, and the expansion
enclosures that are attached to them. If you have other control and expansion enclosures, you
must add them to complete the system setup. For more information about how to add a
control or expansion enclosure, see 3.4.2, “Adding an enclosure” on page 127.
If you have no more enclosures to add to this system, the system setup process is complete.
All the mandatory steps of the initial configuration are done. If required, you can configure
other global functions, such as system topology, user authentication, or local port masking,
before configuring the volumes and provisioning them to hosts.
Prerequisites
Before RDMA clustering is configured, ensure that the following prerequisites are met:
25 gigabits per second (Gbps) RDMA-capable Ethernet cards are installed in each node.
RDMA-capable adapters in all nodes use the same technology, such as RDMA over
Converged Ethernet (RoCE) or internet Wide Area RDMA Protocol (iWARP).
RDMA-capable adapters are installed in the same slots across all the nodes of the
system.
Ethernet cables between each node are connected correctly.
The network configuration does not contain more than two hops in the fabric of switches.
A router must not be placed between nodes that use RDMA-capable Ethernet ports for
node-to-node communication.
The negotiated speeds on the local and remote adapters are the same.
The local and remote port (RPORT) virtual local area network (VLAN) identifiers are the
same. All the ports that are used for node-to-node communication must be assigned to
one VLAN ID, and ports that are used for host attachment must have a different VLAN ID.
If you plan to use VLAN to create this separation, you must configure VLAN support on
all the Ethernet switches in your network before you define the RDMA-capable Ethernet
ports on nodes in the system. On each switch in your network, set the VLAN to Trunk
mode and specify the VLAN ID for the RDMA ports that will be in the same VLAN.
A minimum of two dedicated RDMA-capable Ethernet ports are required for node-to-node
communications to ensure best performance and reliability. These ports must be
configured for inter-node traffic only and must not be used for host attachment,
virtualization of Ethernet-attached external storage, or IP replication traffic.
A maximum of four RDMA-capable Ethernet ports per node are allowed for node-to-node
communications.
Configuration process
To enable RDMA clustering, IP addresses must be configured on each port of each node that
is used for node-to-node communication. Complete the following steps:
1. Connect to a Service Assistant of a node by going to
https://<node_service_IP>/service and clicking Change Node IP, as shown in
Figure 3-27 on page 125.
Figure 3-27 Node IP address setup for Remote Direct Memory Access clustering
Figure 3-27 shows that ports 1 - 4 do not show any RDMA type, so they cannot be used
for node-to-node traffic. Ports 5 and 6 show RDMA type RoCE, so they can be used.
2. Hover your cursor over a tile with a port and click Modify to set the IP address, netmask,
gateway address, and VLAN ID for a port. The IP address for each port must be unique
and cannot be used anywhere else on the system. The VLAN ID for ports that are used for
node-to-node traffic must be the same on all nodes. When the required information is
entered, click Save and verify that the operation completed successfully, as shown in
Figure 3-28. Repeat this step for all ports that you intend to use for node-to-node traffic,
with a minimum of two and a maximum of four ports per node.
3. Some environments might not include a stretched layer 2 subnet. In such scenarios, a
layer 3 network such as in standard topologies or long-distance RDMA node-to-node
HyperSwap configurations is applicable. To support the layer 3 Ethernet network, use the
unicast discovery method for RDMA node-to-node communication. This method relies on
unicast-based fabric discovery rather than multicast discovery.
To configure unicast discovery, see the information about the satask
addnodediscoverysubnet, satask rmnodediscoverysubnet, and sainfo
lsnodediscoverysubnet commands in the Command-line Interface documentation (an
illustrative sketch follows these steps). You can also configure
discovery subnets by using the Service Assistant interface menu option Change Node
Discovery Subnet, as shown in Figure 3-29.
4. After the IP addresses are configured on all nodes in a system, run the sainfo
lsnodeipconnectivity command or use the Service Assistant GUI menu Ethernet
Connectivity to verify that the partner nodes are visible on the IP network, as shown in
Figure 3-30. If necessary, troubleshoot connection problems by running the ping and
sainfo traceroute commands.
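As an illustrative sketch only, the following Service Assistant CLI sequence ties steps 3 and 4 together. The subnet value is a placeholder for your environment, and the argument form of the discovery subnet commands is an assumption that should be verified in the command reference:
satask addnodediscoverysubnet 192.168.100.0 (define a discovery subnet for unicast node discovery; argument form assumed)
sainfo lsnodediscoverysubnet (list the configured discovery subnets)
sainfo lsnodeipconnectivity (verify that partner nodes are visible over the configured node IP addresses)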
When all the nodes that are joined to the cluster are connected, the enclosure may be added
to the cluster.
Before beginning this process, ensure that the new control enclosure is correctly installed and
cabled to the existing system. For FC node-to-node communication, verify that the correct
SAN zoning is set. For node-to-node communication over RDMA-capable Ethernet ports,
ensure that the IP addresses are configured and a connection between nodes can be
established.
Note: If the Add Enclosure button does not appear, review the installation instructions
to verify that the new enclosure is connected and set up correctly.
3. Review the summary in the next window and click Finish to add the expansion enclosure
or the control enclosure and all expansions that are attached to it to the system.
Note: When a new control enclosure is added, the software version running on its
nodes is upgraded or rolled back to match the system software version. This process
can take up to 30 minutes or more, and the enclosure is added only when this process
completes.
4. After the control enclosure is successfully added to the system, a success message
appears. Click Close to return to the System Overview window and check that the new
enclosure is visible and available for management.
To perform the same procedure by using a CLI, complete the following steps. For more
information about the detailed syntax for each command, go to Command-line Interface.
1. When adding control enclosures, check for unpopulated I/O groups by running lsiogrp.
Each control enclosure has two nodes, so it forms an I/O group. Example 3-3 shows that
only io_grp0 has nodes, so a new control enclosure can be added to io_grp1.
2. To list control enclosures that are available to add, run the lscontrolenclosurecandidate
command, as shown in Example 3-4 on page 129. To list the expansion enclosures, run
the lsenclosure command. Expansions that have the managed parameter set to no are
available for addition.
Example 3-4 Listing the candidate control enclosures
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lscontrolenclosurecandidate
serial_number product_MTM machine_signature
78E005D 9848-AF8 4AD2-EA69-8B5E-D0C0
4. To add an expansion enclosure, change its status to managed = yes by running the
chenclosure command, as shown in Example 3-6.
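As an illustrative sketch only (the serial number is taken from the preceding listing, and the I/O group and enclosure ID are example values for your environment), the CLI steps to add the enclosures resemble the following; verify the exact parameters in the command reference:
addcontrolenclosure -iogrp io_grp1 -sernum 78E005D (add the candidate control enclosure to the empty I/O group)
chenclosure -managed yes 2 (change an unmanaged expansion enclosure to managed)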
The HyperSwap function is a high availability (HA) feature that provides dual-site,
active-active access to a volume. You can create a HyperSwap topology system
configuration where each I/O group in the system is physically on a different site. When these
configurations are used with HyperSwap volumes, they can be used to maintain access to
data on the system if site-wide outages occur.
If your solution is designed to use the HyperSwap function, use the guidance in this section to
configure a cluster for a multi-site HyperSwap topology.
3. Assign I/O groups to sites. Click the marked icons in the center of the window to swap site
assignments, as shown in Figure 3-35. Click Next.
4. If any host objects or back-end storage controllers are configured, you must assign a site
for each of them. Right-click the object and click Modify Site, as shown in Figure 3-36.
5. Set the maximum background copy operations bandwidth between the sites. Background
copy is the initial synchronization and any subsequent resynchronization traffic for
HyperSwap volumes. Use this setting to limit the impact of volume synchronization to host
operations. You may also set it higher during the initial setup (when there are no host
operations on the volumes yet), and set it lower when the system is in production.
As shown in Figure 3-37, you must specify the total bandwidth between the sites in
megabits per second (Mbps) and what percentage of this bandwidth can be used for
background copying. Click Next.
6. Review the summary and click Finish. The wizard starts implementing changes to migrate
the system to the HyperSwap solution.
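The wizard drives the same configuration changes that can also be made from the CLI. The following is a minimal sketch only, assuming example site names and that host and controller site assignments are done separately; verify the syntax in the command reference:
chsite -name DataCenterA 1 (rename site 1; the name is an example)
chsite -name DataCenterB 2 (rename site 2)
chsystem -topology hyperswap (switch the system to the HyperSwap topology)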
When you later add a host or back-end storage controller objects, the GUI prompts you to set
an object site during the creation process.
One of these items is selected for the active quorum role, which is used to resolve failure
scenarios where half the nodes on the system become unavailable or a link between
enclosures is disrupted. The active quorum determines which nodes can continue processing
host operations and to avoid a “split brain” condition, which happens when both halves of the
system continue I/O processing independently of each other.
For systems with a single control enclosure, quorum devices are selected automatically. No
special configuration actions are required. This also applies to systems with multiple
control enclosures and a standard topology that virtualize external storage.
For HyperSwap topology systems, an active quorum device must be on a third, independent
site. Due to the costs that are associated with deploying a separate FC-attached storage
device on a third site, an IP-based quorum device may be used for this purpose.
On a standard topology system with two or more control enclosures and no external storage,
an active quorum device cannot be on an internal drive or an FCM. For such configurations, it
is a best practice to deploy an IP-based quorum application.
2. After you click Download..., a window opens, as shown in Figure 3-39. It provides an
option to create an IP application that is used for tie-breaking only, or an application that
can be used as a tie-breaker and to store recovery metadata.
An application that does not store recovery metadata requires less channel bandwidth for
a link between the system and the quorum app, which might be a decision-making factor
for using a multi-site HyperSwap system.
For a full list of IP quorum app requirements, see IBM Documentation and expand
Configuring → Configuration details → Configuring quorum → IP quorum
application configuration.
3. After you click OK, the ip_quorum.jar file is created. Save the file and transfer it to a
supported AIX, Linux, or Windows host that can establish an IP connection to the service
IP address of each system node. Move it to a separate directory and start the app, as
shown in Example 3-7.
Example 3-7 Starting the IP quorum application on the Windows operating system
C:\IPQuorum>java -jar ip_quorum.jar
=== IP quorum ===
Name set to null.
Successfully parsed the configuration, found 4 nodes.
....
Note: Add the IP quorum application to the list of auto-started applications at each server
start or restart, or configure your operating system (OS) to run it as an auto-started service
in the background. The server that runs the IP quorum application must be in the same
subnet as the IBM FlashSystem. You can have a total of five IP quorum applications.
The IP quorum log file and recovery metadata are stored in the same directory with the
ip_quorum.jar file.
4. Check that the IP quorum application is successfully connected and running by verifying
its Online status by selecting System → Settings → IP Quorum, as shown in
Figure 3-40.
The Preferred and Winner quorum modes are supported only with an IP quorum. For a
FC-attached active quorum MDisk, only Standard mode is possible.
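You can also review the quorum assignments from the CLI. The following usage is illustrative only, and the output columns on your system might differ by release:
lsquorum (lists the quorum devices, their type such as drive, MDisk, or IP quorum application, and which device holds the active quorum role)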
To decide whether your system must have port masks configured, see 2.6.8, “Port
designation recommendations” on page 81.
To set the FC port mask by using the GUI, complete the following steps:
1. Select System → Network → Fibre Channel Ports. In a displayed list of FC ports, the
ports are grouped by a system port ID. Each port is configured identically across all nodes
in the system. You can click the arrow next to the port ID to expand a list and see which
node ports (N_Port) belong to the selected system port ID and their worldwide port names
(WWPNs).
2. Right-click a system port ID that you want to change and select Modify Connection, as
shown in Figure 3-42.
By default, all system ports can send and receive traffic of any kind:
Host traffic
Traffic to virtualized back-end storage systems
Local system traffic (node-to-node)
Partner system (remote replication) traffic
The first two types are always allowed, and you may control them only with SAN zoning. The
other two types can be blocked by port masking. In the Modify Connection dialog box, as
shown in Figure 3-43, you can choose which type of traffic that a port can send.
Port masks can also be set by using the CLI. Local and remote partner port masks are
internally represented as a string of zeros and ones. The last digit in the string represents port
one, and the digits to the left of it represent ports two, three, and so on. If the digit for a port is
set to “1”, the port is enabled for the specific type of communication. If it is set to “0”, the
system does not send or receive traffic that is controlled by a mask on the port.
To view the current port mask settings, run the lssystem command, as shown in Example 3-8.
The output shows that all system ports allow all kinds of traffic.
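Because this is the default state, the output resembles the following illustrative sketch, in which every digit of both masks is set to 1 (all ports enabled for all traffic types); the format matches Example 3-9:
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lssystem |grep mask
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111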
To set the local or remote partner (RPORT) port mask, run the chsystem command. Example 3-9 shows the mask
setting for a system with four FC ports on each node and that has RC relationships. Masks
are applied to allow local node-to-node traffic on ports 1 and 2, and replication traffic on ports
3 and 4.
Example 3-9 Setting a local port mask by running the chsystem command
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystem -localfcportmask 0011
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystem -partnerfcportmask 1100
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lssystem |grep mask
local_fc_port_mask 0000000000000000000000000000000000000000000000000000000000000011
partner_fc_port_mask 0000000000000000000000000000000000000000000000000000000000001100
Note: When replacing or upgrading your node hardware, consider that the number of FC
ports and their arrangement might be changed. If so, make sure that any configured port
masks are still valid for the new configuration.
Automatic Configuration for Virtualization is intended for a new system. If there are host, pool,
or volume objects that are configured, all the user data must be migrated out of the system,
and those objects must be deleted.
The Automatic Configuration for Virtualization wizard starts immediately after you complete
the initial setup wizard if you set Automatic Configuration to On. The following steps are
performed by it:
1. Add control or expansion enclosures to the system that are not added yet. Click Add
Enclosure to start the adding process, or click Skip to move to the next step. You can turn
off the Automatic Configuration for Virtualization wizard at any step by clicking the ...
(hamburger) symbol in the upper right, as shown in Figure 3-44.
2. The wizard checks whether the SVC is correctly zoned to the system. By default, newly
installed systems run in N_Port ID Virtualization (NPIV) mode (Target Port Mode). The
system’s virtual (host) WWPNs must be zoned for SVC. On the SVC side, physical
WWPNs must be zoned to a back-end system independently of the NPIV mode setting.
3. Create a host cluster object for SVC. Each SVC node has its own worldwide node name
(WWNN). Make sure to select all WWNNs that belong to nodes of the same SVC cluster.
Figure 3-45 shows that the system detects an SVC cluster with a single I/O group, so two
WWNNs are selected.
4. When all nodes of an SVC cluster, including the spare one, are selected, you can change
the host object name for each one, as shown in Figure 3-46. For convenience, name the
host objects to match the SVC node names or serial numbers.
Figure 3-46 Hosts inside an IBM SAN Volume Controller host cluster
5. Click Automatic Configuration and check the list of internal resources that are used.
Click Cancel if the list is not correct; otherwise, click Next.
7. Review the pool (or pools) configuration, as shown in Figure 3-48, and click Proceed to
trigger commands that will apply it.
8. When the Automatic Configuration for Virtualization wizard completes, you see the
window that is shown in Figure 3-49. After clicking Close, you may proceed to the SVC
GUI and configure the newly provisioned storage.
You can export the system volume configuration data in .csv format by using this window
or anytime later by selecting Settings → System → Automatic Configuration.
Signed SSL certificates are issued by a trusted CA. A browser maintains a list of trusted CAs
that are identified by their root certificate. The root certificate must be included in this list in
order for the signed certificate to be trusted.
Based on the security requirements for your system, you can create either a new self-signed
certificate or install a signed certificate that is created by a third-party CA.
Note: Before re-creating a self-signed certificate, ensure that your browser supports
the type of keys that you are going to use for a certificate. Check your organization’s
security policy to determine which key type is required.
3. Click Update.
You are prompted to confirm the action. Click Yes to proceed. Close the browser, wait
approximately 2 minutes, and reconnect to the management GUI.
To regenerate an SSL certificate by using a CLI, run the chsystemcert command, as shown in
Example 3-10. Valid values for -keytype are rsa2048, ecdsa384, or ecdsa521.
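A minimal illustrative invocation resembles the following; the -validity value (in days) is an assumption to verify against the command reference:
chsystemcert -mkselfsigned -keytype ecdsa384 -validity 365 (regenerate a self-signed certificate with an ECDSA 384-bit key)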
Configuring a signed certificate
If your company’s security policy requires certificates to be signed by a trusted authority,
complete the following steps to configure a signed certificate:
1. Select Update Certificate in the Secure Communications window.
2. Select Signed certificate and enter the details for the new certificate signing request, as
shown in Figure 3-51. All fields are mandatory except for the Subject Alternative Name.
For the “Country” field, use a two-letter country code. Click Generate Request.
3. When prompted, save the certificate.csr file that contains the certificate signing
request.
Until the signed certificate is installed, the Secure Communications window shows that an
outstanding certificate request exists.
4. Submit the request to the CA to receive a signed certificate. Notify the CA that you need a
certificate (or certificate chain) in base64-encoded Privacy Enhanced Mail (PEM) format.
5. When you receive the signed certificate, select Update Certificate in the Secure
Communications window again.
6. Select Signed Certificate and click the folder icon next to the Signed Certificate input
field of the Update Certificate window, as shown in Figure 3-51 on page 141. Click
Update.
7. You are prompted to confirm the action. Click Yes to proceed. After your certificate is
installed, the GUI session disconnects. Close the browser window and wait approximately
2 minutes before reconnecting to the management GUI.
8. Reconnect to the GUI and select Settings → Security → Secure Communications. The
window that opens should show that you are using a signed certificate, as shown in
Figure 3-52.
Password creation
The password creation options can be customized to employ the following policies:
Minimum password length (6 – 64 characters)
Minimum number of uppercase characters (1 – 3)
Minimum number of lowercase characters (1 – 3)
Minimum number of special characters (1 – 3)
Minimum number of digits (1 – 3)
Note: A new policy does not apply retroactively to existing passwords. However, any
new passwords must meet the current policy setting.
Password history rules ensure that a new password does not match passwords that were
used previously:
Password history checking can be enabled, with 0 – 10 previous passwords checked:
– 0 = compare the current password only.
– 10 = check that the new password does not match the current password or the 10
passwords that were used before the current password.
The system stores only the hashes of the previous passwords (no plain text).
The minimum required password age can be set (0 – 365 days).
A minimum age of 1 means that a user can change a password only once per day, which
prevents a user from cycling through previous passwords.
Note: The password history is not checked when a security admin changes another
user’s password. This function is not supported on IBM FlashSystem 5010.
From the GUI, set the password creation options and password creation rules policies by
selecting Settings → Security and clicking Password Policies, as shown in Figure 3-53.
The security admin can force a user to change their password at any time; the password
expires immediately. By using the CLI, you can expire individual user passwords. By using the
GUI, you can reset all user passwords. This function:
Can be used when creating a user to require a password change on first login.
Can be used after changing password policy settings.
Account lockout can also be configured by setting the length of time that a user is locked out
of the system (0 – 10080 minutes, which is 7 days; 0 = indefinite).
Disabling the superuser account and configuring session timeouts are available only on
platforms with a dedicated technician port.
Note: This feature is not available on IBM FlashSystem 5010(E) and 5030(E).
Disabling the superuser account can be done either from the GUI or CLI by completing the
following steps:
1. Use an explicit option to enable superuser locking, as shown in Example 3-12.
for more information about the risks associated with each parameter. Are you
sure you wish to continue? (y/yes to confirm) yes
IBM FlashSystem 7200:admin>
A good use case: some enterprises have policies that require all systems to use remote
authentication. In that case, configure remote authentication, create a remote security admin,
and disable the superuser account. Then, no local accounts can log in to the system.
Note: The superuser account is still required for satask actions and recovery actions, for
example, T3/T4 recovery. It is automatically unlocked for recovery and must be manually
relocked afterward.
Session timeouts can be configured for both CLI and GUI sessions as follows:
A configurable CLI timeout of 5 – 240 minutes
A separate configurable GUI timeout of 5 – 240 minutes
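From the CLI, these timeouts can be adjusted with the chsecurity command. The parameter names in this sketch are assumptions that should be verified in the command reference for your release:
chsecurity -clitimeout 60 (set the CLI session timeout to 60 minutes)
chsecurity -guitimeout 30 (set the GUI session timeout to 30 minutes)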
Figure 3-54 Creating password policies for password expiration and account lockout
User roles and groups
User groups are used to determine what tasks the user is authorized to perform. Each user
group is associated with a single role. Roles apply to both local and remote users on the
system and are based on the user group to which the user belongs. A local user can belong
only to a single group, so the role of a local user is defined by the single group to which that
user belongs.
For a list of user roles and their tasks, and a description of a pre-configured user group, see
IBM Documentation and expand Product overview → Technical overview → User roles.
Superuser account
Every system has a default user that is called the superuser. It cannot be deleted or modified,
except for changing the password and SSH key. The superuser is a local user and cannot be
authenticated remotely. The superuser has a SecurityAdmin user role, which has the most
privileges within the system.
Note: The superuser is the only user that may log in to the Service Assistant interface. It is
also the only user that may run sainfo and satask commands through the CLI.
The password for superuser is set during the system setup. The superuser password can be
reset to its default value of passw0rd by using a procedure that is described in IBM
Documentation by expanding Troubleshooting → Resolving a problem → Procedure:
Resetting the superuser password by using the management GUI or CLI.
Note: The superuser password reset procedure uses system internal USB ports. The
system may be configured to disable those ports. If the USB ports are disabled and there
are no users with the SecurityAdmin role and a known password, the superuser password
cannot be reset without replacing the system hardware and deleting the system
configuration.
Local authentication
A local user is a user whose account is managed entirely on the system. A local user belongs
to one user group only, and it must have a password, an SSH public key, or both. Each user
has a username, which must be unique across all users in one system.
Usernames can contain up to 256 printable American Standard Code for Information
Interchange (ASCII) characters. Forbidden characters are the single quotation mark ('), colon
(:), percent symbol (%), asterisk (*), comma (,), and double quotation mark ("). A username
cannot begin or end with a blank space.
Passwords for local users can be up to 64 printable ASCII characters, but cannot begin or end
with a space.
When connecting to the CLI, SSH key authentication is attempted first, with the
username and password combination available as a fallback. The SSH key authentication
method is available for CLI and file transfer access only. For GUI access, only the password is
used.
If local authentication is used, user accounts must be created for each system. If you want
access for a user on multiple systems, you must define the user in each system.
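For reference, a local user can also be created from the CLI. The following sketch uses example names, an example password, and an example key file path; the Administrator and Monitor user groups are default groups:
mkuser -name jsmith -usergrp Administrator -password Passw0rd! (create a local user with password authentication)
mkuser -name backupsvc -usergrp Monitor -keyfile /tmp/id_rsa.pub (create a local user with SSH key authentication)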
Remote authentication
A remote user is authenticated by using identity information that is accessible by using the
Lightweight Directory Access Protocol (LDAP). The LDAP server must be available for the
users to log in to the system. Remote users have their groups defined by the remote
authentication service.
Users that are authenticated by an LDAP server can log in to the management GUI and the
CLI. These users do not need to be configured locally for CLI access, and they do not need
an SSH key that is configured to log in by using the CLI.
If multiple LDAP servers are available, you can configure more than one LDAP server to
improve resiliency. Authentication requests are processed by those LDAP servers that are
marked as preferred unless the connection fails or a user is not found. Requests are
distributed across all preferred servers for load balancing in a round-robin fashion.
Note: All LDAP servers that are configured within the same system must be of the same
type.
If users that are part of a group on the LDAP server are to be authenticated remotely, a user
group with an identical name must exist on the system. The user group name is
case-sensitive. The user group must also be enabled for remote authentication on the system.
A user who is authenticated remotely is granted permissions according to the role that is
assigned to the user group of which the user is a member.
– Advanced settings:
Speak to the administrator of the LDAP server to ensure that these fields are
completed correctly:
• User Attribute
This LDAP attribute is used to determine the username of remote users. The
attribute must exist in your LDAP schema and must be unique for each of your
users.
This advanced setting defaults to sAMAccountName for AD and to uid for IBM
Security Directory Server and Other.
• Group Attribute
This LDAP attribute is used to determine the user group memberships of remote
users. The attribute must contain either the distinguished name of a group or a
colon-separated list of group names.
This advanced setting defaults to memberOf for AD and Other, and to ibm-allGroups
for IBM Security Directory Server. For Other LDAP type implementations, you
might need to configure the memberOf overlay if it is not in place.
• Audit Log Attribute
This LDAP attribute is used to determine the identity of remote users.
When an LDAP user performs an audited action, this identity is recorded in the
audit log. This advanced setting defaults to userPrincipalName for AD and to uid for
IBM Security Directory Server and the Other type.
3. Enter the server settings for one or more LDAP servers, as shown in Figure 3-58.
– If you set a certificate and you want to remove it, click the red cross next to
Configured.
– Click the plus and minus signs to add or remove LDAP server records. You may define
up to six servers.
Click Finish to save the settings.
4. To verify that LDAP is enabled, select Settings → Security → Remote Authentication,
as shown in Figure 3-60. You may also test the server connection by selecting Global
Actions → Test LDAP connections and verifying that all servers return “CMMVC7075I
The LDAP task completed successfully”.
You can use the Global Actions menu to disable remote authentication and switch to
local authentication only.
After remote authentication is enabled, the remote user groups must be configured. You can
use the default built-in user groups for remote authentication. However, the name of the
default user groups cannot be changed. If the LDAP server contains a group that you want to
use and you do not want to create this group on the storage system, the name of the group
must be changed on the server side to match the default name. Any user group, whether
default or self-defined, must be enabled for remote authentication before LDAP authentication
can be used for that group.
To create a user group with remote authentication enabled, complete the following steps:
1. Select Access → Users by Group and click Create User Group. Enter the name for the
new group, select the LDAP checkbox, and choose a role for the users in the group, as
shown in Figure 3-61.
2. To enable LDAP for one of the existing groups, select it in the list, select User Group
Actions → Properties in the upper right, and select the LDAP checkbox.
3. When you have at least one user group that is enabled for remote authentication, verify
that you set up your user group on the LDAP server correctly by checking whether the
following conditions are true:
– The name of the user group on the LDAP server matches the one that you modified or
created on the storage system.
– Each user that you want to authenticate remotely is a member of the LDAP user group
that is configured for the system role.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified username in the UPN format (user@domain).
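The equivalent CLI steps are sketched below with example names; the -remote flag enables the group for remote (LDAP) authentication, and testldapserver verifies the configured LDAP servers. Verify the exact syntax in the command reference:
mkusergrp -name LDAPAdmins -role Administrator -remote (create a user group that is enabled for remote authentication)
testldapserver (test connectivity and authentication against the configured LDAP servers)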
Chapter 4
This chapter explains the basic view and the configuration procedures that are required to get
your system environment running as quickly as possible by using the GUI. This chapter does
not describe advanced troubleshooting or problem determination and some of the complex
operations (compression and encryption). For more information, see Chapter 13, “Reliability,
availability, and serviceability, monitoring and logging, and troubleshooting” on page 793.
Throughout this chapter, all GUI menu items are introduced in a systematic, logical order as
they appear in the GUI. However, topics that are described more in detail in other chapters of
the book are only referred to here. For example, Storage pools (Chapter 5, “Storage pools” on
page 237), Volumes (Chapter 6, “Volumes” on page 299), Hosts (Chapter 7, “Hosts” on
page 405), and Copy Services (Chapter 10, “Advanced Copy Services” on page 553) are
described in separate chapters.
The GUI is a built-in software component within the IBM Spectrum Virtualize Software.
Multiple users can be logged in to the GUI. However, no locking mechanism exists, so be
aware that if two users change the same object simultaneously, the last action that is entered
from the GUI is the action that takes effect.
Important: Data entries that are made through the GUI are case-sensitive.
You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is enabled by
default and requires no other configuration steps.
Note: If you log in to the GUI by using the configuration node, you receive another option:
Service Assistant Tool (SAT). Clicking this option takes you to the service assistant instead
of the cluster GUI, as shown in Figure 4-2.
Figure 4-2 Login window of the storage system when it is connected to the configuration node
It is a best practice for each user to have their own unique account. The default user accounts
should be disabled for use or their passwords changed and kept secured for emergency
purposes only. This approach helps to identify personnel working on the systems and track all
important changes that are done by them. The superuser account should be used for initial
configuration only.
After a successful login, the Version 8.4 Welcome window opens and displays the system
dashboard (see Figure 4-3).
Capacity
This section (Figure 4-5) shows the current capacity and utilization of attached storage.
Apart from the usable capacity, it also shows the provisioned capacity and capacity savings.
You can select the Compressed Volumes, Deduplicated Volumes, or Thin Provisioned
Volumes options to display a complete list of such volumes in the Volumes tab.
If the ‘Overprovisioned External Systems’ section appears, you can then click it to see a
list of related managed disks (MDisks) and pools, as shown in Figure 4-6.
You also see a warning when assigning MDisks to pools if the MDisk is on an
overprovisioned external storage controller.
System Health
This section indicates the status of all critical system components, which are grouped in
three categories: Hardware, logical, and connectivity components, as shown in Figure 4-7.
When you click Expand, each component is listed as a subgroup. You can then go directly
to the section of GUI where the component that you are interested in is managed.
The dashboard in Version 8.4 displays as a welcome page instead of the system window as in
previous versions. This system overview was moved to the Monitoring → System Hardware
menu.
Although the Dashboard window provides key information about system behavior, the System
menu is a preferred starting point to obtain the necessary details about your system
components.
That view highlights the Component Details pane, a performance indicator, and the Dynamic System View.
Figure 4-10 The task menu on the left side of the GUI
By reducing the horizontal size of your browser window, the wide task menu shrinks to the
icons only.
In this case, the GUI has two suggested tasks that help with the general administration of the
system: You can directly perform the tasks from this window, or cancel them and run the
procedure later. Other suggested tasks that typically appear after the initial system
configuration are to create a volume and configure a storage pool.
The dynamic IBM Spectrum Virtualize menu contains the following windows:
Dashboard
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
Alerts indication
The left icon in the notification area informs administrators about important alerts in the
systems. Click the icon to list warning messages in yellow and errors in red (see Figure 4-13).
You can go directly to the Events menu by clicking the View All Events option, as shown in
Figure 4-14.
You can see each event message separately by clicking the Details icon of the specific
message. Then, you can analyze the content and eventually run the suggested fix procedure,
as shown in Figure 4-15.
Similarly, you can analyze the details of running tasks (all of them together in one window or
of a single task). Click View to open the volume format job, as shown in Figure 4-17.
Making selections
Recent updates to the GUI improved the process of selecting items. You can now select
multiple items more easily: go to the wanted window, press and hold the Shift or Ctrl key, and
make your selection.
Press and hold the Shift key, select the first item in your list that you want, and then select the
last item. All items between the two that you chose are also selected, as shown in
Figure 4-18.
Press and hold the Ctrl key to select any items from the entire list, including items that do not
appear in sequential order, as shown in Figure 4-19.
You can also select items by using the built-in filtering function. For more information, see
4.3.1, “Content-based organization” on page 166.
Help
If you need help, you can select the (?) button, as shown in Figure 4-20.
For example, in the Dashboard window, you can open help information that is related to the
dashboard-provided information, as shown in Figure 4-21.
The next section describes the structure of the window and how to go to various system
components to manage them more efficiently and quickly.
Table filtering
On most pages, a Filter box is available at the upper right of the window. Use this option if the
list of object entries is too long and you want to search for something specific.
2. Enter the text string that you want to filter and press Enter.
By using this function, you can filter your table based on column names. In our example, a
volume list is displayed that contains the names that include Anton somewhere in the
name. Anton is highlighted in amber, as are any columns that contain this information, as
shown in Figure 4-24. The search option is not case-sensitive.
3. Remove this filtered view by clicking the X icon that displays in the Filter box or by deleting
what you searched for and pressing Enter, as shown in Figure 4-25.
Filtering: This filtering option is available in most menu options of the GUI.
For example, on the Volumes window, complete the following steps to add a column to the
table:
1. Right-click any column headers of the table or select the icon in the upper left of the table
header. A list of all of the available columns displays, as shown in Figure 4-26.
2. Select the column that you want to add or remove from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 4-27.
3. You can repeat this process several times to create custom tables to meet your
requirements.
4. Return to the default table view by selecting Restore Default View (the last entry) in the
column selection menu.
Sorting: By clicking a column, you can sort a table based on that column in ascending or
descending order.
4.4.1 System Hardware overview
The System Hardware option on the Monitoring menu provides a general overview. If you
have more than one control enclosure in a cluster, each enclosure has its own I/O group
section (see Figure 4-30).
Figure 4-31 shows how to view more details in the System Hardware - Enclosure Details view.
For example, clicking a node brings up details, such as whether the node is online and which
node is the configuration node, as shown in Figure 4-33. For more information about the
component, see the right side under the Component Details section that shows when a
component is selected.
By right-clicking and selecting Properties, you see detailed technical parameters, such as
WWNN, memory, CPU, and field-replaceable unit (FRU) number, as shown in Figure 4-34.
In an environment with multiple IBM Storage System clusters, you can easily direct the onsite
personnel or technician to the correct device by enabling the identification LED on the front
panel by completing the following steps:
1. Select the appropriate drive and click Turn Identify On, as shown in Figure 4-35.
Alternatively, you can use the command-line interface (CLI) to get the same results. Enter the
following commands in this sequence:
1. Type svctask chenclosure -identify yes 1 (or enter chenclosure -identify yes 1) to turn on the identification LED of enclosure 1.
2. Type svctask chenclosure -identify no 1 (or enter chenclosure -identify no 1) to turn the identification LED off again.
You can use the same CLI to obtain results for a specific controller or drive.
To view internal components (components that cannot be seen from the outside), review the
bottom of the GUI underneath where the list of external components is displayed. You can
select any of these components and details display in the right pane, as with the external
components. Figure 4-37 shows the rear view of the enclosure.
You can also choose SAS Chain View to view directly attached expansion enclosures, as
shown in Figure 4-38. A useful view of the entire serial-attached SCSI (SAS) chain is
displayed, with selectable components that show port
numbers and canister numbers, along with a cable diagram for easy cable tracking.
You can select any enclosure to get more information, including serial number and model
type, as shown in Figure 4-39, where Expansion Enclosure 1 is selected. You can also see
the Events and Component Details areas at the right side of the window, which shows
information that relates to the enclosure or component that you select.
With directly attached expansion enclosures, the view is condensed to show all expansion
enclosures on the right side, as shown in Figure 4-40. The number of events against each
enclosure and the enclosure status are displayed for quick reference. Each enclosure is
selectable, which brings you to the Expansion Enclosure View window.
To view Easy Tier data and reports in the management GUI, select one of the following paths:
From the management GUI, select Monitoring → Easy Tier Reports.
From the management GUI, select Pools → View Easy Tier Reports.
You can export your Easy Tier stats to a CSV file for further analysis. For more information
about Easy Tier Reports, see Chapter 9, “Advanced features for storage efficiency” on
page 509.
4.4.3 Events
The Events option, which is available in the Monitoring menu, tracks all informational,
warning, and error messages that occur in the system. You can apply various filters to sort
them, or export them to an external comma-separated values (CSV) file. Figure 4-42 provides
an example of records in the system Event log.
Run Fix
For the error messages with the highest internal priority, perform corrective actions by running
fix procedures. Click Run Fix (see Figure 4-42), and the fix procedure wizard opens, as
shown in Figure 4-43.
The wizard guides you through the troubleshooting and fixing process from a hardware or
software perspective. If you determine that the problem cannot be fixed without a technician’s
intervention, you can cancel the procedure execution at any time.
For more information about fix procedures, see Chapter 13, “Reliability, availability, and
serviceability, monitoring and logging, and troubleshooting” on page 793.
The performance statistics in the GUI show, by default, the latest 5 minutes of data. To see
details of each sample, click the graph and select the timestamp, as shown in Figure 4-45.
The charts that are shown in Figure 4-45 represent 5 minutes of the data stream. For in-depth
storage monitoring and performance statistics with historical data about your system, use
IBM Spectrum Control or IBM Storage Insights.
You can also obtain a no-charge unsupported version of the Quick Performance Overview
(qperf) from this website.
4.4.5 Background Tasks
Use the Background Tasks window to view and manage current tasks that are running on the
system (see Figure 4-46).
This menu provides an overview of currently running tasks that are triggered by the
administrator. In contrast to the Running jobs and Suggested tasks indication in the middle of
the top window, it does not list the suggested tasks that administrators should consider
performing. The overview provides more details than the indicator, as shown in Figure 4-47.
You can switch between each type (group) of operation, but you cannot show them all in one
list (see Figure 4-48).
The Pools menu contains the following items, which are accessible from the GUI (see Figure 4-49):
Pools
Volumes by Pool
Internal Storage
External Storage
MDisks by Pool
System Migration
For more information about storage pool configuration and management, see Chapter 5,
“Storage pools” on page 237.
4.6 Volumes
A volume is a logical disk that the system presents to attached hosts. By using GUI
operations, you can create different types of volumes depending on the type of topology that
is configured on your system.
The Volumes menu contains the following items, as shown in Figure 4-50 on page 181:
Volumes
Volumes by Pool
Volumes by Host
Volumes by Host Cluster
Cloud Volumes
Figure 4-50 Volumes menu
For more information about these tasks and configuration and management process
guidance, see Chapter 6, “Volumes” on page 299.
4.7 Hosts
A host system is a computer that is connected to the system through a Fibre Channel (FC)
interface or an IP network. It is a logical object that represents a list of worldwide port names
(WWPNs) that identify the interfaces that the host uses to communicate with your system. FC
and SAS connections use WWPNs to identify the host interfaces to the systems.
The Hosts menu consists of the following choices, as shown in Figure 4-51:
Hosts
Host Clusters
Ports by Host
Mappings
Volumes by Host
Volumes by Host Cluster
For more information about configuration and management of hosts by using the GUI, see
Chapter 7, “Hosts” on page 405.
More advanced functions allow FlashCopy operations to occur on multiple source and target
volumes. Management operations are coordinated to provide a common, single point-in-time
(PiT) for copying target volumes from their respective source volumes. This technique creates
a consistent copy of data that spans multiple volumes.
The Copy Services menu offers the following operations in the GUI, as shown in Figure 4-52:
FlashCopy
Consistency groups
FlashCopy Mappings
Remote Copy (RC)
Partnerships
Because Copy Services is one of the most important features for resiliency solutions, see
Chapter 10, “Advanced Copy Services” on page 553.
4.9 Access
The Access menu in the GUI controls who can log in to the system, defines each user’s
access rights, and tracks what was done to the system by each privileged user. It is
logically split into three categories:
Ownership groups
Users by group
Audit log
In this section, we explain how to create, modify, or remove a user, and how to see records in
the audit log.
The Access menu is available from the left pane, as shown in Figure 4-53.
The first time that you start the Ownership Group task, you see the window that is shown in
Figure 4-54.
In our example, no child pool exists, so the GUI guides you to the Pools page to create child
pools.
The system supports several resources that you assign to ownership groups:
Child pools
Volumes
Volume groups
Hosts
Host clusters
Host mappings
Two basic use cases apply to using ownership groups on the system:
New objects are created within the ownership group. There can also be other existing
objects on the system that are not in the ownership group.
On a system where these supported objects are already configured, you can migrate the
existing objects to use ownership groups.
When a user group is assigned to an ownership group, the users in that user group retain
their role but are restricted to only those resources within the same ownership group. User
groups can define the access to operations on the system, and the ownership group can
further limit access to individual resources. For example, you can configure a user group with
the Copy Operator role, which limits access of the user to Copy Services functions, such as
FlashCopy and RC operations. Access to individual resources, such as a specific FlashCopy
consistency group, can be further restricted by adding it to an ownership group. When the
user logs on to the management GUI, only resources that they have access to through the
ownership group are displayed. Additionally, only events and commands that are related to
the ownership group in which a user belongs are viewable by those users.
Inheriting ownership
Depending on the type of resource, ownership can be defined explicitly or ownership can be
inherited from the user, user group, or from other parent resources. Objects inherit their
ownership group from other objects whenever possible:
Volumes inherit the ownership group from the child pool that provides capacity for the
volumes.
FlashCopy mappings inherit the ownership group from the volumes that are configured in
the mapping.
Hosts inherit the ownership group from the host cluster they belong to, if applicable.
Host mappings inherit the ownership group from both the host and the volume to which
the host is mapped.
These objects cannot be explicitly moved to a different ownership group without creating
inconsistent ownership.
Ownership groups are also inherited from the user. Objects that are created by an owner
inherit the ownership group of the owner. If the owner is in more than one ownership group
(only possible for remote users), then the owner must choose the group when the object is
created.
Figure 4-55 on page 185 shows how different objects inherit ownership from ownership
groups.
Figure 4-55 Ownership group inheritance
The following objects have ownership that is assigned explicitly and do not inherit ownership
from other parent resources:
Child pools
Host clusters
Hosts that are not part of a host cluster
Volume groups
FlashCopy consistency groups
User groups
Hosts that are a part of a host cluster
Volumes
Users
Volume-to-host mappings
FlashCopy mappings
Configuring ownership groups
Migrating to ownership groups
Child pools
The following rules apply to child pools that are defined in ownership groups:
Child pools can be assigned to an ownership group when you create a pool or change a
pool.
Users who assign the child pool to the ownership group cannot be defined within that
ownership group.
Resources that are within the child pool inherit the ownership group that is assigned for
the child pool.
Volume groups
Volume groups can be created to manage multiple volumes that are used with Transparent
Cloud Tiering (TCT) support. The following rules apply to volume groups that are defined in
ownership groups:
If the user that is creating the volume group is defined in only one ownership group, the
volume group inherits the ownership group of that user.
If the user is defined in an ownership group but is also defined in multiple user groups, the
volume group inherits the ownership group. The system uses the lowest role that the user
has from the user groups. For example, if a user is defined in two user groups with the roles
of Monitor and Copy Operator, the volume group inherits the ownership group of the user
group with the Monitor role.
Only users not within an ownership group can assign ownership groups when you create a
new volume group or change an existing volume group.
Volumes can be added to a volume group if both the volume and the volume group are
within the same ownership group or if both are not in an ownership group. There are
situations where a volume group and its volumes can belong to different ownership
groups. Volume ownership can be inherited from the ownership group or from one or more
child pools.
The ownership of a volume group does not affect the ownership of the volumes it contains.
If a volume group and its volumes are owned by different ownership groups, then the
owner of the child pool that contains the volumes can change the volume directly. For
example, the owner of the child pool can change the name of a volume within it. The
owner of the volume group can change the volume group itself and indirectly change the
volume, such as deleting a volume from the volume group. Neither the owner of the child
pool nor the owner of the volume group can directly manipulate resources that are not
defined in their ownership group.
FlashCopy consistency groups
FlashCopy consistency groups can be created to manage multiple FlashCopy mappings. The
following rules apply to FlashCopy consistency groups that are defined in ownership groups:
If the user that is creating the FlashCopy consistency group is in only one ownership
group, the FlashCopy consistency group inherits the ownership group of that user.
If the user is defined in an ownership group but is also defined in multiple user groups, the
FlashCopy consistency group inherits the ownership group. The system uses the lowest
role that the user has from the user group.
Only users not within an ownership group can assign ownership groups when a
FlashCopy consistency group is created or changed.
FlashCopy mappings can be added to a consistency group if the volumes in the mapping
and the consistency group are within the same ownership group. You can also add a
FlashCopy mapping to a consistency group if it and all of its dependent resources are not
in an ownership group.
There are situations where a FlashCopy consistency group and its resources can belong
to different ownership groups.
As with volume groups and volumes, the ownership of the consistency group has no
impact on the ownership of the mappings it contains.
User groups
The following rules apply to user groups that are defined in ownership groups:
If the user that is creating the user group is in only one ownership group, the user group
inherits the ownership group of that user.
If the user is in multiple user groups, the user group inherits the ownership group of the
user group with the lowest role.
Only users not within an ownership group can assign an ownership group when a user
group is created or changed.
These resources inherit ownership from the parent resource. A user cannot change the
ownership group of the resource, but can change the ownership group of the parent object.
Users
The following rules apply to users that are defined in ownership groups:
A user inherits the ownership group of the user group to which it belongs.
Users that use Lightweight Directory Access Protocol (LDAP) for remote authentication
can belong to multiple user groups and multiple ownership groups.
Volume-to-host mappings
The following rules apply to volume-to-host mappings that are defined in ownership groups:
Volume-to-host mappings inherit the ownership group of the host or host cluster and
volume in the mapping.
If host or host cluster and volume are within different ownership groups, then the mapping
cannot be assigned an ownership group.
FlashCopy mappings
The following rules apply to FlashCopy mappings that are defined in ownership groups:
FlashCopy mappings inherit the ownership group of both volumes that are defined in the
mapping.
If the volumes are within different ownership groups, then the mapping cannot be assigned
to an ownership group.
Like with FlashCopy consistency groups, it is possible for a consistency group and its
mappings to belong to different ownership groups. However, the ownership of the
consistency group has no impact on the ownership of the mappings that it contains.
Migrating to ownership groups
If you updated your system to a software level that supports ownership groups, you must
reconfigure certain resources if you want to configure ownership groups. An ownership group
defines a subset of users and objects within the system. You can create ownership groups to
further restrict access to specific resources that are defined in the ownership group. Only
users with the Administrator or Security Administrator roles can configure and manage
ownership groups.
To create an ownership group, select Create Ownership Group, as shown in Figure 4-56.
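For reference, the same objects can be created from the CLI. The names below are examples, and the -ownershipgroup parameter of chusergrp is an assumption to verify in the command reference:
mkownershipgroup -name Tenant1 (create an ownership group)
chusergrp -ownershipgroup Tenant1 TenantAdmins (assign an existing user group to the ownership group)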
Local users must provide a password, Secure Shell (SSH) key, or both. Local users are
authenticated through the authentication methods that are configured on the system. If the
local user needs access to the management GUI, a password is needed for the user. If the
user requires access to the CLI through SSH, a password or a valid SSH key file is necessary.
Local users must be part of a user group that is defined on the system. User groups define
roles that authorize the users within that group to a specific set of operations on the system.
Figure 4-59 on page 191 shows the newly created User Group.
Figure 4-59 User Group
The following privileged user group roles exist in IBM Spectrum Virtualize:
Monitor
These users can access all system viewing actions. Monitor role users cannot change the
state of the system or the resources that the system manages. Monitor role users can
access all information-related GUI functions and commands, back up configuration data,
and change their own passwords.
Copy Operator
These users can start and stop all existing FlashCopy, Metro Mirror (MM), and Global Mirror (GM) relationships. Copy
Operator role users can run the system commands that Administrator role users can run
that deal with FlashCopy, MM, and GM relationships.
Service
These users can set the time and date on the system, delete dump files, add and delete
nodes, apply service, and shut down the system. Users can also complete the same tasks
as users in the monitor role.
Administrator
These users can manage all functions of the system except for those functions that
manage users, user groups, and authentication. Administrator role users can run the
system commands that the Security Administrator role users can run from the CLI, except
for commands that deal with users, user groups, and authentication.
Registering a user
After you define your group, you can register a user within this group by clicking Create User
(see Figure 4-60).
Deleting a user
To remove a user account, right-click the user in the All Users list and select Delete, as
shown in Figure 4-61.
Attention: When you click Delete, the user account is deleted immediately. No further
confirmation is requested.
Figure 4-63 Audit log
The following items are also not documented in the audit log:
Commands that fail are not logged.
A result code of 0 (success) or 1 (success in progress) is not logged.
Result object ID of node type (for the addnode command) is not logged.
Views are not logged.
Important: Failed commands are not recorded in the audit log. Commands that are
triggered by IBM Support personnel are recorded with the flag Challenge because they
use challenge-response authentication.
The following options are available for configuration from the Settings menu:
Notifications: The system can use Simple Network Management Protocol (SNMP) traps,
syslog messages, and Call Home emails to notify you and IBM Support Center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
Network: Use the Network window to manage the management IP addresses for the
system, service IP addresses for the nodes, and internet Small Computer Systems
Interface (iSCSI) and FC configurations. The system must support FC or Fibre Channel
over Ethernet (FCoE) connections to your storage area network (SAN).
Security: Use the Security window to configure and manage remote authentication
services.
System: Use the System menu to manage overall system configuration options, such as
licenses, updates, and date and time settings.
Support: Use this option to configure and manage connections, and upload support
packages to the support center.
GUI Preferences: Configure the welcome message that appears after you log in, the
refresh intervals, and the GUI logout timeout.
4.10.1 Notifications
Your IBM Storage System can use SNMP traps, syslog messages, and Call Home email to
notify you and the IBM Support Center when significant events are detected. Any combination
of these notification methods can be used simultaneously.
Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.
SNMP notifications
SNMP is a standard protocol for managing networks and exchanging messages. The system
can send SNMP messages that notify personnel about an event. You can use an SNMP
manager to view the SNMP messages that are sent by your storage system.
To view the SNMP configuration, click the Settings icon and select Notification → SNMP
(see Figure 4-65).
In Figure 4-65, you can view and configure an SNMP server to receive various informational,
error, or warning notifications by setting the following information:
IP Address
The address for the SNMP server.
Community
SNMP Community strings are used only by devices that support the SNMPv1 and
SNMPv2c protocols. SNMPv3 uses username and password authentication, along with an
encryption key. By convention, most SNMPv1 to v2c equipment ships from the factory with
a read-only community string set to “public”.
Server Port
The remote port (RPORT) number for the SNMP server. The RPORT number must be a
value of 1 - 65535.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event. You can use the Syslog window to view the
syslog messages that are sent by the system. To view the Syslog configuration, go to the
System pane and click Settings, and select Notification → Syslog (see Figure 4-66 on
page 199). A domain name server (DNS) server is required to use domain names in syslog.
Figure 4-66 Setting the syslog messages
From this window, you can view and configure a syslog server to receive log messages from
various systems and store them in a central repository by entering the following information:
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Protocol
The transmission protocol. Select UDP or TCP.
Port
Port number of the syslog server.
Event Notifications
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard details about the event.
– The expanded format provides more details about the event.
To create another Syslog Server, select Create Syslog Server, as shown in Figure 4-67.
The syslog messages can be sent in concise message format or expanded message format.
4.10.2 Network
This section describes how to view the network properties of the storage system. The
network information can be obtained by clicking Network, as shown in Figure 4-68 on
page 201.
Figure 4-68 Accessing network information
Management IP addresses
To view the management IP addresses of IBM Spectrum Virtualize, select Settings →
Network, and click Management IP Addresses. The GUI shows the management IP
address by pointing to the network ports, as shown in Figure 4-69.
Service IP information
To view the Service IP information of your IBM Spectrum Virtualize installation, select
Settings → Network, as shown in Figure 4-68. Click the Service IPs option to view the
properties, as shown in Figure 4-70.
Unlike the management IP address, which addresses the system as a whole, the service IP
address connects directly to an individual node canister for service operations. You can
select a node canister of the control enclosure from the drop-down list and then click any of
the ports that are shown in the GUI. The service IP address can be configured to support
IPv4 or IPv6.
Ethernet Connectivity
Ethernet Connectivity displays the connectivity between nodes that are attached through the
Ethernet network, as shown in Figure 4-71.
Ethernet ports
Ethernet ports for each node are at the rear of the system and used to connect the system to
hosts, external storage systems, and to other systems that are part of RC partnerships.
Depending on the model of your system, supported connection types include FC (when the
ports are FCoE-capable), iSCSI, and iSCSI Extensions for RDMA (iSER). iSER connections
use either the RDMA over Converged Ethernet (RoCE)
protocol or the internet Wide Area RDMA Protocol (iWARP). The panel indicates whether a
specific port is being used for a specific purpose and traffic.
You can modify how the port is used by selecting Actions. Select either Modify VLAN,
Modify IP Settings, Modify Remote Copy, Modify iSCSI Hosts, Modify Storage Ports, or
Modify Maximum Transmission Unit to change the use of the port. You can also display the
login information for each host that is logged in to a selected node.
To display this information, select Settings → Network → Ethernet Ports and right-click the
node and select IP Login Information. This information can be used to detect connectivity
issues between the system and hosts, and to improve the configuration of iSCSI hosts to
optimize performance. Select Ethernet Ports for an overview from the menu, as shown in
Figure 4-72. For planning, see Chapter 2, “Planning” on page 71.
Priority flow control
Priority flow control (PFC) is an Ethernet protocol that you can use to select the priority of
different types of traffic within the network. With PFC, administrators can reduce network
congestion by slowing or pausing certain classes of traffic on ports, thus providing better
bandwidth for more important traffic. The system supports PFC on various supported
Ethernet-based protocols on three types of traffic classes: system, host attachment, and
storage traffic. You can configure a priority tag for each of these traffic classes. The priority
tag can be any value 0 - 7. You can set identical or different priority tag values to all these
traffic classes. You can also set bandwidth limits to ensure quality of service (QoS) for these
traffic classes by using the Enhanced Transmission Selection (ETS) setting on the network.
When you plan to configure PFC, follow these guidelines and examples.
To use PFC and ETS, ensure that the following tasks are completed:
Ensure that ports support 10 Gb or higher bandwidth to use PFC settings.
Configure a virtual local area network (VLAN) on the system to use PFC capabilities for
the configured IP version.
Ensure that the same VLAN settings are configured on all entities, including all
switches between the communicating end points.
Configure the QoS values (priority tag values) for host attachment, storage, or system
traffic by running the chsystemethernet command.
To enable priority flow for host attachment traffic on a port, make sure that the host flag is
set to yes on the configured IP on that port.
To enable priority flow for storage traffic on a port, make sure that the storage flag is set to yes
on the configured IP on that port.
On the switch, enable the Data Center Bridging Exchange (DCBx). DCBx enables switch
and adapter ports to exchange parameters that describe traffic classes and PFC
capabilities. For these steps, check your switch documentation for details.
For each supported traffic class, configure the same priority tag on the switch. For
example, if you plan to have a priority tag setting of 3 for storage traffic, ensure that the
priority is also set to 3 on the switch for that traffic type.
If you are planning on using the same port for different types of traffic, ensure that the ETS
settings are configured on the network.
iSCSI information
From the iSCSI pane in the Settings menu, you can display and configure parameters for the
system to connect to iSCSI-attached hosts, as shown in Figure 4-74.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
To change the system name, click the system name and specify the new name.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.
iSCSI aliases (optional)
An iSCSI alias is a user-defined name that identifies the node to the host. Complete the
following steps to change an iSCSI alias:
a. Click an iSCSI alias.
b. Specify a name for it.
Each node has a unique iSCSI name that is associated with two IP addresses. After the
host starts the iSCSI connection to a target node, this IQN from the target node is visible in
the iSCSI configuration tool on the host.
Internet Storage Name Service (iSNS) and Challenge Handshake Authentication Protocol
(CHAP)
You can specify the IP address for the iSNS. Host systems use the iSNS server to manage
iSCSI targets and for iSCSI discovery.
You can also enable CHAP to authenticate the system and iSCSI-attached hosts with the
specified shared secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts that use the same connection. You can set the CHAP secret for the whole
system under the system properties or for each host definition. The CHAP secret must be
identical on the server and in the system or host definition. You can create an iSCSI host
definition without using CHAP.
If your system supports an FC-NVMe connection between nodes and hosts, you can display
details about each side of the connection. To display node details, select the node from the
drop-down menu and select Show Results. You can also display the host details for the
connection or for all hosts and nodes. Use this window to troubleshoot issues between nodes
and hosts that use FC-NVMe connections.
For these connections, the Status column displays the current state of the connection. The
following states for the connection are possible:
Active
Indicates that the connection between the node and host is being used.
Inactive
Indicates that the connection between the node and host is configured, but no FC-NVMe
operations occurred in the last 5 minutes. Because the system sends periodic heartbeat
messages to keep the connection open between the node and the host, it is unusual to see
an inactive state for the connection. However, it can take up to 5 minutes for the state to
change from inactive to active. If the inactive state remains beyond the 5-minute refresh
interval, it can indicate a connection problem between the host and the node. A persistent
connection problem is typically indicated by a reduced node login count or a degraded host
status, which you can view by selecting Hosts → Ports by Host in the management GUI.
Verify these values in the management GUI, and view the related messages by selecting
Monitoring → Events.
Figure 4-77 shows the NVMe Connectivity menu.
Consider FC-NVMe target limits when you plan and configure the hosts. Include the following
points in your plan:
An NVMe host can connect to four NVMe controllers on each system node. The maximum
per node is four with an extra four in failover.
Zone up to four ports in a single host to detect up to four ports on a node. To allow failover
and avoid outages, zone the same or extra host ports to detect an extra four ports on the
second node in the I/O group.
A single I/O group can contain up to 256 FC-NVMe I/O controllers. The maximum number
of I/O controllers per node is 128 plus an extra 128 in failover. Zone a total maximum of 16
hosts to detect a single I/O group. Also, consider that a single system target port may have
up to 16 NVMe I/O controllers.
When you install and configure attachments between the system and a host that runs the
Linux operating system (OS), follow specific guidelines. For more information about these
guidelines, see Linux specific guidelines.
4.10.4 Security
Use the Security option from the Settings menu (as shown in Figure 4-80) to view and
change security settings, authenticate users, and manage secure connections.
Remote Authentication
In the Remote Authentication pane, you can configure remote authentication with LDAP, as
shown in Figure 4-81. By default, the system has local authentication enabled. When you
configure remote authentication, you do not need to configure users on the system or assign
more passwords. Instead, you can use your passwords and user groups that are defined on
the remote service to simplify user management and access, enforce password policies more
efficiently, and separate user management from storage management.
For more information about how to configure remote logon, see IBM Documentation.
Password Policies
In this window, you can define policies for password management and expiration, as shown in
Figure 4-83.
With password policy support, system administrators can set security requirements that are
related to password creation and expiration, timeout for inactivity, and actions after failed
logon attempts. Password policy support allows administrators to set security rules that are
based on their organization's security guidelines and restrictions.
– Require passwords to contain special characters.
– Prevent users from reusing recent passwords.
– Require users to change their password on next login under any of these conditions:
• Their password expired.
• An administrator created new accounts with temporary passwords.
Password expiration and account locking rules:
The administrator can create the following rules for password expiration:
– Set the password expiration limit.
– Set a password to expire immediately.
– Set the number of failed login attempts before the account is locked.
– Set the time for locked accounts.
– Automatic logout for inactivity.
– Locking superuser account access.
Note: Systems that support a dedicated technician port can lock the superuser account.
The superuser account is the default user that can complete installation, initial
configuration, and other service-related actions on the system. If the superuser account is
locked, service tasks cannot be completed.
Secure Communications
To enable or manage secure communications, select the Secure Communications window,
as shown in Figure 4-84. Before you create a request for either type of certificate, ensure that
your current browser does not have restrictions about the type of keys that are used for
certificates.
Some browsers limit the use of specific key-types for security and compatibility issues. Select
Update Certificate to add new certificate details, including certificates that were created and
signed by a third-party certificate authority (CA).
Figure 4-87 Date and Time window
4. Click Save.
Licensed Functions
The base license that is provided with your system includes the use of its basic functions.
However, the extra licenses can be purchased to expand the capabilities of your system.
Administrators are responsible for purchasing extra licenses and configuring the systems
within the license agreement, which includes configuring the settings of each licensed
function on the system.
Differential licensing charges different rates for different types of virtualized storage, which
provides cost-effective management of capacity across multiple tiers of storage. It is based on
the number of storage capacity units (SCUs) that are purchased.
Each SCU corresponds to a different amount of usable capacity based on the type of storage.
Table 4-1 shows the different storage types and the associated SCU ratios.
License         Drive classes                                             SCU ratio
Flash           All flash devices, other than SCM drives                  One SCU equates to 1 TiB of usable Category 1 storage.
Enterprise      10 K or 15 K RPM drives                                   One SCU equates to 1.18 TiB of usable Category 2 storage.
Nearline (NL)   NL Serial Advanced Technology Attachment (SATA) drives    One SCU equates to 4.00 TiB of usable Category 3 storage.
License settings are initially entered in the system initialization wizard and can be changed
later.
3. In the Licensed Functions window, you can view or set the licensing options for the
IBM Storage System for the following elements:
– External Virtualization
You can enter the number of SCU units that are licensed for External Virtualization.
When monitoring External Virtualization license usage, consider the following items:
• The license accounts for usable MDisk capacity. For example, one SCU is used for
one 1 TB Tier0 MDisk when it is assigned to a storage pool, independent of the
amount of data that is actually written on the MDisk.
• An SCU is used for complete and incomplete chunks of MDisk capacity. For example,
if the combined capacity of all NL tier MDisks in your system is 5 TB, two SCUs are
needed: one SCU for the first 4 TB of NL storage, and one SCU for the remaining
1 TB (a full SCU is consumed even though only 1 TB of that 4 TB chunk is used).
• If your system uses enclosure-based licensing, specify the number of enclosures of
external storage systems that are attached to your IBM Storage System. Data can
be migrated from storage systems to your systems that use the External
Virtualization function within 90 days of purchase of the system without purchase of
a license. After 90 days, any ongoing use of the External Virtualization function
requires a license for each enclosure in each external system.
Note: To monitor license usage, run the lslicense CLI command, as described in
IBM Documentation.
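As a minimal sketch, the command takes no mandatory parameters and reports the current
license settings and usage:
IBM_IBM FlashSystem 7200:superuser>lslicense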
Updating your storage system
For more information about the update procedure that uses the GUI, see Chapter 13,
“Reliability, availability, and serviceability, monitoring and logging, and troubleshooting” on
page 793.
VVOL management is enabled in the System section, as shown in Figure 4-91. The NTP
server must be configured before enabling VVOL management. As a best practice, use the
same NTP server for ESXi and your system.
Restriction: You cannot enable VVOL support until the NTP server is configured in the
system.
Volume protection
Volume protection prevents active volumes or host mappings from being deleted inadvertently
if the system detects recent I/O activity.
Note: This global setting is enabled by default on new systems. You can either set this
value to apply to all volumes that are configured on your system, or control whether the
system-level volume protection is enabled or disabled on specific pools.
To prevent an active volume from being deleted unintentionally, administrators can use the
system-wide setting to enable volume protection. They can also specify a period that the
volume must be idle before it can be deleted. If volume protection is enabled and the period is
not expired, the volume deletion fails even if the -force parameter is used.
If system-level protection is enabled but pool-level protection is not enabled, any volumes
in the pool can be deleted even when the setting is configured at the system level. When
you delete a volume, the system verifies whether it is a part of a host mapping, FlashCopy
mapping, or RC relationship. For a volume that contains these dependencies, the volume
cannot be deleted unless the -force parameter is specified on the corresponding remove
commands. However, the -force parameter does not delete a volume if it has recent I/O
activity and volume protection is enabled. The -force parameter overrides the volume
dependencies, not the volume protection setting.
IP quorum
IBM Spectrum Virtualize also supports an IP quorum application. By using an IP-based
quorum application as the quorum device for the third site, no Fibre Channel connectivity to
the third site is required. The IP quorum application is a Java application that runs on hosts
at the third site.
Figure 4-93 IP Quorum settings
2. When you select Download IPv4 Application, you are prompted whether you want to
download the IP quorum application with or without recovery metadata, as shown in
Figure 4-94. IP quorum applications are used to resolve communication problems
between nodes and store metadata, which restores system configuration during failure
scenarios. If you have a third-site quorum disk that stores recovery metadata, you can
download the IP quorum application without the recovery metadata.
4. After you download the IP quorum application, you must save the application on a
separate host or server.
5. If you change the configuration by adding a node, changing a service IP address, or
changing Secure Sockets Layer (SSL) certificates, you must download and install the IP
quorum application again.
6. On the host, you must use the Java command line to initialize the IP quorum application.
On the server or host on which you plan to run the IP quorum application, create a
separate directory that is dedicated to the IP quorum application.
7. Run the ping command on the host server to verify that it can establish a connection with
the service IP address of each node in the system.
8. Change to the folder where the application is, and run the following command:
java -jar ip_quorum.jar
9. To verify that the IP quorum application is installed and active, select Settings →
System → IP Quorum. The new IP quorum application is displayed in the table of
detected applications. The system automatically selects MDisks for quorum disks.
An IP quorum application can also act as the quorum device for systems that are configured
with a single-site or standard topology that does not have any external storage configured.
The IP quorum mode is set to Standard when the system is configured for standard topology.
The quorum mode of Preferred or Winner is available only if the system topology is not set to
Standard. To change the quorum mode for the IP quorum application, select Settings →
System → IP Quorum and set the mode to Preferred or Winner, or run the chsystem
command. This configuration provides a system tie-break capability, automatically resuming
I/O processing if half of the system's nodes or enclosures are inaccessible.
For specific quorum settings, see Figure 4-96.
On systems that support multiple-site topologies, you can specify which site resumes I/O after
a disruption based on the applications that run on the site or other factors like whether the
environment uses a third site for quorum management. For example, you can specify whether
a selected site is preferred for resuming I/O or if the site automatically “wins” in tie-break
scenarios. If only one site runs critical applications, you can configure this site as preferred.
During a disruption, the system delays processing tie-break operations on other sites that are
not specified as preferred. The designated preferred site is therefore more likely to resume
I/O, and critical applications remain online. If the preferred site is the site that is disrupted,
the other site wins the tie-break and continues I/O.
This feature applies only to IP quorum applications. It does not apply to FC-based third-site
quorum management. In stretched configurations or HyperSwap configurations, an IP
quorum application can be used at the third site as an alternative to third-site quorum disks.
No FC connectivity at the third site is required to use an IP quorum application as the quorum
device. If you have a third-site quorum disk, you must remove the third site before you use an
IP quorum application.
Note: The maximum number of IP quorum applications that can be deployed on a single
system is five. Only one instance of the IP quorum application per host or server is
supported. IP quorum applications on multiple hosts or servers can be configured to
provide redundancy. If you have multiple IBM Spectrum Virtualize systems in your
environment, more than one IP quorum application is allowed per host, but each IP
quorum instance must be dedicated to a single IBM Spectrum Virtualize system within the
environment. In addition, the host or server requires available bandwidth to support
multiple IP quorum instances.
Use the network requirements that are shown in “I/O groups” on page 222 to determine
bandwidth and latency needs in these types of environments. The recommended
configuration remains a single IP quorum application per host or server.
The target port mode on the I/O group indicates the current state of port virtualization:
Enabled: The I/O group contains virtual ports that are available to use.
Disabled: The I/O group does not contain any virtualized ports.
Transitional: The I/O group contains physical FC and virtual ports that are being used. You
cannot change the target port mode directly from Enabled to Disabled states, or vice
versa. The target port mode must be in a transitional state before it can be changed to
Disabled or Enabled states.
The system can be in the transitional state for an indefinite period while the system
configuration is changed. However, system performance can be affected because the
number of paths from the system to the host is doubled. To avoid increasing the number of
paths substantially, use zoning or other means to temporarily remove some of the paths
until the state of the target port mode is enabled.
The port virtualization settings of I/O groups are available by selecting Settings → System →
I/O Groups, as shown in Figure 4-97.
You can change the status of the port by right-clicking the I/O group and selecting Change
NPIV Settings, as shown in Figure 4-98.
Transparent Cloud Tiering
TCT is a licensed function that enables volume data to be copied and transferred to cloud
storage. The system supports creating connections to cloud service providers (CSPs) to store
copies of volume data in private or public cloud storage.
With TCT, administrators can move older data to cloud storage to free up capacity on the
system. PiT snapshots of data can be created on the system and then copied and stored on
the cloud storage. An external CSP manages the cloud storage, which reduces storage costs
for the system. Before data can be copied to cloud storage, a connection to the CSP must be
created from the system.
A cloud account is an object on the system that represents a connection to a CSP by using a
particular set of credentials. These credentials differ depending on the type of CSP that is
being specified. Most CSPs require the hostname of the CSP and an associated password,
and some CSPs also require certificates to authenticate users of the cloud storage.
Public clouds use certificates that are signed by well-known CAs. Private CSPs can use a
self-signed certificate or a certificate that is signed by a trusted CA. These credentials are
defined on the CSP and passed to the system through the administrators of the CSP. A cloud
account defines whether the system can successfully communicate and authenticate with the
CSP by using the account credentials.
If the system is authenticated, it can then access cloud storage to copy data to the cloud
storage or restore data that is copied to cloud storage back to the system. The system
supports one cloud account to a single CSP. Migration between providers is not supported.
The system supports connections to various CSPs. Some CSPs require connections over
external networks, and others can be created on a private network.
Each CSP requires different configuration options. The system supports the following CSPs:
IBM Cloud
The system can connect to IBM Cloud, which is a cloud computing platform that combines
platform as a service (PaaS) with infrastructure as a service (IaaS).
OpenStack Swift
OpenStack Swift is a standard cloud computing architecture from which administrators
can manage storage and networking resources in a single private cloud environment.
Standard APIs can be used to build customizable solutions for a private cloud solution.
Amazon Simple Storage Service (Amazon S3)
Amazon S3 provides programmers and storage administrators with flexible and secure
public cloud storage. Amazon S3 is also based on Object Storage standards and provides
a web-based interface to manage, back up, and restore data over the web.
By using this view, you can enable and disable features of your TCT and update the system
information concerning your CSP. This window allows you to set the following options:
CSP
Cloud Object Storage Uniform Resource Locator (URL)
The tenant or the container information that is associated to your Cloud Object Storage
Username of the cloud object account
API key
The container prefix or location of your object
Encryption
Bandwidth
For more information about how to configure and enable TCT, see 10.3, “Transparent Cloud
Tiering” on page 621.
Figure 4-100 shows how to enable Automatic Configuration for Virtualization.
Support Package
If Support Assistance is configured on your systems, you can automatically or manually
upload new support packages to the IBM Support Center to help analyze and resolve
errors on the system.
The menus are available by selecting Settings → Support → Support package, as shown
in Figure 4-103.
For more information about how the Support menu helps with troubleshooting your system or
how to back up your systems, see Chapter 13, “Reliability, availability, and serviceability,
monitoring and logging, and troubleshooting” on page 793.
Figure 4-104 shows the GUI Preferences selection window.
Login message
IBM Spectrum Virtualize enables administrators to configure the welcome banner (login
message). This message is a text message that appears in the GUI login window or at the
CLI login prompt.
The content of the welcome message is helpful when you need to notify users about some
important information about the system, such as security warnings or a location description.
To define and enable the welcome message by using the GUI, edit the text area with the
message content and click Save (see Figure 4-105).
The banner message also appears in the CLI login prompt window, as shown in Figure 4-107.
General Settings
With the General Settings menu, you can refresh the GUI cache, set the low graphics mode
option, and enable advanced pools settings.
Figure 4-108 General GUI Preferences window
A well-chosen name serves both as a label for an object and as a tool for tracking and
managing the object. Choosing a meaningful name is important if you decide to use
configuration backup and restore.
When you choose a name for an object, apply the following naming rules:
Names must begin with a letter.
Important: Do not start names by using an underscore (_) character even though it is
possible. Using an underscore as the first character of a name is a reserved naming
convention that is used by the system configuration restore process.
Object names must be unique within the object type. For example, you can have a volume
that is called ABC and an MDisk called ABC, but you cannot have two volumes that are
called ABC.
The default object name is valid (an object prefix with an integer).
Objects can be renamed to their current names.
To rename the system from the System window, complete the following steps:
1. Select Monitoring → System Hardware - Overview, and click System Actions in the
upper right of the window, as shown in Figure 4-110.
2. The Rename System window opens (see Figure 4-111). Specify a new name for the
system and click Rename.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.
Warning: When you rename your system, the iSCSI name automatically changes
because it includes the system name by default. Therefore, this change needs more
actions on iSCSI-attached hosts.
2. Enter the new name of the node and click Rename (see Figure 4-113).
Warning: Changing the node canister name causes an automatic IQN update and
requires the reconfiguration of all iSCSI-attached hosts.
Adding an enclosure
After the expansion enclosure is properly attached and powered on, complete the following
steps to activate it in the system:
1. In the System window that is available from the Monitoring menu, select SAS Chain
View. Only correctly attached and powered on enclosures appear in the window, as shown
in Figure 4-114 on page 233. The new enclosure is shown as unmanaged, which means
it is not part of the system.
Figure 4-114 Newly detected expansion enclosure
2. Select the + next to the enclosure that you want to add, or click Add Enclosure at the top.
These buttons appear only if there is an unmanaged enclosure that is eligible to be added
to the system. After you select one of them, a window opens in which you select the
enclosure that you want to add. Expansion enclosures that are directly cabled do not need
to be selected, as shown in Figure 4-115.
Removing an enclosure
The enclosure removal procedure includes logically detaching the enclosure from the system
by using the GUI and physically unmounting it from the rack. The IBM Storage System guides
you through this process. Complete the following steps:
1. In the System window that is available from the Monitoring menu, select > next to the
enclosure that you want to remove. The Enclosure Details pane opens. You can then click
Enclosure Actions and select Remove, as shown in Figure 4-117.
2. The system prompts you to remove the enclosure. All disk drives in the removed enclosure
must be in the Unused state. Otherwise, the removal process fails (see Figure 4-118 on
page 235).
234 Implementing the IBM FlashSystem with IBM Spectrum Virtualize V8.4
Figure 4-118 Confirming the removal
3. After the enclosure is logically removed from the system (set to the Unmanaged state), the
system reminds you about the steps that are necessary for physical removal, such as
power off, uncabling, dismantling from the rack, and secure handling (see Figure 4-119).
As part of the enclosure removal process, see your company security policies about how to
handle sensitive data on removed storage devices before they leave the secure data center.
Most companies require data to be encrypted or logically shredded.
3. Select Web Server (Tomcat). Click Restart, and the web server that runs the GUI
restarts. This task is a concurrent action, but the cluster GUI is unavailable while the
server is restarting (the Service Assistant and CLI are not affected). After 5 minutes, check
to see whether GUI access was restored.
Chapter 5. Storage pools
Storage pools aggregate internal and external capacity and provide the containers in which
you can create volumes. Storage pools make it easier to dynamically allocate resources,
maximize productivity, and reduce costs.
You can configure storage pools through the management GUI, either during initial
configuration or later. Alternatively, you can configure the storage to your own requirements
by using the command-line interface (CLI).
MDisks can either be redundant array of independent disks (RAID) arrays that are created by
using internal storage, such as drives and flash modules, or logical units (LUs) that are
provided by external storage systems. A single storage pool can contain both types of
MDisks, but a single MDisk can be part of only one storage pool. MDisks themselves are not
visible to host systems.
Figure 5-1 provides an overview of how storage pools, MDisks, and volumes are related.
All MDisks in a pool are split into chunks of the same size, which are called extents. Volumes
are created from the set of available extents in the pool. The extent size is a property of the
storage pool and cannot be changed after the pool is created. The choice of extent size
affects the total amount of storage that can be managed by the system.
It is possible to add MDisks to an existing pool to provide more usable capacity in the form of
extents. The system automatically balances volume extents between the MDisks to provide
the best performance to the volumes. It is also possible to remove extents from the pool by
deleting an MDisk. The system automatically migrates extents that are in use by volumes to
other MDisks in the same pool to make sure that the data on the extents is preserved.
A storage pool represents a failure domain. If one or more MDisks in a pool become
inaccessible, all volumes (except for image mode volumes) in that pool are affected. Volumes
in other pools are unaffected.
The system supports standard pools and Data Reduction Pools (DRPs). Both support parent
pools and child pools.
Child pools are created from existing capacity that is assigned to a parent pool instead of
being created directly from MDisks. When the child pool is created from a standard pool, the
capacity for a child pool is reserved from the parent pool. This capacity is no longer reported
as available capacity of the parent pool. In terms of volume creation and management, child
pools are similar to parent pools.
DRPs use a set of techniques that can be used to reduce the amount of usable capacity that
is required to store data, such as compression and deduplication. Data reduction can
increase storage efficiency and performance, and reduce storage costs, especially for flash
storage. DRPs automatically reclaim capacity that is no longer needed by host systems. This
reclaimed capacity is given back to the pool as usable capacity and can be reused by other
volumes. Child pools that are created from DRPs are quotaless and can use the entire parent
pool capacity.
For more information about DRP planning and implementation, see Chapter 9, “Advanced
features for storage efficiency” on page 509 and Introduction and Implementation of Data
Reduction Pools and Deduplication, SG24-8430.
You manage storage pools either in the Pools window of the GUI or by using the CLI. To
access the Pools pane, select Pools → Pools, as shown in Figure 5-2.
The window lists all storage pools and their major parameters. If a storage pool has child
pools, they are also shown.
Example 5-1 The lsmdiskgrp output (some columns are not shown)
IBM_IBM FlashSystem_7200:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
0 NVMe-Pool0 online 13 76 6.71TB 2048 634.00GB
2 FCM-Pool online 1 71 178.81TB 2048 170.04TB
Both alternatives open the dialog box that is shown in Figure 5-4.
2. Select the Data reduction check box to create a DRP. Leaving it clear creates a standard
storage pool.
Note: DRPs require careful planning and sizing. Limitations and performance
characteristics of DRPs are different from standard pools.
A standard storage pool that is created by using the GUI has a default extent size of 1 GB.
DRPs have a default extent size of 4 GB. The size of the extents is selected at creation
time and cannot be changed later. The extent size controls the maximum total storage
capacity that is manageable per system (across all pools). For DRPs, the extent size also
controls the maximum capacity after reduction in the pool itself.
For more information about the differences between standard pools and DRPs and for
extent size planning, see Chapter 2, “Planning” on page 71 and Chapter 9, “Advanced
features for storage efficiency” on page 509.
Note: Do not create DRPs with small extent sizes. For more information, see this IBM
Support alert.
When creating a standard pool, you cannot change the extent size by using the GUI by
default. If you want to specify a different extent size, enable this option by selecting
Settings → GUI Preferences → General and checking Advanced pool settings, as
shown in Figure 5-5. Click Save.
Figure 5-6 Creating a standard pool with Advanced pool settings selected
If an encryption license is installed and enabled, you can select whether the storage pool
is encrypted, as shown in Figure 5-7. The encryption setting of a storage pool is selected
at creation time and cannot be changed later. By default, if encryption is enabled,
encryption is selected. For more information about encryption and encrypted storage
pools, see Chapter 12, “Encryption” on page 735.
Naming rules: When you choose a name for a pool, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume that is named ABC and a storage pool that is called ABC, but not two
storage pools that are both called ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed at a later stage.
The new pool is created and is included in the list of storage pools with zero bytes, as shown
in Figure 5-8.
To perform this task by using the CLI, run the mkmdiskgrp command. The only required
parameter is the extent size, which is specified by the -ext parameter and must have one of
the following values: 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, or 8192 (MB). To create a
DRP, specify -datareduction yes. The minimum extent size of DRPs is 1024, and attempting
to use a smaller extent size sets the extent size to 1024.
In Example 5-2, the command creates a DRP that is named Pool0 with no MDisks in it.
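A minimal sketch of such a command, assuming the default DRP extent size of 4096 MB,
looks as follows (the pool ID that is returned depends on your configuration):
IBM_IBM FlashSystem 7200:superuser>mkmdiskgrp -name Pool0 -ext 4096 -datareduction yes
MDisk Group, id [0], successfully created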
Arrays are assigned to storage pools at creation time. Arrays cannot exist outside of a storage
pool, and they cannot be moved between storage pools. The only option is to delete an array
by removing it from its pool and then to re-create it within the new pool.
MDisks are managed by using the MDisks by Pools window. To access the MDisks by Pools
window, select Pools → MDisks by Pools, as shown in Figure 5-9.
The window lists all the MDisks that are available in the system under the storage pool to
which they belong. Unassigned MDisks are listed separately at the top. Both arrays and
external MDisks are listed. For more information about operations with array MDisks, see 5.2,
“Working with internal drives and arrays” on page 257. To implement a solution with external
MDisks, see 5.3, “Working with external controllers and MDisks” on page 286.
To list all MDisks that are visible by the system by using the CLI, run the lsmdisk command
without any parameters. If required, you can filter output to include only external or only array
type MDisks.
Create Child Pool window
To create a child storage pool, click Create Child Pool. For more information about child
storage pools and a detailed description of this wizard, see 5.1.4, “Child pools” on page 252.
It is not possible to create a child pool from an empty pool.
Rename window
To modify the name of a storage pool, click Rename. Enter the new name and click Rename
in the dialog window.
To do this task by using the CLI, run the chmdiskgrp command. Example 5-3 shows how to
rename Pool2 to StandardStoragePool. If successful, the command returns no output.
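A minimal sketch of the rename command is as follows:
IBM_IBM FlashSystem 7200:superuser>chmdiskgrp -name StandardStoragePool Pool2
IBM_IBM FlashSystem 7200:superuser>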
Note: The warning is generated only the first time that the threshold is exceeded by the
used capacity in the storage pool.
To modify the threshold, select Modify Threshold and enter the new value. The default
threshold is 80%. To disable warnings, set the threshold to 0%.
The threshold is visible in the pool properties and indicated by a red bar, as shown in
Figure 5-11.
Example 5-4 shows the warning threshold set to 750 GB for FCM-Pool.
Example 5-4 Changing the warning threshold level by using the CLI
IBM_IBM FlashSystem 7200:superuser>chmdiskgrp -warning 750 -unit gb FCM-Pool
IBM_IBM FlashSystem 7200:superuser>
Easy Tier migrates storage only at a slow rate, which might not keep up with changes to the
compression ratio within the tier. This situation might result in the tier running out of space,
which can cause a loss of access to data until the condition is resolved.
Therefore, the user might specify the maximum overallocation ratio for pools that contain
self-compressing arrays to prevent out-of-space scenarios, as shown in Figure 5-12.
The value acts as a multiplier of the physically available space in self-compressing arrays.
The allowed values are a percentage in the range of 100% (the default) to 400%, or off. The default setting
prevents overallocation of new pools. Setting the value to off disables this feature.
On the CLI, run the chmdiskgrp command with the -etfcmoverallocationmax parameter to
set a percentage or use off to disable the limit.
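For example, assuming that the value is given as a plain percentage number, a sketch of the
command that limits overallocation of FCM-Pool to 200% is:
IBM_IBM FlashSystem 7200:superuser>chmdiskgrp -etfcmoverallocationmax 200 FCM-Pool
IBM_IBM FlashSystem 7200:superuser>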
For more information and a more detailed explanation, see Chapter 9, “Advanced features for
storage efficiency” on page 509.
Add Storage to Pool window
This action starts the configuration wizard, which assigns storage to the pool, as shown in
Figure 5-13.
If Internal Storage is chosen, the system guides you through array MDisk creation by using
internal drives. If External Storage is selected, the system guides you through the selection
of external storage MDisks. If no external storage is attached or the External Virtualization
license is zero, the External Storage option is not shown. You can add internal and external
storage for a single pool in the configuration dialog.
Figure 5-14 Add Storage dialog with internal and external storage selection
You can use storage pool throttles to avoid overwhelming the back-end storage. Only parent
pools support throttles because only parent pools contain MDisks from internal or external
back-end storage. For volumes in child pools, the throttle of the parent pool is applied.
You can define a throttle for input/output operations per second (IOPS), bandwidth, or both,
as shown in Figure 5-15:
IOPS limit indicates the limit of configured IOPS (for both reads and writes combined).
Bandwidth limit indicates the bandwidth limit in megabytes per second (MBps). You can
also specify the limit in gigabits per second (Gbps) or terabytes per second (TBps).
If more than one throttle applies to an I/O operation, the lowest and most stringent throttle is
used. For example, if a throttle of 100 MBps is defined on a pool and a throttle of 200 MBps is
defined on a volume of that pool, the I/O operations are limited to 100 MBps.
The throttle limit is a per node limit. For example, if a throttle limit is set for a volume at
100 IOPS, each node on the system that has access to the volume allows 100 IOPS for that
volume. Any I/O operation that exceeds the throttle limit is queued at the receiving nodes. The
multipath policies on the host determine how many nodes receive I/O operations and the
effective throttle limit.
If a throttle exists for the storage pool, the dialog box that is shown in Figure 5-15 also
shows a Remove button that is used to delete the throttle.
To set a storage pool throttle by using the CLI, run the mkthrottle command. Example 5-5
shows a storage pool throttle, named iops_bw_limit, that is set to 3 MBps and 1000 IOPS
on Pool0.
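A sketch of such a command follows; the -type, -mdiskgrp, -bandwidth, and -iops parameter
names are assumptions based on typical mkthrottle usage, and the bandwidth value is given
in MBps:
IBM_IBM FlashSystem 7200:superuser>mkthrottle -type mdiskgrp -mdiskgrp Pool0 -name iops_bw_limit -bandwidth 3 -iops 1000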
To remove a throttle by using the CLI, run the rmthrottle command. The command uses the
throttle ID or throttle name as an argument, as shown in Example 5-6. The command returns
no feedback if it runs successfully.
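A minimal sketch, using the throttle name from the previous example:
IBM_IBM FlashSystem 7200:superuser>rmthrottle iops_bw_limit
IBM_IBM FlashSystem 7200:superuser>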
To see a list of created throttles by using the CLI, run the lsthrottle command. When you
run the command without arguments, it displays a list of all throttles on the system. To list only
storage pool throttles, specify the -filtervalue throttle_type=mdiskgrp parameter.
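For example, to list only the storage pool throttles:
IBM_IBM FlashSystem 7200:superuser>lsthrottle -filtervalue throttle_type=mdiskgrp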
To list storage pool resources by using the CLI, run the lsmdisk command. You can filter the
output to display MDisk objects that belong only to a single MDisk group (storage pool), as
shown in Example 5-7.
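For example, assuming a pool named Pool0, a sketch of a filtered invocation is as follows
(the mdisk_grp_name filter field is assumed here):
IBM_IBM FlashSystem 7200:superuser>lsmdisk -filtervalue mdisk_grp_name=Pool0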
If there are volumes in the pool, the Delete option is inactive and cannot be selected. Delete
the volumes or migrate them to another storage pool before proceeding. For more information
about volume migration and volume mirroring, see Chapter 6, “Volumes” on page 299.
To delete a storage pool by using the CLI, run the rmmdiskgrp command.
Note: Be extremely careful when you run the rmmdiskgrp command with the -force
parameter. Unlike the GUI, it does not prevent you from deleting a storage pool with
volumes. This command deletes all volumes and host mappings on a storage pool, and
they cannot be recovered.
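As a minimal sketch, deleting an empty pool named StandardStoragePool without the -force
parameter looks as follows; the command typically returns no feedback if it succeeds:
IBM_IBM FlashSystem 7200:superuser>rmmdiskgrp StandardStoragePool
IBM_IBM FlashSystem 7200:superuser>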
To display detailed information about the properties by using the CLI, run the lsmdiskgrp
command with a storage pool name or ID as a parameter, as shown in Example 5-8.
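For example, to display the details of Pool0:
IBM_IBM FlashSystem 7200:superuser>lsmdiskgrp Pool0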
Unlike a parent pool, a child pool does not contain MDisks. Its capacity is provided by the
parent pool. A child pool from a standard parent pool is the child_thick type. The capacity of a
child pool from a standard pool is set at creation time, but can be modified later
nondisruptively. The capacity must be a multiple of the parent pool extent size and smaller
than the free capacity of the parent pool. Capacity that is assigned to a child pool of the
child_thick type is taken away from the capacity of the parent pool.
A child pool from a data reduction parent pool is the child_quotaless type. It is not possible to
set the capacity for a child pool of the child_quotaless type. A child pool of the child_quotaless
type can use the whole capacity of the parent pool due to the nature of DRPs.
Creating a child pool within another child pool also is not possible.
Child pools of the child_thick type are useful when the capacity that is allocated to a specific
set of volumes must be controlled. For example, child pools of the child_thick type can be
used with VMware vSphere Virtual Volumes (VVOLs). Storage administrators can restrict the
access of VMware administrators to only a part of the storage pool and prevent volume
creation from affecting the rest of the parent storage pool.
Ownership groups can be used to restrict access to storage resources to a specific set of
users, as described in Chapter 11, “Ownership groups” on page 723.
Child pools of the child_thick type also can be useful when strict control over thin-provisioned
volume expansion is needed. For example, you might create a child pool with no volumes in it
to act as an emergency set of extents so that if the parent pool ever runs out of free extents,
you can use the ones from the child pool.
On systems with encryption enabled, child pools of the child_thick type can be created to
migrate existing volumes in a non-encrypted pool to encrypted child pools. When you create a
child pool of the child_thick type after encryption is enabled, an encryption key is created for
the child pool even when the parent pool is not encrypted. You can then use volume mirroring
to migrate the volumes from the non-encrypted parent pool to the encrypted child pool.
Encrypted child pools of the quotaless type can be created only if the parent pool is
encrypted. The data reduction child pool inherits an encryption key from the parent pool.
Child pools inherit most properties from their parent pools, and these properties cannot be
changed. The inherited properties include:
Extent size
Easy Tier setting
Also, a child data reduction pool inherits the encryption setting and encryption key from a
parent data reduction pool.
Figure 5-19 Creating a child pool
2. When the dialog box opens, enter the name of the child pool and click Create. Figure 5-20
shows the dialog for pool type child_quotaless.
3. Figure 5-21 shows the Create Child Pool for Pool0 dialog box. For pool type child_thick,
enter the pool capacity.
To create a child pool by using the CLI, run the mkmdiskgrp command. You must specify the
parent pool for your new child pool and its size for pool type child_thick, as shown in
Example 5-9. The size is in megabytes by default (unless the -unit parameter is used) and
must be a multiple of the parent pool’s extent size. In this case, it is 100 * 1024 MB = 100 GB.
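A sketch of such a command follows; the -parentmdiskgrp parameter name is an
assumption, and the size of 102400 MB corresponds to the 100 GB child pool that is
mentioned above:
IBM_IBM FlashSystem 7200:superuser>mkmdiskgrp -parentmdiskgrp Pool0 -name Pool0_child0 -size 102400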
2. Select Resize to increase or decrease the capacity of the child storage pool type
child_thick, as shown in Figure 5-24 on page 255. Enter the new pool capacity and click
Resize.
Figure 5-24 Resizing a child pool
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a
child pool must be larger than the capacity that is used by its volumes.
When the child pool is shrunk, the system resets the warning threshold and issues a
warning if the threshold is reached.
To rename and resize a child pool by using the CLI, run the chmdiskgrp command.
Example 5-10 renames the child pool Pool0_child0 to Pool0_child_new and reduces its size
to 44 GB. If successful, the command returns no feedback.
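A sketch of such a command sequence follows; the pool names are carried over from the
description, and 44 GB is expressed as 45056 MB:
IBM_IBM FlashSystem 7200:superuser>chmdiskgrp -name Pool0_child_new Pool0_child0
IBM_IBM FlashSystem 7200:superuser>chmdiskgrp -size 45056 Pool0_child_new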
Deleting a child pool is a task that is like deleting a parent pool. As with a parent pool, the
Delete action is disabled if the child pool contains volumes, as shown in Figure 5-25.
After you delete a child pool type child_thick, the extents that it occupied return to the parent
pool as free capacity.
To delete a child pool by using the CLI, run the rmmdiskgrp command.
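For illustration, the following command deletes an empty child pool by name (the name is a
placeholder):
IBM_IBM FlashSystem 7200:superuser>rmmdiskgrp Pool0_child_new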
The system supports migration of volumes between child pools within the same parent pool
or migration of a volume between a child pool and its parent pool. Migrations between a
source and target child pool with different parent pools are not supported. However, you can
migrate the volume from the source child pool to its parent pool. Then, the volume can be
migrated from the parent pool to the parent pool of the target child pool. Finally, the volume
can be migrated from the target parent pool to the target child pool.
During a volume migration within a parent pool (between a child and its parent or between
children with the same parent), there is no data movement, but there are extent
reassignments.
Volume migration between a child storage pool and its parent storage pool can be performed
from the Volumes window. Right-click a volume and select the migration action to move it
into a suitable pool.
In the example in Figure 5-27, the volume child_volume was created in child pool
Pool0_child0. The child pools appear exactly like the parent pools in the Volumes by Pool
window.
For more information about the CLI commands for migrating volumes to and from child pools,
see Chapter 6, “Volumes” on page 299.
5.1.5 Encrypted storage pools
The system supports two types of encryption: hardware encryption and software encryption.
This pane gives an overview of the internal drives in the system. To display all drives that are
managed in the system, including all I/O groups and expansion enclosures, click All Internal
Storage in the Drive Class filter.
You can find information about the capacity allocation of each drive class in the upper right, as
shown in Figure 5-28 on page 257:
Assigned to MDisks Shows the storage capacity of the selected drive class that
is assigned to MDisks.
Assigned to Spares Shows the storage capacity of the selected drive class that
is used for spare drives.
Available Shows the storage capacity of the selected drive class that
is not yet assigned to either MDisks or Spares.
Total Written Capacity Limit Shows the total amount of storage capacity of the drives in
the selected class.
If All Internal Storage is selected under the Drive Class filter, the values that are shown refer
to the entire internal storage.
The percentage bar indicates how much of the total written capacity limit is assigned to
MDisks and spares. MDisk capacity is represented by the solid portion, and spare capacity by
the shaded portion of the bar.
To list all internal drives that are available in the system, run the lsdrive command. If needed,
you can filter the output to list only drives that belong to a particular enclosure, that have a
specific capacity, or that match other attributes. For an example, see Example 5-11.
Example 5-11 The lsdrive output (some lines and columns are not shown)
IBM_IBM FlashSystem 7200:superuser>lsdrive
id status error_sequence_number use tech_type capacity mdisk_id
0 online member tier0_flash 20TB 32
1 online member tier0_flash 744.21GB 0
2 online member tier0_flash 744.21GB 0
3 online member tier0_flash 20TB 32
4 online member tier0_flash 20TB 32
5 online member tier0_flash 20TB 32
<...>
The drive list shows the Status of each drive. A drive can be Online, which means that the
drive is fully accessible by both nodes in the I/O group. A Degraded drive is only accessible by
one of the two nodes. A drive status of Offline indicates that the drive is not accessible by
any of the nodes, for example, because it was physically removed from the enclosure or it is
unresponsive or failing.
The drive Use attribute describes the role that it plays in the system. The values and meanings
are:
Unused The system has access to the drive but was not told to take ownership
of it. Most actions on the drive are not permitted. This state is a safe
state for newly added hardware.
Candidate The drive is owned by the system, and is not part of the RAID
configuration. It is available to be used in an array MDisk.
Spare The drive is a hot spare protecting nondistributed (traditional) RAID
arrays. If any member of such an array fails, a spare drive is taken and
becomes a Member for rebuilding the array.
Member The drive is part of a RAID array.
Failed The drive is owned by the system and was diagnosed as faulty. It is
waiting for a service action.
The Use attribute can change to different values, but not all changes are valid, as shown in
Figure 5-29.
The system automatically sets the Use to Member when it creates a RAID array. Changing Use
from Member to Failed is possible only if the array does not depend on the drive, and
additional confirmation is required when taking a drive offline when no spare is available.
Changing a Candidate drive to Failed is possible only by using the CLI.
Note: To start configuring arrays in a new system, all Unused drives must be configured as
Candidates. The Initial Setup or Assign Storage wizards do that automatically.
A number of actions can be performed on internal drives. To perform any action, select one or
more drives and right-click the selection, as shown in Figure 5-30. Alternatively, select the
drives and click Actions.
The actions that are available in the drop-down menu depend on the status and usage of the
drive or drives that are selected. Some actions can be performed only on drives in a certain
state, and some are possible only when a single drive is selected.
Action: Take Offline
Figure 5-32 shows the message that appears if the action results in a degraded array status.
The system prevents you from taking the drive offline if taking the drive offline results in a loss
of access to data.
If a spare is available and the drive is taken offline, the associated MDisk remains Online and
the RAID array starts a rebuild by using a suitable spare. If no spare is available and the drive
is taken offline, the status of the associated MDisk becomes Degraded. The status of the
storage pool to which the MDisk belongs becomes Degraded too.
To take a drive offline by using the CLI, run the chdrive command, as shown in
Example 5-12. This command returns no feedback. Use the -allowdegraded parameter to set
a member drive offline even if no suitable spare is available.
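As a sketch, the two forms of the command for a hypothetical drive 2 are shown here; the
second form includes the override that allows the array to become degraded when no
suitable spare is available:
IBM_IBM FlashSystem 7200:superuser>chdrive -use failed 2
IBM_IBM FlashSystem 7200:superuser>chdrive -use failed -allowdegraded 2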
The system prevents you from taking a drive offline if the RAID array depends on that drive
and doing so would result in a loss of access to data, as shown in Figure 5-34.
Figure 5-34 Taking a drive offline fails if it would result in a loss of access to data
Action: Mark as
Select Mark as to change the use that is assigned to the drive, as shown in Figure 5-35. The
list of available options depends on the current drive use and state. For more information, see
the allowed state transitions that are shown in Figure 5-29.
Note: Marking a compressed drive to the Candidate role causes the drive to perform a
format. The format must complete before the drive goes online and is available for use.
Action: Identify
Select Identify to turn on the light-emitting diode (LED) light of the enclosure slot of the
selected drive. With this action, you can easily find a drive that must be replaced or that you
want to troubleshoot. A dialog box opens so that you can confirm that the LED was turned on,
as shown in Figure 5-36.
This action makes the amber LED of the selected drive flash (turn on and off continuously) so
that you can identify it.
Click Turn LED Off when you are finished. The LED returns to its initial state.
On the CLI, run the chenclosureslot command to turn on the LED. Example 5-14 shows the
commands to find the enclosure and slot for drive 1 and to turn the identification LED of slot 4
in enclosure 1 on and off.
Example 5-14 Changing a slot LED to identification mode by using the CLI
IBM_IBM FlashSystem 7200:superuser>lsdrive 1
id 1
<...>
enclosure_id 1
slot_id 4
<...>
IBM_IBM FlashSystem 7200:superuser>chenclosureslot -identify yes -slot 4 1
IBM_IBM FlashSystem 7200:superuser>lsenclosureslot -slot 4 1
enclosure_id 1
slot_id 4
fault_LED slow_flashing
powered yes
drive_present yes
drive_id 1
IBM_IBM FlashSystem 7200:superuser>chenclosureslot -identify no -slot 4 1
Action: Upgrade
Select Upgrade to update the drive firmware, as shown in Figure 5-37. You can choose to
update an individual drive, selected drives, or all the drives in the system.
For information about updating the drive firmware, see Chapter 13, “Reliability, availability,
and serviceability, monitoring and logging, and troubleshooting” on page 793.
All listed volumes go offline if all selected drives go offline concurrently. This situation does
not mean that volumes go offline if a single drive or two of the three drives go offline.
Whether there are dependent volumes depends on the redundancy of the RAID array at a
certain point. The redundancy is based on the RAID level, state of the array, and state of the
other member drives in the array. For example, it takes three or more drives going offline
concurrently in a healthy RAID 6 array to have dependent volumes.
Note: A lack of dependent volumes does not imply that there are no volumes that use the
drive. Volume dependency shows the list of volumes that become unavailable if the drive or
the set of selected drives becomes unavailable.
You can get the same information by running the lsdependentvdisks command. Use the
parameter -drive with the list of drive IDs that you are checking, separated with a colon (:), as
shown in Example 5-15.
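A sketch of such a query for three hypothetical drive IDs follows; the output lists the
dependent volumes in the same format as Example 5-23:
IBM_IBM FlashSystem 7200:superuser>lsdependentvdisks -drive 0:1:2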
Action: Properties
Select Properties to view more information about the drive, as shown in Figure 5-39.
You can find a short description of each drive property by hovering your cursor over it and
clicking [?]. You can also display drive slot details by clicking the Drive Slot tab.
To get all available information about the particular drive, run the lsdrive command with the
drive ID as the parameter. To get slot information, run the lsenclosureslot command.
RAID 0 does not provide any redundancy. A single drive failure in a RAID 0 array causes
data loss.
In a TRAID approach, data is spread among up to 16 drives in an array. There are separate
spare drives that do not belong to an array, and they can potentially protect multiple arrays.
When one of the drives within the array fails, the system rebuilds the array by using a spare
drive.
For example, in RAID 10 all data is read from the mirrored copy and then written to a spare
drive. The spare becomes a member of the array when the rebuild starts. After the rebuild is
complete and the failed drive is replaced, a member exchange is performed to add the
replacement drive to the array and restore the spare to its original state so it can act as a hot
spare again for another drive failure in the future.
During a rebuild of a TRAID array, writes are submitted to a single spare drive, which can
become a bottleneck and might impact I/O performance. With increasing drive capacity, the
rebuild time increases significantly. Additionally, the probability of a second failure during the
rebuild process also becomes more likely. Outside of any rebuild activity, the spare drives are
idle and do not process I/O requests for the system.
DRAID distributes spare capacity across all drives in the array instead of using dedicated
spare drives. Using this approach, DRAID reduces the rebuild time, the impact on I/O performance during
the rebuild, and the probability of a second failure during the rebuild. Like TRAID, a DRAID 6
array can tolerate two drive failures and survive. If another drive fails in the same array before
the array is rebuilt, the MDisk and the storage pool go offline. In other words, DRAID has the
same redundancy characteristics as TRAID.
A rebuild after a drive failure reconstructs the data on the failed drive and distributes it across
all drives in the array by using a rebuild area. After the failed drive is replaced, a copyback
process copies the data to the replacement drive and frees the rebuild area so that it can be
used for another drive failure in the future.
Table 5-1 on page 267 shows the summary of supported drives, array types, and RAID levels.
Note: DRAID 1 is supported only on IBM FlashSystem 7200, IBM FlashSystem 9200, or
newer platforms.
Table 5-1 Summary of supported drives, array types, and RAID levels
Drive type Non-DRAID DRAID
Note: DRAID 1 is not recommended for FCM drives larger than 8 TB. You cannot use the
GUI to create DRAID 1 arrays on XL FCM drives (80 TB).
Understanding DRAID 6
Figure 5-40 shows an example of a DRAID 6 with 10 disks. The capacity on the drives is
divided into many packs. The reserved spare capacity (marked in yellow) is equivalent to two
spare drives, but the capacity is distributed across all of the drives (depending on the pack
number) to form two rebuild areas. The data is striped like a TRAID array, but the number of
drives in the array can be larger than the stripe width.
Figure 5-40 DRAID 6 (for simplification, not all packs are shown)
Figure 5-41 Single drive failure with DRAID 6 (for simplification, not all packs are shown)
After the rebuild completes, the array can sustain two more drive failures even before drive 3
is replaced. If no rebuild area is available to perform a rebuild after another drive failure, the
array becomes Degraded until a rebuild area is available again and the rebuild can start.
After drive 3 is replaced, a copyback process copies the data from the occupied rebuild area
to the replacement drive to empty the rebuild area and make sure that it can be used again for
a new rebuild.
DRAID addresses the main disadvantages of TRAID while providing the same redundancy
characteristics:
In a drive failure, data is read from many drives and written to many drives. This process
minimizes the impact on performance during the rebuild process. Also, it reduces rebuild
time. Depending on the DRAID configuration and drive sizes, the rebuild process can be
up to 10 times faster.
Spare space is distributed throughout the array, which means more drives are processing
I/O and no dedicated spare drives are idling.
DRAIDs use all the node CPU cores to improve performance, especially in configurations
with few arrays.
Here is the minimum number of drives that are needed to build a DRAID array:
Two drives for a DRAID 1 array
Six drives for a DRAID 6 array
Four drives for a DRAID 5 array
Understanding DRAID 1
DRAID 1 can contain only 2 - 6 drives initially and can be expanded up to 16 drives of the
same capacity. DRAID 1 arrays consist of two mirrored strips that are distributed across all
member drives. Unlike DRAID 5 and 6, DRAID 1 does not contain any parity strips.
Figure 5-42 shows an example of a DRAID that is configured as a DRAID 1 with two member
drives and no rebuild area. Both of the drives in the array are active.
The callouts in Figure 5-42 indicate: (1) the minimum of two active drives and the stripe width,
and (2) a pack, with a depth of two strips.
Figure 5-44 on page 271 shows the rebuild-in-place process, with the new data being copied
directly into the replaced drive. The callouts in the figure indicate: (1) an active drive, and
(2) data being copied directly into the replaced drive.
Figure 5-44 Rebuild-in-place process on DRAID 1
DRAID 1 array with three or more drives and a single rebuild area
The rebuild process starts if a member drive fails in a DRAID 1 array with three or more
drives. To recover data, the data is read from multiple drives. The recovered data is written to
the rebuild areas, which are distributed across all of the drives in the array. All drives are
involved in the rebuild process, which reduces the rebuild time. After the drive is replaced, the
copyback process starts. Data is copied from the rebuild area to the original location. SSDs,
SAS flash drives, NVMe flash drives, NVMe FCM drives, SCMs with maximum capacity of
8 TB, and hard disk drives (HDDs) (up to 8 TB) support DRAID 1 with 3 - 16 members. An
array of FCM XL drives (80 TB) is limited to nine drives and cannot be created through the
GUI.
Figure 5-45 Distributed RAID 1 array with five members and a single rebuild area
Figure 5-46 shows that drive 2 failed, which triggers the rebuild process from all drives to the
rebuild areas.
Figure 5-46 Single drive failure in distributed RAID 1 triggers a rebuild area
Figure 5-47 on page 273 shows the copyback process after a drive replacement.
Figure 5-47 Copyback process from a rebuild area to the original location.
Note: As a best practice, use DRAID 6 whenever possible. DRAID technology dramatically
reduces rebuild times, decreases the exposure of volumes to the extra load of recovering
redundancy, and improves performance. For six drives or fewer, DRAID 1 is the
recommended DRAID type on all platforms that support it.
To create a RAID array from internal storage, select Pools → Pools, then Actions, and then
Add Storage, or right-click the storage pool to which you want to add arrays and select Add
Storage, as shown in Figure 5-48.
This action opens the configuration box that is shown in Figure 5-50. If any of the drives have
the Unused role, reconfigure them as Candidates so that they can be included in the configuration.
If Internal Storage is chosen, the system guides you through array MDisk creation. If
External Storage is selected, the system guides you through the selection of external
storage. Select the pool from the drop-down menu if no pool is selected yet. The summary
view in the right pane shows the Current Usable Capacity of the selected pool. After you
define either internal or external storage or both, click Add storage.
Internal storage
Select Define Array, and choose the drive class for the array from the drop-down menu. Only
drive classes for which candidate drives exist are displayed. The system automatically
recommends a RAID type and level based on the available candidate drives.
If you are adding storage to a pool that already has storage that is assigned to the pool, the
existing storage configuration is considered for the recommendation. The system aims to
achieve a balanced configuration, so some properties are inherited from existing arrays in the
pool for a specific drive class.
It is not possible to add RAID arrays that are different from existing arrays in a pool by using
the GUI. Select Advanced to adjust the number of spares or rebuild areas, the stripe width,
and the array width before the array is created. Depending on the adjustments that are made,
the system might select a different RAID type and level. The summary view in the right pane
can be expanded to preview the details of the arrays that are going to be created.
Note: It is not possible to change the RAID level or stripe width of an existing array. You
also cannot change the drive count of a traditional array. If you must change these
properties, you must delete the array MDisk and re-create it with the required settings.
In Figure 5-51, the dialog box recommends that you create one DRAID 6 with all fifteen 10 K
enterprise drives. The summary view reflects the new usable capacity based on your
selection.
Figure 5-52 Advanced selection for rebuild areas, stripe width, and array width
The stripe width indicates the number of strips of data that can be written at one time when
data is rebuilt after a drive fails. This value is also referred to as the redundancy unit width.
A stripe, which can also be referred to as a redundancy unit, is the smallest amount of data
that can be addressed. The DRAID strip size is 256 KB. By default, the system recommends
DRAID 6 when possible.
In Figure 5-53 on page 277, if the system has multiple drive classes (for example, flash and
enterprise drives), use the plus symbol to create an extra array from other drive classes to
take advantage of Easy Tier. The plus symbol is displayed only if multiple drive classes are
on the system. For more information about Easy Tier, see Chapter 9, “Advanced features for
storage efficiency” on page 509.
Figure 5-53 Creating arrays from different drive classes.
If the pool has an existing DRAID 6 array of 16 drives, you cannot add a two-drive RAID 1
array to the same pool from the same drive class because this configuration creates an
imbalanced storage pool. You can still add any array of any configuration to an existing pool
by using the CLI if the platform supports the RAID level.
When you are satisfied with the configuration, click Add Storage. The RAID arrays are
created, added as array mode MDisks to the pool, and initialized in the background.
If you used self-compressing drives to create the array, the system might prompt you to
modify the overallocation limit of the pool. For more information, see “Easy Tier
Overallocation Limit window” on page 246.
You can monitor the progress of the initialization by selecting the corresponding task under
Running Tasks in the upper right of the GUI, as shown in Figure 5-54. The array is available
for I/O during this process, so you do not need to wait for it to complete.
To get the recommended array configuration by using the CLI, run the lsdriveclass
command to list the available drive classes, and then run the lsarrayrecommendation
command, as shown in Example 5-16. The recommendations are listed in the order of
preference.
To create the recommended DRAID 6 array, specify the RAID level, drive class, number of
drives, stripe width, number of rebuild areas, and the storage pool. The system automatically
chooses drives for the array from the available drives in the class. In Example 5-17, you
create a DRAID 6 array out of 10 drives of class 0 by using a stripe width of 9 and a single
rebuild area, and you add it to Pool2.
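A sketch of such a command follows; the system chooses the member drives automatically
from drive class 0, and the returned MDisk ID depends on your configuration:
IBM_IBM FlashSystem 7200:superuser>mkdistributedarray -level raid6 -driveclass 0 -drivecount 10 -stripewidth 9 -rebuildareas 1 Pool2
MDisk, id [16], successfully created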
There are default values for the stripe width and the number of rebuild areas, which depend
on the RAID level and the drive count. In this example, you had to specify the stripe width
because for DRAID 6 it is 12 by default. The drive count value must equal or be greater than
the sum of the stripe width and the number of rebuild areas.
To create a RAID 10 MDisk instead, you must specify a list of drives that you want to add as
members, the RAID level, and the storage pool name or ID to which you want to add this
array.
Example 5-18 creates a RAID 10 array and adds it to Pool2. It also designates a spare drive.
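A sketch of such commands follows; the drive IDs are hypothetical, with drives 0 - 3
becoming array members and drive 4 being designated as a spare:
IBM_IBM FlashSystem 7200:superuser>mkarray -level raid10 -drive 0:1:2:3 Pool2
MDisk, id [17], successfully created
IBM_IBM FlashSystem 7200:superuser>chdrive -use spare 4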
Note: Do not forget to designate some of the drives as spares when creating traditional
arrays. Spare drives are required to perform a rebuild immediately after a drive failure.
The storage pool must exist. To create a storage pool, see 5.1.1, “Creating storage pools” on
page 240. To check the array initialization progress by using the CLI, run the
lsarrayinitprogress command.
To select an action, select Pools → MDisks by Pools, select the array (MDisk), and click
Actions. Alternatively, right-click the array, as shown in Figure 5-56.
The CLI command for this operation is charray, as shown in Example 5-19. No feedback is
returned.
Swap Drive
To replace a drive in the array with another drive, select Swap Drive. The other drive must
have the Candidate or Spare role. Use this action to perform proactive drive replacement,
that is, to replace a drive that has not failed but is expected to fail soon, for example, as
indicated by an error message in the event log.
Figure 5-57 shows the dialog box that opens. Select the member drive that you want to
replace and the replacement drive, and click Swap.
The exchange of the drives runs in the background. The volumes on the affected MDisk
remain accessible during the process.
Swapping a drive in a traditional array performs a concurrent member exchange, which does
not reduce the redundancy of the array. The data of the old member is copied to the new
member, and after the process is complete, the old member is removed from the array.
In a DRAID, the system immediately removes the old member from the array and performs a
rebuild. After the rebuild completes, a copyback is initiated to copy the data to the new
member drive. This process is non-disruptive, but reduces the redundancy of the array during
the rebuild process.
You can run the charraymember command to do this task. Example 5-20 shows the
replacement of array member ID 7 that was assigned to drive ID 12 with drive ID 17. The
-immediate parameter is required for DRAIDs to acknowledge that a rebuild will start.
Example 5-20 Replacing an array member by using the CLI (some columns are not shown)
IBM_IBM FlashSystem 7200:superuser>lsarraymember 16
mdisk_id mdisk_name member_id drive_id new_drive_id spare_protection
16 Distributed_array 6 18 1
16 Distributed_array 7 12 1
16 Distributed_array 8 15 1
<...>
IBM_IBM FlashSystem 7200:superuser>lsdrive
id status error_sequence_number use tech_type capacity mdisk_id
16 online member tier_enterprise 558.4GB 16
17 online spare tier_enterprise 558.4GB
18 online member tier_enterprise 558.4GB 16
<...>
IBM_IBM FlashSystem 7200:superuser>charraymember -immediate -member 7 -newdrive 17
Distributed_array
IBM_IBM FlashSystem 7200:superuser>
If the number of rebuild areas that are available does not meet the configured goal, an error is
logged in the event log, as shown in Figure 5-58. This error can be fixed by replacing failed
drives in the DRAID array.
Note: This option does not change the actual number of rebuild areas or spares that are
available to the array, but specifies only at which point a warning event is generated.
Setting the goal to 0 does not prevent the array from rebuilding.
On the CLI, this task is performed with the charray command (see Example 5-21).
Example 5-21 Adjusting array goals by running the charray command (some columns are not shown)
IBM_IBM FlashSystem 7200:superuser>lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name distributed
0 mdisk0 online 0 mdiskgrp0 no
16 Distributed_array online 1 mdiskgrp1 yes
IBM_IBM FlashSystem 7200:superuser>charray -sparegoal 2 mdisk0
IBM__IBM FlashSystem 7200:superuser>charray -rebuildareasgoal 2 Distributed_array
Candidate drives of a drive class that is compatible with the drive class of the array must be
available in the system or an error message is shown and the array cannot be expanded. A
drive class is compatible with another one if its characteristics, such as capacity and
performance, are an exact match or are superior. In most cases, drives of the same class
should be used to expand an array.
The dialog box that is shown in Figure 5-59 shows an overview of the size of the array, the
number of available candidate drives in the selected drive class, and the new array capacity
after the expansion. The drive class and the number of drives to add can be modified as
required and the projected new array capacity is updated. To add more rebuild areas to the
array, click Advanced Settings and modify the number of extra spares.
Clicking Expand starts a background process that adds the selected number of drives to the
array. As part of the expansion, the system automatically migrates data for optimal
performance for the new expanded configuration.
You can monitor the progress of the expansion by clicking the Running Tasks icon in the
upper right of the GUI or by selecting Monitoring → Background tasks as shown in
Figure 5-60.
Note: When you expand a thin-provisioned NVMe array, the physical capacity is not
immediately available, and the availability of new physical capacity is not tracked with
logical expansion progress.
On the CLI, this task is performed by running the expandarray command. To get a list of
compatible drive classes, run the lscompatibledriveclasses command, as shown in
Example 5-22.
Note: The expandarray command uses the total drive count after the expansion as a
parameter, including both the number of new drives and the number of drives in the array
before the expansion. The same is true for the number of rebuild areas.
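As a sketch, the following commands list the drive classes that are compatible with a
hypothetical array and then expand that array to a total of 16 member drives by using drive
class 0:
IBM_IBM FlashSystem 7200:superuser>lscompatibledriveclasses Distributed_array
IBM_IBM FlashSystem 7200:superuser>expandarray -driveclass 0 -totaldrivecount 16 Distributed_array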
Delete
Select Delete to remove the array from the storage pool and delete it. An array MDisk does
not exist outside of a storage pool. Therefore, an array cannot be removed from the pool
without being deleted. All drives that belong to the deleted array take on the Candidate role.
If there are no volumes that use extents from this array, the command runs immediately
without extra confirmation. If there are volumes that use extents from this array, you are
prompted to confirm the action, as shown in Figure 5-61.
Confirming the deletion starts a background process that migrates used extents on the MDisk
to other MDisks in the same storage pool. After that process completes, the array is removed
from the storage pool and deleted.
To delete the array with the CLI, run the rmarray command. The -force parameter is required
if volume extents must be migrated to other MDisks in a storage pool.
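For illustration, with a hypothetical array and pool name:
IBM_IBM FlashSystem 7200:superuser>rmarray -mdisk Distributed_array -force Pool2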
To monitor the progress of the migration, use the Running Tasks section in the GUI or the
lsmigrate command on the CLI. The MDisk continues to exist until the migration completes.
Dependent Volumes
A volume depends on an MDisk if the MDisk becoming unavailable results in a loss of access
or a loss of data for that volume. Use this option before you do maintenance operations to
confirm which volumes (if any) will be affected.
If an MDisk in a storage pool goes offline, the entire storage pool goes offline, which means
all volumes in a storage pool depend on each MDisk in the same pool, even if the MDisk does
not have extents for each of the volumes. Clicking the Dependent Volumes Action menu of
an MDisk lists the volumes that depend on that MDisk, as shown in Figure 5-62.
You can get the same information by running the lsdependentvdisks command, as shown in
Example 5-23.
Example 5-23 Listing virtual disks that depend on a MDisk by using the CLI
IBM_IBM FlashSystem 7200:superuser>lsdependentvdisks -mdisk mdisk21
vdisk_id vdisk_name
48 vdisk0
49 vdisk1
50 vdisk2
<...>
Drives
To see information about the member drives that are included in the array, select Drives, as
shown in Figure 5-63.
You can get the same information by running the lsarraymember command. Provide an array
name or ID as the parameter to filter the output from the array. If you run the command
without arguments, the command lists all members of all configured arrays.
Properties
This section shows all the available array MDisk parameters: its state, capacity, RAID level,
and others.
To get a list of all configured arrays, run the lsarray command with the array name or ID as
the parameter to get more information about the array, as shown in Example 5-24.
A key feature of the system is its ability to consolidate disk controllers from various vendors
into storage pools. The storage administrator can manage and provision storage to
applications from a single user interface and use a common set of advanced functions across
all of the storage systems under the control of the system.
This concept is called External Virtualization, which makes your storage environment more
flexible, cost-effective, and easy to manage. External Virtualization is a licensed function.
For more information about how to configure external storage systems, see 2.9, “Back-end
storage configuration” on page 88.
System layers
A system layer affects how the system interacts with other external IBM Storwize or
IBM FlashSystem family systems. A system is in either the storage layer (default) or the
replication layer.
In the storage layer, the system can provide external storage for a replication-layer system,
but it cannot use another Storwize or IBM FlashSystem family system that is configured with
the storage-layer external storage.
In the replication layer, the system cannot provide external storage for a replication-layer
system, but the system can use another Storwize or IBM FlashSystem family system that is
configured with storage-layer external storage.
You get a warning that your system is in the storage layer if you try to add an external iSCSI
storage controller by using the GUI. You are prompted to convert the system to the replication
layer automatically.
Note: Before you change the system layer, the following conditions must be met:
No host object can be configured with worldwide port names (WWPNs) from a Storwize
or IBM FlashSystem family system.
No system partnerships can be defined.
No Storwize or IBM FlashSystem family system can be visible on the storage area
network (SAN) fabric.
To switch the system layer, you can also run the chsystem CLI command, as shown in
Example 5-25 on page 287. If the command runs successfully, it returns no output.
Example 5-25 Changing the system layer
IBM_IBM FlashSystem 7200:superuser>lssystem | grep layer
layer storage
IBM_IBM FlashSystem 7200:superuser>chsystem -layer replication
IBM_IBM FlashSystem 7200:superuser>
For more information about layers and how to change them, go to IBM FlashSystem 9200
documentation and select Product overview → Technical overview → System layers.
If the external controller is not detected, ensure that the system is cabled and zoned into the
same SAN as the external storage system. Check that layers are set correctly on both
virtualizing and virtualized systems if they belong to the IBM Storwize or IBM FlashSystem
family.
After the problem is corrected, rescan the FC network immediately by selecting Pools →
External Storage, and then selecting Actions → Discover Storage, as shown in
Figure 5-64.
This action runs the detectmdisk command. It returns no output. Although it might appear
that the command completed, some extra time might be required for it to run: the command
is asynchronous and returns the prompt while the discovery continues to run in the
background.
To start virtualizing an iSCSI back-end controller, you must follow the steps in
IBM FlashSystem 9200 documentation to perform configuration steps that are specific to your
back-end storage controller. You can find the steps by selecting Configuring →
Configuring and servicing storage systems → External storage system configuration
with iSCSI connections.
For more information about configuring the system to virtualize a back-end storage controller
with iSCSI, see iSCSI Implementation and Best Practices on IBM Storwize Storage Systems,
SG24-8327.
Depending on the type of back-end system, it might be detected as one or more controller
objects.
If the External Storage window does not appear in the Pools windows, the virtualization
licenses are not configured. To use the system’s virtualization functions, you must order the
correct External Virtualization licenses. You can configure the licenses by selecting
Settings → System → Licensed Functions. For assistance with licensing questions or to
purchase any of these licenses, contact your IBM account team or IBM Business Partner.
The External Storage window lists the external controllers that are connected to the system
and all the external MDisks that are detected by the system. The MDisks are organized by the
external storage system that presents them. Toggle the sign to the left of the controller icon to
show or hide the MDisks that are associated with the controller.
If you configured logical unit names on your external storage systems, it is not possible for the
system to determine these names because they are local to the external storage system.
However, you can use the LU unique identifiers (UIDs), or external storage system worldwide
node names (WWNNs) and LU number to identify each device.
To list all visible external storage controllers with CLI, run the lscontroller command, as
shown in Example 5-26.
Example 5-26 Listing controllers by using the CLI (some columns are not shown)
IBM_IBM FlashSystem 7200:superuser>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
0 controller0 2076 IBM 2145
1 controller1 2076 IBM 2145
2 controller2 2076 IBM 2145
<...>
5.3.2 Actions for external storage controllers
You can perform many actions on external storage controllers. Some actions are available for
external iSCSI controllers only.
To select any action, select Pools → External Storage and right-click the controller, as
shown in Figure 5-66. Alternatively, select the controller and click Actions.
Discover Storage
When you create or remove LUs on an external storage system, the change might not be
detected immediately. In this case, click Discover Storage so that the system can rescan the
FC or iSCSI network. In general, the system automatically detects disks when they appear on
the network. However, some FC controllers do not send the required SCSI primitives that are
necessary to automatically discover the new disks.
The rescan process discovers any new MDisks that were added to the system and
rebalances MDisk access across the available ports. It also detects any loss of availability of
the controller ports.
Rename
To modify the name of an external controller to simplify administration tasks, click Rename.
The naming rules are the same as for storage pools, and they can be found in 5.1.1, “Creating
storage pools” on page 240.
To rename a storage controller by using the CLI, run the chcontroller command.
For more information about the CLI commands and detailed instructions, see iSCSI
Implementation and Best Practices on IBM Storwize Storage Systems, SG24-8327.
To change the controller site assignment by using the CLI, run the chcontroller command.
Example 5-27 shows that controller0 was renamed and reassigned to a different site.
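A sketch of such changes follows; the new controller name and the site name are
placeholders:
IBM_IBM FlashSystem 7200:superuser>chcontroller -name FlashArray_1 controller0
IBM_IBM FlashSystem 7200:superuser>chcontroller -site site2 FlashArray_1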
Listing external MDisks
You can manage external MDisks by using the External Storage window, which is accessed
by selecting Pools → External Storage, as shown in Figure 5-68.
To list all MDisks that are visible by the system by using the CLI, run the lsmdisk command
without any parameters. If required, you can filter output to include only external or only array
type MDisks.
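For example, to list only the MDisks that are not yet assigned to any pool, a filter of this form
can be used (the attribute value is one of the standard MDisk modes):
IBM_IBM FlashSystem 7200:superuser>lsmdisk -filtervalue mode=unmanaged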
Assigning MDisks to pools
Figure 5-69 shows how to add a selected MDisk to an existing storage pool. Click Assign
under the Actions menu or right-click the MDisk and select Assign.
After you click Assign, a dialog box opens, as shown in Figure 5-70. Select the target pool,
MDisk storage tier, and external encryption setting.
When you add MDisks to pools, you must assign them to the correct storage tiers. It is
important to set the tiers correctly if you plan to use the Easy Tier feature. Using an incorrect
tier can mean that the Easy Tier algorithm might make wrong decisions and thus affect
system performance. For more information about storage tiers, see Chapter 9, “Advanced
features for storage efficiency” on page 509.
Select the Externally encrypted checkbox if your back-end storage performs data
encryption. For more information about encryption, see Chapter 12, “Encryption” on
page 735.
Note: If the external storage LUs that are presented to the system contain data that must
be retained, do not use the Assign option to add the MDisks to a pool. This option
destroys the data on the LU. Instead, use the Import option to create an image mode
MDisk. For more information, see Chapter 8, “Storage migration” on page 485.
To see the external MDisks that are assigned to a pool within the system, select Pools →
MDisks by Pools.
When a new MDisk is added to a pool that already contains MDisks and volumes, the Easy
Tier feature automatically balances volume extents between the MDisks in the pool as a
background process. The goal of this process is to distribute extents in a way that provides
the best performance to the volumes. It does not attempt to balance the amount of data
evenly between all MDisks.
The data migration decisions that Easy Tier makes between tiers of storage (inter-tier) or
within a single tier (intra-tier) are based on the I/O activity that is measured. Therefore, when
you add an MDisk to a pool, extent migrations are not necessarily performed immediately. No
migration of extents occurs until there is sufficient I/O activity to trigger it.
If Easy Tier is turned off, no extent migration is performed. Only newly allocated extents are
written to a new MDisk.
For more information about the Easy Tier feature, see Chapter 9, “Advanced features for
storage efficiency” on page 509.
To assign an external MDisk to a storage pool by using the CLI, run the addmdisk command.
You must specify the MDisk name or ID, MDisk tier, and target storage pool, as shown in
Example 5-28. The command returns no feedback.
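A sketch with a hypothetical MDisk name, tier, and pool:
IBM_IBM FlashSystem 7200:superuser>addmdisk -mdisk mdisk5 -tier tier_enterprise Pool2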
To choose an action, select Pools → External Storage or Pools → MDisks by Pools, select
the external MDisk, and click Actions, as shown in Figure 5-71 on page 293. Alternatively,
right-click the external MDisk.
Figure 5-71 Actions for MDisks
Discover Storage
This option is available even if no MDisks are selected. By running it, you cause the system to
rescan the iSCSI and FC network for these purposes:
Find any new MDisks that might have been added.
Rebalance MDisk access across all available controller device ports.
Assign
This action is available only for unmanaged MDisks. Select Assign to open the dialog box
that is explained in “Assigning MDisks to pools” on page 291.
Modify Tier
To modify the tier to which the external MDisk is assigned, select Modify Tier, as shown in
Figure 5-72. This setting is adjustable because the system cannot always detect the tiers that
are associated with external storage automatically, unlike with internal arrays.
For more information about storage tiers and their importance, see Chapter 9, “Advanced
features for storage efficiency” on page 509.
Modify Encryption
To modify the encryption setting for the MDisk, select Modify Encryption. This option is
available only when encryption is enabled.
If the external MDisk is already encrypted by the external storage system, change the
encryption state of the MDisk to Externally encrypted. This setting stops the system from
encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
For more information about encryption, encrypted storage pools, and self-encrypting MDisks,
see Chapter 12, “Encryption” on page 735.
To perform this task by using the CLI, run the chmdisk command, as shown in Example 5-30.
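A sketch with a hypothetical MDisk name, marking it as already encrypted by the back-end
system:
IBM_IBM FlashSystem 7200:superuser>chmdisk -encrypt yes mdisk5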
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk enables
you to preserve the existing data on the MDisk. You can migrate the data to a new volume or
keep the data on the external system.
MDisks are imported for storage migration. The system provides a migration wizard to help
with this process, which is described in Chapter 8, “Storage migration” on page 485.
Note: The wizard is the preferred method to migrate data from legacy storage to the
system. When an MDisk is imported, the data on the original LU is not modified. The
system acts as a pass-through, and the extents of the imported MDisk do not contribute to
storage pools.
Figure 5-73 Importing an unmanaged MDisk
The MDisk is imported and listed as an image mode MDisk in the temporary migration
pool, as shown in Figure 5-74.
A corresponding image mode volume is now available in the same migration pool, as
shown in Figure 5-75.
The image mode volume can then be mapped to the original host. The data is still
physically present on the MDisk of the original external storage controller and no
automatic migration process is running. The original host sees no difference and its
applications can continue to run. The image mode volume is now under the control of the
system and it can optionally be migrated to another storage pool or be converted from
image mode to a striped virtualized volume. You can use the Volume Migration wizard or
perform the tasks manually.
The data migration begins automatically after the MDisk is imported successfully as an
image mode volume. You can check the migration progress by clicking the task under
Running Tasks, as shown in Figure 5-77.
After the migration completes, the volume is available in the chosen destination pool. This
volume is no longer an image mode volume. It is now virtualized by the system.
All data is migrated off the source MDisk, and the MDisk has switched its mode, as shown
in Figure 5-78.
The MDisk can be removed from the migration pool. It returns to the list of external MDisks
as Unmanaged. The MDisk can now be used as a regular managed MDisk in a storage pool,
or it can be decommissioned.
Alternatively, importing and migrating external MDisks to another pool can be done by
selecting Pools → System Migration to start the system migration wizard. For more
information, see Chapter 8, “Storage migration” on page 485.
Include
The system can exclude an MDisk from its storage pool if it has multiple I/O failures or has
persistent connection errors. Exclusion ensures that there is no excessive error recovery that
might impact other parts of the system. If an MDisk is automatically excluded, run the directed
maintenance procedure (DMP) to resolve any connection and I/O failure errors.
If no error event is associated with the MDisk in the log and the external problem is corrected,
click Include to add the excluded MDisk back to the storage pool.
The includemdisk command performs the same task. The command needs the MDisk name
or ID to be provided as a parameter, as shown in Example 5-31.
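For illustration, with a hypothetical MDisk name:
IBM_IBM FlashSystem 7200:superuser>includemdisk mdisk5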
Remove
In some cases, you might want to remove external MDisks from their storage pool. To remove
the MDisk from the storage pool, click Remove. After the MDisk is removed, it goes back to
the Unmanaged state. If there are no volumes in the storage pool to which this MDisk is
allocated, the command runs immediately without extra confirmation. If there are volumes in
the pool, you are prompted to confirm the action, as shown in Figure 5-79.
Confirming the action starts a migration of the volume extents on that MDisk to other MDisks
in the pool. During this background process, the MDisk remains a part of the storage pool.
Only when the migration completes is the MDisk removed from the storage pool, at which
point it returns to the Unmanaged mode.
Ensure that you have enough available capacity remaining in the storage pool to allocate the
data being migrated from the removed MDisk, or this command fails.
Important: The MDisk that you are removing must remain accessible to the system while
all data is copied to other MDisks in the same storage pool. If the MDisk is unmapped
before the migration finishes, all volumes in the storage pool go offline and remain in this
state until the removed MDisk is connected again.
To remove an MDisk from a storage pool by using the CLI, run the rmmdisk command. You
must use the -force parameter if you must migrate volume extents to other MDisks in a
storage pool.
The command fails if you do not have enough available capacity remaining in the storage pool
to allocate the data that you are migrating from the removed array.
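A sketch with hypothetical MDisk and pool names; the -force parameter triggers the extent
migration that is described above:
IBM_IBM FlashSystem 7200:superuser>rmmdisk -mdisk mdisk5 -force Pool2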
Dependent Volumes
A volume depends on an MDisk if the MDisk becoming unavailable results in a loss of access
or a loss of data for that volume. Use this option before you do maintenance operations to
confirm which volumes (if any) are affected. Selecting an MDisk and clicking Dependent
Volumes lists the volumes that depend on that MDisk.
To know the usable capacity that is available to the system or to a pool when overprovisioned
storage is used, you must account for the usable capacity of each provisioning group. To
show a summary of overprovisioned external storage, including controllers, MDisks, and
provisioning groups, click View Provisioning Groups, as shown in Figure 5-80.
For more information, see 9.6, “Overprovisioning and data reduction on external storage” on
page 548.
Chapter 6. Volumes
In IBM Spectrum Virtualize, a volume is storage space that is provisioned out of a storage
pool and presented to a host as a Small Computer System Interface (SCSI) logical unit (LU),
that is, a logical disk.
This chapter describes how to create and provision volumes on IBM Spectrum Virtualize
systems. The first part of this chapter provides a brief overview of IBM Spectrum Virtualize
volumes, the classes of volumes that are available, and the available volume customization
options.
The second part of this chapter describes how to create, modify, and map volumes by using
the GUI.
The third part of this chapter provides an introduction to volume manipulation from the
command-line interface (CLI).
Note: Volumes are composed of extents that are allocated from a storage pool. Storage
pools group managed disks (MDisks), which are redundant arrays of independent disks
(RAIDs) that are configured by using internal storage, or LUs that are presented to and
virtualized by an IBM Spectrum Virtualize system. Each MDisk is divided into sequentially
numbered extents (zero-based indexing). The extent size is a property of a storage pool,
and is used for all MDisks that make up the storage pool.
MDisks are internal objects that are used for storage management. They are not directly
visible to or used by host systems.
Every volume is presented to hosts by an I/O group. One of nodes within that group is defined
as a preferred node, that is, a node that by default serves I/O requests to that volume. When
a host requests an I/O operation to a volume, the multipath driver on the host identifies the
preferred node for the volume and by default uses only paths to this node for I/O requests.
VVOLs change the approach to VMware virtual machine (VM) disk configuration from “The
VM disk is a file on a VMware Virtual Machine File System (VMFS) volume” to a one-to-one
mapping between VM disks and storage volumes. VVOLs can be managed by the VMware
infrastructure so that the storage system administrator can delegate VM disk management to
VMware infrastructure specialists, which greatly simplifies storage allocation for virtual
infrastructure and reduces the storage management team’s effort that is required to support
VMware infrastructure administrators.
The downside of using VVOLs is a multiplication of the number of volumes that are presented
by a storage system because typically multiple VM disks are configured on every VMFS
volume. Excessive proliferation of volumes that are presented to Elastic Sky X Integrated
(ESXi) clusters can have a negative impact on performance. Therefore, it is a best practice
to carefully plan a storage system configuration before production deployment and to include
the projected system growth in the assessment.
Note: If there are too many logical unit numbers (LUNs) that are presented to a sufficiently
large ESXi cluster, I/O requests that are simultaneously generated by ESXi hosts might
exceed the storage system command queue. Such overflow leads to I/O request retries,
which reduce storage system performance as perceived by the connected hosts.
To provide storage users with adequate service, all parameters must be correctly set. Importantly,
the various parameters might be interdependent, that is, setting one of them might affect other
properties of the volume.
The volume parameters and their interdependencies are covered in the following sections.
Attention: By default, striped volume copies are striped across all MDisks in the
storage pool. If some of the MDisks are smaller than others, the extents on the smaller
MDisks are used up before the larger MDisks run out of extents. Manually specifying
the stripe set in this case might result in the volume copy not being created.
If you are unsure whether sufficient free space is available to create a striped volume
copy, use one of the following approaches:
Check the free space on each MDisk in the storage pool by running the
lsfreeextents command, and ensure that each MDisk that is included in the
manually specified stripe set has enough free extents.
Allow the system to automatically create the volume copy by not supplying a specific
stripe set.
A sequential volume contains a volume copy with extents that are allocated sequentially
on one MDisk.
An image mode volume is a special type of volume that has a direct one-to-one mapping to
one (image mode) MDisk.
Note: The striping effect occurs when multiple logical volumes that are defined on a set of
physical storage devices (MDisks) store their metadata or file system transaction log on the
same physical device (MDisk).
Because of the way the file systems work, system metadata disk regions are typically busy.
For example, in a journaling file system, a write to a file might require two or more writes to
the file system journal: At minimum, one to make a note of the intended file system update,
and one marking the successful completion of the file write.
If multiple volumes (each with their own file system) are defined on the same set of
MDisks, and all (or most) of them store their metadata on the same MDisk, a
disproportionately large I/O load is generated on this MDisk, which can result in suboptimal
performance of the storage system. Pseudo-randomly allocating the first MDisk for new
volume extent allocation minimizes the probability that multiple file systems that are
created on these volumes place their metadata regions on the same physical MDisk.
Note: Some file systems allow specifying different logical disks for data and metadata
storage. When taking advantage of this file system feature, you may allocate differently
configured volumes that are dedicated to data and metadata storage.
Note: An MDisk extent maps to exactly one volume extent. For volumes with two copies,
one volume extent maps to two MDisk extents (one for each volume copy).
Figure 6-2 on page 303 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of the
MDisks: A, B, or C. The mapping table stores the details of this indirection.
Figure 6-2 Simple view of block virtualization
Several of the MDisk extents are unused, that is, no volume extent maps to them. These
unused extents are available for volume creation, migration, and expansion.
The default and most common type of volumes in IBM Spectrum Virtualize are managed
mode volumes. Managed mode volumes are allocated from a set of MDisks belonging to a
storage pool, and they can be subjected to the full set of virtualization functions. In particular,
they offer full flexibility in mapping between logical volume representation (a continuous set of
logical blocks) and the physical storage that is used to store these blocks. This function
requires that physical storage (MDisks) is fully managed by IBM Spectrum Virtualize, which
means that the LUs that are presented to IBM Spectrum Virtualize by the back-end storage
systems do not contain any data when they are added to the storage pool.
Image mode volumes enable IBM Spectrum Virtualize to work with LUs that were previously
directly mapped to hosts, which are often required when IBM Spectrum Virtualize is
introduced into a storage environment. In such a scenario, image mode volumes are used to
enable seamless migration of data and a smooth transition to virtualized storage.
The image mode creates one-to-one mapping of logical block addresses (LBAs) between a
volume and a single MDisk (a LU that is presented by the virtualized storage). Image mode
volumes have a minimum size of one block (512 bytes) and always occupy at least one
extent. An image mode MDisk cannot be used as a quorum disk and no IBM Spectrum
Virtualize system metadata extents are allocated from it. All the IBM Spectrum Virtualize copy
services functions can be applied to image mode disks.
An image mode volume is mapped to only one image mode MDisk, and it is mapped to the
entirety of this MDisk. Therefore, the image mode volume capacity is equal to the size of the
corresponding image mode MDisk. If the size of the (image mode) MDisk is not a multiple of
the MDisk group’s extent size, the last extent is marked as partial (not filled).
When you create an image mode volume, you map it to an MDisk that must be in unmanaged
mode and must not be a member of a storage pool. As the image mode volume is configured,
the MDisk is made a member of the specified storage pool. It is a best practice to use a
dedicated pool for image mode MDisks with a name indicating its role, such as Storage
Pool_IMG_xxx.
An image mode volume can be migrated to a managed mode volume, which is a standard
procedure that is used to perform non-disruptive migration of the organization's SAN to an
environment managed by or based on IBM Spectrum Virtualize systems. After the data is
migrated off the original image mode volume, the space it used on the source storage system can
be reclaimed. After all data is migrated off the storage system, it can be decommissioned or
used as a back-end storage system that is managed by the IBM Spectrum Virtualize system
(see 2.9, “Back-end storage configuration” on page 88).
IBM Spectrum Virtualize also supports the reverse process in which a managed mode
volume can be migrated to an image mode volume. During the migration, the volume is
identified in the system as being in managed mode. Its mode changes to “image” only after
the process completes.
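As a sketch of the CLI equivalents (the volume, MDisk, and pool names are hypothetical, and the parameters should be verified against your code level), the migratevdisk command moves an image mode volume into managed mode by migrating its extents to a managed storage pool, and the migratetoimage command migrates a managed mode volume onto a single unmanaged MDisk:
IBM_Storwize:ITSO:superuser>migratevdisk -mdiskgrp Pool0 -vdisk Image_Volume_A
IBM_Storwize:ITSO:superuser>migratetoimage -vdisk vdisk3 -mdisk mdisk10 -mdiskgrp Pool_IMG_target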
6.2.3 Volume size
Each volume has two associated values that describe its size: real capacity and virtual
capacity.
The real (physical) capacity is the size of storage space that is allocated to the volume
from the storage pool. It determines how many MDisk extents are allocated to form the
volume. The real capacity is used to store the user data, and in the case of
thin-provisioned volumes, the metadata of the volume.
The virtual capacity is the capacity that is reported to the host and to other
IBM Spectrum Virtualize components and functions (for example, IBM FlashCopy, cache,
and Remote Copy (RC)) that operate based on the volume size.
In a standard-provisioned volume, the real and virtual capacities are the same. In a
thin-provisioned volume, the real capacity can be as little as a few percent of virtual capacity.
The volume size can be specified in units down to 512-byte blocks (see Figure 6-4). The real
capacity can be specified as an absolute value or as a percentage of the virtual capacity.
For example, a basic volume of 512 bytes that is created in a pool with the default extent size
(1024 mebibytes (MiB)) uses 1024 MiB of the pool space because a whole extent must be
allocated to provide the space for the volume.
In practice, this rounding up of volume size to the whole number of extents has little impact on
storage use efficiency unless the storage system serves many small volumes. For more
information about storage pools and extents, see Chapter 5, “Storage pools” on page 237.
6.2.4 Performance
The basic metrics of volume performance are the number of IOPS the volume can provide,
the time to service an I/O request (average, median, and first percentile), and the bandwidth
of the data that is served to a host.
Volume performance is defined by the pool or pools that are used to create the volume. The
pool determines the media bus (Non-Volatile Memory Express (NVMe) or serial-attached
SCSI (SAS)); media type (IBM FlashCore Module (FCM) drives, solid-state drives (SSDs), or
hard disk drives (HDDs)); redundant array of independent disks (RAID) level and number of
drives per RAID array; and the possibility for the Easy Tier function to optimize the
performance of a volume. However, volumes that are configured in the same storage pool or
pools might still have different performance characteristics, depending on the storage
resiliency, efficiency, security, and allocation policy configuration settings of a volume.
Volume copies are identified in the GUI by a copy ID, which can have value 0 or 1. Copies of
the volume can be split, which provides a point-in-time (PiT) copy of a volume. An overview of
volume mirroring is shown in Figure 6-5 on page 307.
Figure 6-5 Volume mirroring overview
A copy can be added to a volume with a single copy or removed from a volume with two
copies. Internal safety mechanisms prevent accidental removal of the only remaining copy of
a volume.
A newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”, and the volume is immediately available for use.
The synchronization process updates the secondary copy until it is fully synchronized, that is,
data that is stored on the secondary copy matches the data that is on the primary copy. This
update is done at the synchronization rate that is defined when the volume is created, but can
be modified after volume creation. The synchronization status for mirrored volumes is
recorded on the storage system quorum disk.
If a mirrored volume is created by using the format parameter, both copies are formatted in
parallel. The volume comes online when both operations are complete with the copies in
sync.
If it is known that MDisk space (which is used for creating volume copies) is formatted or if the
user does not require read stability, a no synchronization option can be used that declares
the copies as synchronized even when they are not.
Creating a volume with more than one copy is beneficial in multiple scenarios. For example:
Improving volume resilience by protecting it from a single back-end storage system failure
(requires each volume copy to be configured on a different back-end storage system).
Providing concurrent maintenance of a storage system that does not natively support
concurrent maintenance (for volumes on external virtualized storage).
Providing an alternative method of data migration with improved availability
characteristics. While a volume is being migrated by using the data migration feature, it is
vulnerable to failures on both the source and target storage pool. Volume mirroring
provides an alternative migration method that is not affected by the destination volume
pool availability.
Note: When migrating volumes to a Data Reduction Pool (DRP), volume mirroring is
the only migration method because DRPs do not support migrate commands.
Typically, each volume copy is allocated from a different storage pool. Although not required,
using different pools that are backed by different back-end storage for each volume copy is
the typical configuration because it markedly increases volume resiliency.
If one of the mirrored volume copies becomes temporarily unavailable (for example, because
the storage system that provides its pool is unavailable), the volume remains accessible to
hosts. The storage system remembers which areas of the volume were modified after the loss
of access to a volume copy and resynchronizes only these areas when both copies are
available.
Note: Volume mirroring is not a disaster recovery (DR) solution because both copies are
accessed by the same node pair and addressable by only a single cluster. However, if
correctly planned, it can improve availability.
The storage system tracks the synchronization status of volume copies by dividing the volume
into 256 kibibyte (KiB) grains and maintaining a bitmap of stale grains (on the quorum disk),
mapping 1 bit to one grain of the volume space. If the mirrored volume needs
resynchronization, the system copies to the out-of-sync volume copy only these grains that
were written to (changed) since the synchronization was lost. This approach is known as an
incremental synchronization, and it minimizes the time that is required to synchronize the
volume copies.
Important: Mirrored volumes can be taken offline if no quorum disk is available. This
behavior occurs because the synchronization status of mirrored volumes is recorded on
the quorum disk.
A volume with more than one copy can be checked to see whether all of the copies are
identical or consistent. If a medium error is encountered while it is reading from one copy, a
check is repaired by using data from the other copy. This consistency check is performed
asynchronously with host I/O.
Because mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, 1 MiB of
bitmap space supports up to 2 TiB of mirrored volumes. The default size of the bitmap space
is 20 MiB, which allows a configuration of up to 40 TiB of mirrored volumes. If all 512 MiB of
variable bitmap space is allocated to mirrored volumes, 1 PiB of mirrored volumes can be
supported.
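These figures follow directly from the rate of 1 bit per 256 KiB grain: 1 MiB of bitmap space is 8,388,608 bits, and 8,388,608 x 256 KiB = 2 TiB of mirrored volume capacity. The 20 MiB default therefore covers 20 x 2 TiB = 40 TiB, and the 512 MiB maximum covers 512 x 2 TiB = 1024 TiB = 1 PiB.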
Table 6-1 on page 309 lists the bitmap space configuration options.
Table 6-1 Bitmap space default configuration
Copy service | Minimum allocated bitmap space | Default allocated bitmap space | Maximum allocated bitmap space | Minimum capacity when using the default values (a)
The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
For non-mirrored volumes, only one volume copy exists, so no choice exists for the read
source, and all reads are directed to the single volume copy.
Figure 6-6 Data flow for write I/O processing in a mirrored volume
As shown in Figure 6-6, the writes are sent by the host to the preferred node for the volume
(1). Then, the data is mirrored to the cache of the partner node in the I/O group (2), and
acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to all volume copies (4). The example that is shown in Figure 6-7 on
page 311 shows a case with destaging to a mirrored volume, that is, one with two physical
data copies.
With Version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is written once, and then it is directly destaged from
the controller to the disk system.
Figure 6-7 on page 311 shows the data flow in a stretched environment.
Figure 6-7 Data flow for write I/O processing in a stretched environment (preferred and non-preferred nodes at Site 1 and Site 2, upper cache layer)
Note: Storage efficiency options might require more licenses and hardware components
depending on the model and configuration of your storage system.
Implementation of DRPs requires careful planning and sizing. Before configuring the first
space-efficient volume on a storage system, see the relevant sections in Chapter 2,
“Planning” on page 71 and Chapter 9, “Advanced features for storage efficiency” on
page 509.
DRPs use multithreading and hardware acceleration (where available) to provide storage
efficiency functions on IBM Spectrum Virtualize storage systems. When you consider using
storage efficiency options, remember that they increase the number of I/O operations that the
storage system must perform compared to accessing a basic volume. Space-efficient volumes
require the storage system to write both the data that is sent by the host and the metadata
that is required to maintain a space-efficient volume.
Note: FCM drives include compression hardware, so they provide data set size reduction
with no performance penalty.
For more information about the storage efficiency functions of IBM Spectrum Virtualize, see
Chapter 5, “Storage pools” on page 237 and Introduction and Implementation of Data
Reduction Pools and Deduplication, SG24-8430.
A thin-provisioned volume has a virtual capacity that is larger than its physical capacity. Thin
provisioning is the base technology for all space-efficient volumes. When a thin-provisioned volume is
created, a small amount of the real capacity is used for initial metadata. This metadata holds
a mapping of a set of contiguous LBAs in the volume to a grain on a physically allocated
extent.
Note: If you use thin-provisioned volumes, it is recommended that you closely monitor the
available space in the pool that contains these volumes. If a thin-provisioned volume does
not have enough real capacity for a write operation, the volume is taken offline and an error
is logged, and the ability to recover by using UNMAP is limited. Also, consider creating a fully
allocated sacrificial emergency space volume.
The grain size is defined when the volume is created and cannot be changed afterward. The
grain size can be 32 KiB, 64 KiB, 128 KiB, or 256 KiB. The default grain size is 256 KiB, which
is the preferred option. However, the following factors must be considered when deciding on
the grain size:
A smaller grain size helps to save space. If a 16 KiB write I/O requires a new physical grain
to be allocated, the used space is 50% of a 32 KiB grain, but just over 6% of a 256 KiB grain.
If no subsequent writes to other blocks of the grain occur, the volume provisioning is less
efficient for volumes with a larger grain size.
A smaller grain size requires more metadata I/O to be performed, which increases the
load on the physical back-end storage systems.
When a thin-provisioned volume is a FlashCopy source or target volume, specify the same
grain size for FlashCopy and the thin-provisioned volume configuration. Use 256 KiB grain
to maximize performance.
The grain size affects the maximum size of the thin-provisioned volume. For 32 KiB size,
the volume size cannot exceed 260 TiB.
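For example, a thin-provisioned volume with an explicit grain size can be created from the CLI with a command of the following form (a sketch only; the pool and volume names are hypothetical, and the mkvdisk parameters are described in more detail in 6.6):
IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thin_volume01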
Figure 6-8 Conceptual diagram of a thin-provisioned volume
Thin-provisioned volumes use metadata to enable capacity savings, and each grain of user
data requires metadata to be stored. Therefore, the I/O rates that are obtained from
thin-provisioned volumes are lower than the I/O rates that are obtained from
standard-provisioned volumes.
When a write request comes from a host, the block address for which the write is requested is
checked against the mapping table. If the write is directed to a block that maps to a grain with
physical storage that is allocated by a previous write, then physical storage was allocated for
this LBA and can be used to service the request. Otherwise, a new physical grain is allocated
to store the data, and the mapping table is updated to record that allocation.
The metadata storage that is used is never greater than 0.1% of the user data. The resource
usage is independent of the virtual capacity of the volume.
The real capacity of a thin-provisioned volume can be changed if the volume is not in image
mode. Thin-provisioned volumes use the grains of real capacity that is provided in ascending
order as new data is written to the volume. If the user initially assigns too much real capacity
to the volume, the real capacity can be reduced to free storage for other uses.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
Thin-provisioned volumes can be used as volumes that are assigned to the host by
FlashCopy to implement thin-provisioned FlashCopy targets. When creating a mirrored
volume, a thin-provisioned volume can be created as a second volume copy, whether the
primary copy is a standard or thin-provisioned volume.
Deduplicated volumes
Deduplication is a specialized data set reduction technique. However, in contrast to the
standard file-compression tools that work on single files or sets of files, deduplication is a
technique that is applied on a block level to larger scale data sets, such as a file system or
volume. In IBM Spectrum Virtualize, deduplication can be enabled for thin-provisioned and
compressed volumes that are created in DRPs.
Deduplication works by identifying repeating chunks in the data that is written to the storage
system. Pattern matching looks for known data patterns (for example, “all ones”), and the
data signature-based algorithm calculates a signature for each data chunk (by using a hash
function) and checks whether the calculated signature is present in the deduplication
database.
If a known pattern or a signature match is found, the data chunk is replaced by a reference to
a stored chunk, which reduces storage space that is required for storing the data. Conversely,
if no match is found, the data chunk is stored without modification, and its signature is added
to the deduplication database.
To maximize the space that is available for the deduplication database, the system distributes
it between all nodes in the I/O groups that contain deduplicated volumes. Each node holds
a distinct portion of the records that are stored in the database. If nodes are removed or
added to the system, the database is redistributed between the nodes to ensure optimal use
of available resources.
Depending on the data type that is stored on the volume, the capacity savings can be
significant. Examples of use cases that typically benefit from deduplication are virtual
environments with multiple VMs running the same operating system (OS), and backup
servers. In both cases, it is expected that multiple copies of identical files exist, such as
components of the standard OS or applications that are used in the organization.
Note: If data is encrypted by the host, you should expect no benefit from deduplication
because the same cleartext (for example, a standard OS library file) encrypted with
different keys results in different output, making deduplication impossible.
When planning the use of deduplicated volumes, be aware of update and performance
considerations and the following software and hardware requirements:
Code level V8.1.2 or higher is needed for DRPs.
Code level V8.1.3 or higher is needed for deduplication.
Tip: Code level 8.3.1 is needed for the best performance in DRP pools.
Nodes must have at least 32 GB of memory to support deduplication. Nodes that have more than
64 GB of memory can use a bigger deduplication fingerprint database, which might lead to better
deduplication results.
You must run supported hardware. For more information about the valid hardware and
features combinations, go to IBM FlashSystem 9200 documentation, select your system,
and read the “Planning for deduplicated volumes” section by expanding Planning →
Storage configuration planning.
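As a sketch (the pool and volume names are hypothetical, and the available flags depend on the code level), a deduplicated thin-provisioned volume in a DRP can be created with the mkvolume command:
IBM_Storwize:ITSO:superuser>mkvolume -pool DRP_Pool1 -size 200 -unit gb -thin -deduplicated -name dedup_volume01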
Compressed volumes
A volume that is created in a DRP can be compressed. Data that is written to the volume is
compressed before committing it to back-end storage, which reduces the physical capacity
that is required to store the data. Because enabling compression does not incur an extra
metadata handling penalty, in most cases it is a best practice to enable compression on
thin-provisioned volumes.
Notes:
When a volume is backed by FCM drives that compress data at line speed, the volume
should be configured with compression that is turned on. IBM Spectrum Virtualize is
tightly integrated with the storage controller and uses knowledge of both the logical and
physical space.
You can use the management GUI or the CLI to run the built-in compression estimation
tool. This tool can be used to determine the capacity savings that are possible for
existing data on the system by using compression.
Another benefit of data compression for volumes that are backed by flash-based
storage is the reduction of write amplification, which has a beneficial effect on media
longevity.
However, this approach affects the management of the real capacity of volumes with enabled
capacity savings. File system deletion frees space at the file system level, but physical data
blocks that are allocated by the storage for the file still take up the real capacity of a volume.
To address this issue, file systems added support for the SCSI UNMAP command, which can be
run after file deletion. It informs the storage system that physical blocks that are used by the
removed file should be marked as no longer in use so that they can be freed. Modern OSs run
SCSI UNMAP commands only to storage that advertises support for this feature.
Version 8.1.0 and later releases support the SCSI UNMAP command on IBM Spectrum
Virtualize systems, which enables hosts to notify the storage controller of capacity that is no
longer required and may be reused or deallocated, which might improve capacity savings.
Note: For volumes that are outside DRPs, the complete stack from the OS down to
back-end storage controller must support UNMAP to enable the capacity reclamation.
SCSI UNMAP is passed only to specific back-end storage controllers.
Before enabling SCSI UNMAP, see SCSI Unmap support in IBM Spectrum Virtualize
systems.
Analyze your storage stack to optimally balance the advantages and costs of data
reclamation.
6.2.8 Encryption
IBM Spectrum Virtualize systems can be configured to enable data-at-rest encryption. This
function is realized in hardware (self-encrypting drives or in SAS controller for drives that do
not support self-encryption and are connected through the SAS bus) or in software (external
virtualized storage).
For more information about creating and managing encrypted volumes, see Chapter 12,
“Encryption” on page 735.
The cache setting of a volume can have the following values:
readwrite All read and write I/O operations that are performed by the volume are
stored in cache. This mode is the default cache mode for all volumes.
readonly Read only I/O operations that are performed on the volume are stored
in cache. Writes to the volume are not cached.
disabled No I/O operations on the volume are stored in cache. I/Os are passed
directly to the back-end storage controller rather than being held in the
node’s cache.
Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as IBM Spectrum Virtualize
image mode volumes. However, using IBM Spectrum Virtualize Copy Services rather than the
underlying disk controller copy services provides better results.
Note: Disabling the volume cache is a prerequisite for using native copy services on image
mode volumes that are defined on storage systems that are virtualized by IBM Spectrum
Virtualize. Contact IBM Support before turning off the cache for volumes in your production
environment to avoid performance degradation.
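If the cache mode must be changed from the CLI, the chvdisk command can be used. In the CLI, the disabled mode is specified as none, for example (the volume name is hypothetical):
IBM_Storwize:ITSO:superuser>chvdisk -cache none image_volume01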
The limit can be set in terms of number of IOPS or bandwidth (megabytes per second
(MBps), gigabytes per second (GBps), or terabytes per second (TBps)). By default, I/O
throttling is disabled, but each volume can have up to two throttles that are defined: one for
bandwidth and one for IOPS.
When deciding between using IOPS or bandwidth as the I/O governing throttle, consider the
disk access profile of the application that is the primary volume user. Database applications
generally issue large amounts of I/O operations, but transfer a relatively small amount of data.
In this case, setting an I/O governing throttle that is based on bandwidth might not achieve
much. A throttle that is based on IOPS is better suited for this use case.
Conversely, a video streaming or editing application issues a small amount of I/O but transfers
large amounts of data. Therefore, it is better to use a bandwidth throttle for the volume in this
case.
An I/O governing rate of 0 does not mean that zero IOPS or bandwidth can be achieved for
this volume; rather, it means that no throttle is set for this volume.
For more information about how to configure I/O throttle on a volume, see 6.5.4, “I/O
throttling” on page 339.
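On the CLI, volume throttles are managed with the mkthrottle, lsthrottle, chthrottle, and rmthrottle commands. The following lines are a sketch only (the volume name, limit, and throttle ID are hypothetical, and the parameters should be verified against your code level):
IBM_Storwize:ITSO:superuser>mkthrottle -type vdisk -iops 10000 -vdisk database_volume01
IBM_Storwize:ITSO:superuser>lsthrottle
IBM_Storwize:ITSO:superuser>rmthrottle 0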
There are two levels at which the volume protection must be enabled to be effective: system
level and pool level. Both levels must be enabled for protection to be active on a pool. The
pool-level protection depends on the system-level setting to ensure that protection is applied
consistently for volumes within that pool. If system-level protection is enabled, but pool-level
protection is not enabled, any volumes in the pool can be deleted.
When you enable volume protection at the system level, you specify a period in minutes that
the volume must be idle before it can be deleted. If volume protection is enabled and the
period is not expired, the volume deletion fails even if the -force parameter is used. The
following CLI commands and the corresponding GUI activities are affected by the volume
protection setting:
rmvdisk
rmvdiskcopy
rmvolume
rmvdiskhostmap
rmvolumehostclustermap
rmmdiskgrp
rmhostiogrp
rmhost
rmhostcluster
rmhostport
mkrcrelationship
Volume protection can be set from the GUI (new in V.8.3.1, see 6.5.5, “Volume protection” on
page 344) and CLI (see 6.6.9, “Volume protection” on page 390).
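On the CLI, the system-level setting corresponds to chsystem parameters of the following form (a sketch; verify the exact parameter names for your code level):
IBM_Storwize:ITSO:superuser>chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15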
Secure data deletion effectively erases or overwrites all traces of existing data from a data
storage device. The original data on that device becomes inaccessible and cannot be
reconstructed. You can securely delete data on individual drives and on a boot drive of a
control enclosure. The methods and commands that are used to securely delete data enable
the system to be used in compliance with European Regulation EU2019/424.
For more information about configuring VVOLs with IBM Spectrum Virtualize, see Configuring
VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize, SG24-8328.
Depending on the type and scale of the failure that the solution must survive, the sites can be
two places in the same data center room (one end of the spectrum of possible configurations) or buildings
in different cities on different tectonic plates and powered from independent grids (the other
end of that spectrum).
Note: Multi-site topologies of IBM Spectrum Virtualize use two sites as storage component
locations (nodes and back-end storage). The third site is used as a location for a
tie-breaker component that prevents split-brain scenarios if the storage system
components lose communication with each other.
The Create Volumes menu provides the following options, depending on the configured
system topology:
With standard topology, the available options are Basic, Mirrored, and Custom.
With HyperSwap topology, the options are Basic, HyperSwap, and Custom.
The HyperSwap function provides HA volumes that are accessible through two sites up to
300 km (186.4 miles) apart. A fully independent copy of the data is maintained at each site.
Note: The determining factor for HyperSwap configuration validity is the time that it takes
to send the data between the sites. Therefore, while estimating the distance, consider the
fact that the distance between the sites that is measured along the data path is longer than
the geographic distance. Additionally, each device on the data path that adds latency
increases the effective distance between the sites.
When data is written by hosts at either site, both copies are synchronously updated before the
write operation completion is reported to the host. The HyperSwap function automatically
optimizes itself to minimize data that is transmitted between sites and to minimize host read
and write latency.
The HyperSwap volume configuration is possible only after the IBM Spectrum Virtualize
system is configured in the HyperSwap topology. After this topology change, the GUI
presents an option to create HyperSwap volumes and creates them by running the mkvolume
command instead of the mkvdisk command. The GUI continues to use the mkvdisk command
when all other classes of volumes are created.
For more information, see IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and
VMware Implementation, SG24-8317.
The GUI simplifies the HyperSwap volume creation process by asking about required volume
parameters only and automatically configuring all the underlying volumes, FlashCopy maps,
and volume replications relationships.
6.5 Operations on volumes
This section describes how to perform operations on volumes by using the GUI. The following
operations can be performed on a volume:
Volumes can be created and deleted.
Volumes can have their characteristics modified, including:
– Size (expanding or shrinking)
– Number of copies (adding or removing a copy)
– I/O throttling
– Protection
Volumes can be migrated at run time to another MDisk or storage pool.
A PiT volume snapshot can be created by using FlashCopy. Multiple snapshots and quick
restore from snapshots (reverse FlashCopy) are supported.
Volumes can be mapped to (and unmapped from) hosts.
Note: With Version 7.4 and later, it is possible to prevent accidental deletion of volumes if
they recently performed any I/O operations. This feature is called volume protection, and it
prevents active volumes or host mappings from being deleted inadvertently. This process
is done by using a global system setting. For more information, see 6.6.9, “Volume
protection” on page 390 and the “Changing volume protection settings” topic in
IBM Documentation.
A list of volumes, their state, capacity, and associated storage pools is displayed.
2. To create a volume, click Create Volumes, as shown in Figure 6-11.
The Create Volumes tab opens the Create Volumes window, which shows the available
creation methods.
Note: The volume classes that are displayed in the Create Volumes window depend on the
topology of the system.
The Create Volumes window for standard topology is shown in Figure 6-12.
To create a basic volume, click Basic, as shown in Figure 6-13 on page 325. This action
opens the Basic volume menu, where you can define the following parameters:
Pool: The pool in which the volume is created (drop-down menu).
Quantity: Number of volumes to be created (numeric up or down).
Capacity: Size of the volume in specified units (drop-down menu).
Capacity Savings (drop-down menu):
– None
– Thin-provisioned
– Compressed
Name: Name of the volume (cannot start with a number).
I/O group.
The Basic Volume creation window is shown in Figure 6-13 on page 325.
Figure 6-13 Create Volumes window
Define and consistently use a suitable volume naming convention to facilitate easy
identification. For example, a volume name can contain the name of the pool or some tag that
identifies the underlying storage subsystem, the host or cluster name that the volume is
mapped to, and the content of this volume, such as the name of the applications that use the
volume.
When all of the characteristics of the basic volume are defined, it can be created by selecting
one of the following options:
Create
Create and Map
In the example, the Create option was selected. The volume-to-host mapping can be
performed later, as described in 6.5.8, “Mapping a volume to a host” on page 358.
When the operation completes, the volume is seen in the Volumes window in the state
“Online (formatting)”, as shown in Figure 6-14.
By default, the GUI does not show any details about the commands it runs to complete a task.
However, while a command runs you can click View more details to see the underlying CLI
commands that are run to create the volume and a report of completion of the operation, as
shown in Figure 6-15.
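For a basic volume, the underlying command is of the following general form (a sketch with hypothetical pool and volume names; the details view shows the exact command that the GUI ran):
IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 10 -unit gb -name itso-basic-volume01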
Note: Consider the following points:
Standard-provisioned volumes are automatically formatted through the quick
initialization process after the volume is created. This process makes
standard-provisioned volumes available for use immediately.
Quick initialization requires a small amount of I/O to complete, and limits the number of
volumes that can be initialized at the same time. Some volume actions, such as moving,
expanding, shrinking, or adding a volume copy, are disabled when the specified volume
is initializing. Those actions become available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not
necessary. For example, if the volume is the target of a Copy Services function, the
Copy Services operation formats the volume. The quick initialization process can also
be disabled for performance testing so that the measurements of the raw system
capabilities can take place without waiting for the process to complete.
For more information, see IBM FlashSystem 9200 documentation and expand Product
overview → Technical overview → Volumes → Standard-provisioned volumes.
A mirrored volume is displayed in the GUI as configured in the pool in which it has its primary.
In this example, volume itso-mirrored00-Pool0-Pool1 is displayed as configured in Pool0
because it has its primary copy in Pool0.
Note: When creating a mirrored volume by using this menu, you are not required to specify
the Mirrored Sync rate (it defaults to 2 MBps). The synchronization rate can be customized
by using the Custom menu.
Note: Consider the compression guidelines in Chapter 9, “Advanced features for storage
efficiency” on page 509 before creating the first compressed volume copy on a system.
Use these windows to customize your Custom volume as wanted, and then commit these
changes by clicking Create.
You can mix and match settings on different windows to achieve the final volume configuration
that meets your requirements.
Figure 6-20 Volume Location window
If you click Define another volume, the GUI displays a subpane in which you can define the
configuration of another volume, as shown in Figure 6-22.
This way, you can create volumes with different characteristics in a single invocation of the
volume creation wizard.
Warning threshold: Whether a warning message is sent and at what percentage of filled
virtual capacity. Defaults to Enabled, with a warning threshold set at 80%.
Thin-Provisioned Grain Size: You can define the grain size for the thin-provisioned volume.
Defaults to 256 KiB.
Important: If you do not use the autoexpand feature, the volume goes offline if it
receives a write request after all real capacity is allocated.
The default grain size is 256 KiB. The optimum choice of grain size depends on the
volume use type. Consider the following points:
If you are not going to use the thin-provisioned volume as a FlashCopy source or
target volume, use 256 KiB to maximize performance.
If you are going to use the thin-provisioned volume as a FlashCopy source or target
volume, specify the same grain size for the volume and for the FlashCopy function.
If you plan to use Easy Tier with thin-provisioned volumes, see the IBM Support
article Performance Problem When Using Easy Tier With Thin Provisioned
Volumes.
Compressed window
If you choose to create a compressed volume, a Compressed window is displayed, as shown
in Figure 6-24.
Note: Consider the compression guidelines in Chapter 9, “Advanced features for storage
efficiency” on page 509 before creating the first compressed volume copy on a system.
A list of volumes, their state, capacity, and associated storage pools, is displayed.
2. Click Create Volumes, as shown in Figure 6-27 on page 335.
Figure 6-27 Create Volumes button
The Create Volumes tab opens the Create Volumes window, which displays available creation
methods.
Note: The volume classes that are displayed in the Create Volumes window depend on the
topology of the system.
The Create Volumes window for the HyperSwap topology is shown in Figure 6-28.
The notable difference between HyperSwap volume and basic volume creation is that
HyperSwap volume creation includes specifying storage pool names at each site. The system
uses its topology awareness to map storage pools to sites, which ensures that the data is
correctly mirrored across locations.
As shown in Figure 6-29, a single volume is created with volume copies in sites site1 and
site2. This volume is in an active-active (MM) relationship with extra resilience that is provided
by two change volumes.
The Pool column shows the value “Multiple”, which indicates that a volume is a HyperSwap
volume. A volume copy at each site is visible, and the change volumes that are used by the
technology are not displayed in this GUI view.
Note: For volumes in multi-site topologies, the asterisk (*) does not indicate the primary
copy, but the local volume copy that is used for data reads.
A single mkvolume command can create a HyperSwap volume. Up to IBM Spectrum Virtualize
V7.5, this process required careful planning and running the following sequence of
commands:
1. mkvdisk master_vdisk
2. mkvdisk aux_vdisk
3. mkvdisk master_change_volume
4. mkvdisk aux_change_volume
5. mkrcrelationship –activeactive
6. chrcrelationship -masterchange
7. chrcrelationship -auxchange
8. addvdiskaccess
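With mkvolume, the same result is achieved in a single invocation of the following form (a sketch; the pool and volume names are hypothetical):
IBM_Storwize:ITSO:superuser>mkvolume -pool site1pool:site2pool -size 100 -unit gb -name hyperswap_volume01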
Note: IBM Spectrum Virtualize Version 8.4 extends HyperSwap support to hosts that are
attached through NVMe over Fabrics (NVMe-oF) through FC. The standard protocol
mechanism Asymmetric Namespace Access (ANA), which is analogous to SCSI
Asymmetric Logical Unit Access (ALUA), is used to provide this function to hosts that are
attached through NVMe-oF.
6.5.4 I/O throttling
This section describes how to use I/O throttling on a volume.
3. After the Edit Throttle task completes successfully, the Edit Throttle window opens again.
You can now set the throttle based on the different metrics, modify the throttle, or close the
window without performing further actions by clicking Close.
Listing volume throttles
To view volume throttles, select Volumes → Volumes, and then select Actions → View All
Throttles, as shown in Figure 6-33.
You can view other throttles by selecting a different throttle type in the drop-down menu, as
shown in Figure 6-35.
Modifying or removing a volume throttle
To remove a volume throttle, complete the following steps:
1. From the Volumes menu, select the volume to which the throttle that you want to remove
is attached. Select Actions → Edit Throttle, as shown in Figure 6-36.
3. To remove the throttle completely, click Remove for the throttle that you want to remove,
as shown in Figure 6-38.
After the Edit Throttle task completes successfully, the Edit Throttle window opens again. You
can now set the throttle based on the different metrics, modify the throttle, or close the
window without performing any action by clicking Close.
Figure 6-39 Volume Protection configuration
In this view, you can configure system-wide volume protection (enabled by default), set the
minimum inactivity period that is required to allow volume deletion (protection duration), and
configure volume protection for each configured pool (enabled by default). In the example,
volume protection is enabled with the 15-minute minimum inactivity period and is turned on
for all configured pools.
Shrinking
To shrink a volume, complete the following steps:
1. Ensure that you have a current and verified backup of any in-use data that is stored on the
volume that you intend to shrink.
3. Specify either Shrink by or Final size (the other choice is calculated automatically), as
shown in Figure 6-41 on page 347.
Figure 6-41 Specifying the size of the shrunk volume
Note: The storage system reduces the volume capacity by removing one or more
arbitrarily selected extents. Do not shrink a volume that contains data that is being used
unless you have a current and verified backup of the data.
Note: Version 8.4 introduces the ability to shrink a volume while it is formatting.
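The CLI equivalent is the shrinkvdisksize command, for example (the volume name and size are hypothetical):
IBM_Storwize:ITSO:superuser>shrinkvdisksize -size 1 -unit gb volume01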
Expanding
To expand a volume, complete the following steps:
1. From the Volumes menu, select the volume that you want to expand. Select Actions →
Expand…, as shown in Figure 6-44.
3. After the operation completes (including the formatting of the extra space), you can see
the volume with the new size by selecting Volumes → Volumes, as shown in Figure 6-46.
Note: Expanding a volume is not sufficient to increase the available space that is visible
to the host. The host must become aware of the changed volume size at the OS level,
for example, through a bus rescan. More operations at the logical volume manager
(LVM) or file system levels might be needed before more space is visible to applications
running on the host.
Note: Version 8.4 introduces the ability to expand a volume while it is formatting.
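The CLI equivalent is the expandvdisksize command, for example (the volume name and size are hypothetical):
IBM_Storwize:ITSO:superuser>expandvdisksize -size 10 -unit gb volume01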
Modifying capacity savings
This action is available only for space-efficient volumes. To modify capacity savings options
for a volume, complete the following steps:
1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Modify Capacity Savings…, as shown in Figure 6-47.
3. For volumes that are configured in a DRP, it is possible to enable deduplication, as shown
in Figure 6-49.
After you configure the capacity savings options of a volume, click Modify to apply them.
When the operation completes, you are returned to the Volumes view.
Modifying the mirror sync rate
This action is available only for mirrored volumes. To modify the mirror sync rate of a volume,
complete the following steps:
1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Modify Mirror Sync Rate…, as shown in Figure 6-50.
When the operation completes, you are returned to the Volumes view.
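On the CLI, the synchronization rate is set with the -syncrate parameter of the chvdisk command, which accepts a value of 0 - 100, for example (the volume name is hypothetical):
IBM_Storwize:ITSO:superuser>chvdisk -syncrate 80 mirrored_volume01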
2. Select the cache mode that you want for the volume from the drop-down list and click OK,
as shown in Figure 6-53.
When the operation completes, you are returned to the Volumes view.
A UDID is a nonnegative integer that is used in the creation of the OpenVMS device name.
All fibre-attached volumes have an allocation class of $1$, followed by the letters DGA, and
then followed by the UDID. All storage unit LUNs that you assign to an OpenVMS system
need a UDID so that the OS can detect and name the device. LUN 0 must also have a UDID,
but the system displays LUN 0 as $1$GGA<UDID>, not as $1$DGA<UDID>. For more
information about fibre-attached storage devices, see Guidelines for OpenVMS Cluster
Configurations.
2. Specify the UDID for the volume and click Modify, as shown in Figure 6-55 on page 357.
Figure 6-55 Setting the volume UDID
When the operation completes, you are returned to the Volumes view.
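On the CLI, the UDID can be set with the chvdisk command, for example (the UDID value and volume name are hypothetical):
IBM_Storwize:ITSO:superuser>chvdisk -udid 1234 openvms_volume01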
When the operation completes, you are returned to the Volumes view.
Figure 6-58 Volume mapping menu item
Tip: An alternative way of opening the Actions menu is to highlight (select) a volume
and right-click.
2. The Create Mapping window opens. In this window, select whether to create a mapping to
a host or host cluster. The list of objects of the appropriate type is displayed. Select to
which hosts or host clusters the volume should be mapped.
You can either allow the storage system to assign the SCSI LUN ID to the volume by
selecting the System Assign option, or select Self Assign and provide the LUN ID
yourself. Click Next to proceed to the next step.
3. A summary window opens and shows all the volume mappings for the selected host. The
new mapping is highlighted, as shown in Figure 6-60. Review the future configuration
state and click Map Volumes to map the volume.
4. After the task completes, the wizard returns to the Volumes window. You can list the
volumes that are mapped to the host by selecting Hosts → Mappings, as shown in
Figure 6-61.
To see volumes that are mapped to clusters instead of hosts, change the value that is
shown in the upper left (see Figure 6-62) from Private Mappings to Shared Mappings.
Note: You can use the filter to display only the hosts or volumes that you want to see.
The host can now access the mapped volume. For more information about discovering the
volumes on the host, see Chapter 7, “Hosts” on page 405.
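The CLI equivalent of mapping a volume to a host is the mkvdiskhostmap command, for example, with hypothetical host and volume names and an explicitly assigned SCSI ID (a corresponding host cluster mapping command is used for host clusters):
IBM_Storwize:ITSO:superuser>mkvdiskhostmap -host host01 -scsi 0 volume01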
To remove the volume to host mapping, in the Hosts → Mappings view, select the volume or
volumes, right-click, and click Unmap Volumes, as shown in Figure 6-63.
In the Delete Mapping window, enter the number of volumes that you intend to unmap, as
shown in Figure 6-64. This action is a security measure that reduces the risk of
accidentally unmapping the wrong volume.
Note: Removing volume to host mapping makes the volume unavailable to the host. Make
sure that the host is prepared for the operation. An improperly run volume unmap
operation might cause data unavailability or loss.
Click Unmap to complete the operation. Volume mapping is removed and is no longer
displayed in the volume map view, as shown in Figure 6-65.
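The CLI equivalent is the rmvdiskhostmap command, for example (the host and volume names are hypothetical):
IBM_Storwize:ITSO:superuser>rmvdiskhostmap -host host01 volume01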
Note: Certain conditions prevent dynamically changing the I/O group of a volume with
non-disruptive volume move (NDVM). If the volume is using data reduction in a DRP, or if a
volume is a member of a FlashCopy map and is in an RC relationship, the first command in
the sequence, addvdiskaccess, fails.
If there are no host mappings for the volumes, then the operation immediately displays the
target I/O group selection dialog box, as shown in Figure 6-67.
2. Select the new I/O group and preferred node and click Move to move the volume to the
new I/O group and preferred node. This GUI action runs the following commands:
– addvdiskaccess -iogrp {new i/o group} {volume}
Adds the specified I/O group to the set of I/O groups in which the volume can be made
accessible to hosts.
– movevdisk -iogrp {new i/o group} {volume}
Moves the preferred node of the volume to the new (target) caching I/O group.
– rmvdiskaccess -iogrp {old i/o group} {volume}
Removes the old (source) I/O group from the set of I/O groups in which the volume can
be made accessible to hosts.
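For example, moving a volume that is named volume01 (a hypothetical name) from io_grp0 to io_grp1 runs the following sequence:
IBM_Storwize:ITSO:superuser>addvdiskaccess -iogrp io_grp1 volume01
IBM_Storwize:ITSO:superuser>movevdisk -iogrp io_grp1 volume01
IBM_Storwize:ITSO:superuser>rmvdiskaccess -iogrp io_grp0 volume01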
In the likely case where the volume is mapped to a host, the GUI detects the host mapping
and starts a wizard, as shown in Figure 6-68, to ensure that the correct steps are performed
in the correct order. You are required to configure zoning between the host and the new I/O
group and ensure that all hosts to which the volume is mapped discover new paths to the
volume. The steps that are required to modify the I/O group of a mapped volume are shown
below.
Figure 6-68 Modify I/O Group for a mapped volume wizard: Welcome
1. Verify that all hosts that use the volume are zoned to the target I/O group, and click Next to
proceed to the new I/O group selection window, as shown in Figure 6-69.
Figure 6-69 Modify I/O Group for a mapped volume wizard: I/O group selection window
Figure 6-70 Modify I/O Group for a mapped volume wizard: First stage completed (details)
3. Click Close to proceed to the validation window, as shown in Figure 6-71 on page 367.
Click Need Help to see information about how to prepare the host for the volume move.
After the host is ready for the volume path change, check the box confirming that path
validation was performed on the host.
Figure 6-71 Modify I/O Group for a mapped volume wizard: Validation
Note: Failure to ensure that the host discovered the new paths to all the volumes might
result in this process being disruptive and cause the host to lose access to the moved
volume or volumes.
4. After validation is complete and the acknowledgment box is checked, click Apply and
Next, as shown in Figure 6-72.
Figure 6-72 Modify I/O Group for a mapped volume wizard: Second stage completes (details)
Figure 6-73 Modify I/O Group for a mapped volume wizard: Operation complete
There are two ways to perform volume migration: by using the volume migration feature and
by creating a volume copy.
Note: You cannot move a volume copy that is compressed to an I/O group that contains a
node that does not support compressed volumes.
To migrate a volume to another storage pool, complete the following steps:
1. In the Volumes menu, highlight the volume that you want to migrate. Select Actions →
Migrate to Another Pool…, as shown in Figure 6-74.
3. Select the new target storage pool and click Migrate, as shown in Figure 6-75. The Select
a Target Pool window displays the list of all pools that are a valid migration copy target for
the selected volume copy.
4. You are returned to the Volumes view. The time that it takes for the migration process to
complete depends on the size of the volume. The status of the migration can be monitored
by selecting Monitoring → Background Tasks, as shown in Figure 6-76.
After the migration task completes, the completed migration task is visible in the Recently
Completed Task window of the Background Tasks menu, as shown in Figure 6-77 on
page 371.
Figure 6-77 Volume migration complete
In the Volumes → Volumes menu, the volume copy is now displayed in the target storage
pool, as shown in Figure 6-78.
The volume copy is now migrated without any host or application downtime to the new
storage pool.
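The CLI equivalent of this migration is the migratevdisk command, and the progress can also be checked by running the lsmigrate command, for example (the pool and volume names are hypothetical):
IBM_Storwize:ITSO:superuser>migratevdisk -mdiskgrp Pool2 -vdisk volume01
IBM_Storwize:ITSO:superuser>lsmigrate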
Another way to migrate single-copy volumes to another pool is to use the volume copy
feature, as described in “Volume migration by adding a volume copy” on page 372.
Note: Migrating a volume between storage pools with different extent sizes is not
supported. If you must migrate a volume to a storage pool with a different extent size, use
the volume migration by adding a volume copy method.
The easiest way to migrate volumes is to use the migration feature that is described in 6.5.10,
“Migrating a volume to another storage pool” on page 368. However, in some use cases, the
preferred or only method of volume migration is to create a copy of the volume in the target
storage pool and then remove the old copy.
Note: You can specify storage efficiency characteristics of the new volume copy differently
than the ones of the primary copy. For example, you can make a thin-provisioned copy of a
standard-provisioned volume.
This volume migration option can be used only for single-copy volumes. If you need to
move a copy of a mirrored volume by using this method, you must delete one of the volume
copies first and then create a copy in the target storage pool. This process causes a
temporary loss of redundancy while the volume copies synchronize.
To migrate a volume by using the volume copy feature, complete the following steps:
1. Select the volume that you want to move, and select Actions → Add Volume Copy, as
shown in Figure 6-79.
2. Create a second copy of your volume in the target storage pool, as shown in Figure 6-80.
You can modify the capacity savings options for the new volume copy. In our example, a
compressed copy of the volume is created in target pool Pool2. The Deduplication option
is not available if either of the volume copies is not in a DRP. Click Add to proceed.
Figure 6-82 Setting the volume copy in the target storage pool as the primary copy
4. Split or delete the volume copy in the source pool, as shown in Figure 6-83.
5. Confirm the removal of the volume copy, as shown in Figure 6-84 on page 375.
Figure 6-84 Confirming the deletion of a volume copy
6. The Volumes view now shows that the volume has a single copy in the target pool, as
shown in Figure 6-85.
Migrating volumes by using the volume copy feature requires more user interaction, but might
be a preferred option for particular use cases. One such example is migrating a volume from
a tier 1 storage pool to a lower performance tier 2 storage pool.
First, the volume copy feature can be used to create a copy in the tier 2 pool (steps 1 and 2).
All reads are still performed in the tier 1 pool to the primary copy. After the volume copies are
synchronized (step 3), all writes are destaged to both pools, but the reads are still done only
from the primary copy.
To test the performance of the volume in the new pool, switch the roles of the volume copies
to make the new copy the primary (step 4). If the performance is acceptable, the volume copy
in tier 1 can be split or deleted. If the tier 2 pool shows unsatisfactory performance, switch the
primary volume copy to one that is backed by tier 1 storage.
With this method, you can migrate between storage tiers with a fast and secure back-out
option.
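The same sequence can also be driven from the CLI, as in the following sketch (the pool name, volume name, and copy IDs are hypothetical):
IBM_Storwize:ITSO:superuser>addvdiskcopy -mdiskgrp Pool2 volume01
IBM_Storwize:ITSO:superuser>lsvdisksyncprogress
IBM_Storwize:ITSO:superuser>chvdisk -primary 1 volume01
IBM_Storwize:ITSO:superuser>rmvdiskcopy -copy 0 volume01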
For more information about how to set up CLI access, see Appendix C, “Command-line
interface setup” on page 925.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
You must know the following information before you start to create the volume:
In which storage pool the volume will have its extents.
From which I/O group the volume will be accessed.
Which IBM Spectrum Virtualize node will be the preferred node for the volume.
Size of the volume.
Name of the volume.
Type of the volume.
Whether this volume is to be managed by IBM Easy Tier to optimize its performance.
When you are ready to create your striped volume, run the mkvdisk command. The command
that is shown in Example 6-2 creates a 10 GB striped volume within the storage pool Pool0
and assigns it to the I/O group io_grp0. Its preferred node is node 1. The volume is given ID 8
by the system.
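A command of the following form produces this result (a sketch; the volume name is hypothetical, and Example 6-2 shows the exact invocation):
IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -node 1 -size 10 -unit gb -name striped_volume01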
To verify the results, run the lsvdisk command and provide the volume ID as the command
parameter, as shown in Example 6-3.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction
0.00MB
The required tasks to create a volume are complete.
Disk size: When the -rsize parameter is used to specify the real physical capacity of
a thin-provisioned volume, the following options are available to specify the physical
capacity: disk_size, disk_size_percentage, and auto.
Use the disk_size_percentage option to define initial real capacity by using a percentage
of the disk’s virtual capacity that is defined by the -size parameter. This option takes as
a parameter an integer, or an integer that is immediately followed by the percent (%)
symbol.
Use the disk_size option to directly specify the real physical capacity by specifying its size
in the units that are defined by using the -unit parameter (the default unit is MB). The
-rsize value can be greater than, equal to, or less than the size of the volume.
The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.
When an image mode volume is created, it maps directly to the previously unmanaged MDisk from which it is created. Therefore, except for a thin-provisioned image mode volume, the volume's LBA x equals MDisk LBA x.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.
Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks.
The remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Running the mkvdisk command to create an image mode volume is shown in Example 6-5.
As shown in this example, an image mode volume that is named Image_Volume_A is created
that uses the mdisk25 MDisk. The MDisk is moved to the storage pool ITSO_Pool1, and the
volume is owned by the I/O group io_grp0.
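A command along the following lines performs this operation; it is a sketch, so verify the MDisk and pool names in your configuration:
IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp ITSO_Pool1 -iogrp io_grp0 -mdisk mdisk25 -vtype image -name Image_Volume_A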
If you run the lsvdisk command, it shows a volume that is named Image_Volume_A with the
type image, as shown in Example 6-6.
Volume mirroring can be also used as an alternative method of migrating volumes between
storage pools.
To create a copy of a volume, run the addvdiskcopy command. This command creates a copy
of the chosen volume in the specified storage pool, which changes a non-mirrored volume
into a mirrored one.
The following scenario shows how to create a copy of a volume in a different storage pool. As
shown in Example 6-7, the volume initially has a single copy with copy_id 0 that is provisioned
in pool Pool0.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
Example 6-8 shows adding the second volume copy by running the addvdiskcopy command.
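A sketch of such an invocation follows; the volume ID 2 (vdisk0) and the target pool Pool1 match the surrounding examples and are otherwise assumptions:
IBM_Storwize:ITSO:superuser>addvdiskcopy -mdiskgrp Pool1 2
Vdisk [2] copy [1] successfully created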
During the synchronization process, you can see the status by running the
lsvdisksyncprogress command.
As shown in Example 6-9 on page 383, the first time that the status is checked, the synchronization progress is at 0%, and the estimated completion time is 201018202305. The estimated completion time is displayed in the YYMMDDHHMMSS format; in our example, it is 2020, Oct-18, 20:23:05. When the command is run again, the progress status is at 100%, and the synchronization is complete.
Example 6-9 Synchronization
IBM_Storwize:ITSO:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk0 1 0 201018202305
IBM_Storwize:ITSO:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk0 1 100
As shown in Example 6-10, the new volume copy (copy_id 1) was added and appears in the
output of the lsvdisk command.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
copy_id 1
status online
sync yes
auto_delete no
primary no
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
When adding a volume copy, you can define it with different parameters than the original
volume copy. For example, you can create a thin-provisioned copy of a standard-provisioned
volume to migrate a thick-provisioned volume to a thin-provisioned volume. The migration can
be also done in the opposite direction.
Volume copy mirror parameters: To change the parameters of a volume copy, you must
delete the volume copy and redefine it with the new values.
Example 6-12 shows a shortened lsvdisk output for an uncompressed volume with a single volume copy.
Example 6-13 adds a compressed copy with the -autodelete flag set.
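A sketch of such an invocation follows; the volume name and the real-capacity settings (-rsize 2% with -autoexpand) are assumptions for illustration:
IBM_Storwize:ITSO:superuser>addvdiskcopy -mdiskgrp Pool1 -rsize 2% -autoexpand -compressed -autodelete UNCOMPRESSED_VOL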
Example 6-14 shows the lsvdisk output with another compressed volume (copy 1) and
volume copy 0 being set to auto_delete yes.
copy_id 0
status online
sync yes
auto_delete yes
primary yes
...
copy_id 1
status online
sync no
auto_delete no
primary no
...
When copy 1 is synchronized, copy 0 is deleted. You can monitor the progress of volume copy
synchronization by running the lsvdisksyncprogress command.
If the copy that you are splitting is not synchronized, you must use the -force parameter. If
you are attempting to remove the only synchronized copy of the source volume, the command
fails. However, you can run the command when either copy of the source volume is offline.
Example 6-15 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a volume that is named SPLIT_VOL from a copy with ID 1 of the volume that is
named VOLUME_WITH_MIRRORED_COPY.
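A sketch of the invocation follows, using the copy ID and names from the description above:
IBM_Storwize:ITSO:superuser>splitvdiskcopy -copy 1 -name SPLIT_VOL VOLUME_WITH_MIRRORED_COPY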
As you can see in Example 6-16, the new volume is created as an independent volume.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
Tips: Changing the I/O group with which this volume is associated requires a flush of the
cache within the nodes in the current I/O group to ensure that all data is written to disk. I/O
must be suspended at the host level before you perform this operation.
If the volume has a mapping to any hosts, it is impossible to move the volume to an I/O
group that does not include any of those hosts.
This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O group.
If the -force parameter is used and the system cannot destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.
If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
If any Remote Copy (RC), IBM FlashCopy, or host mappings still exist for the target of the rmvdisk command, the delete fails unless the -force flag is specified. This flag causes the deletion of the volume and any volume-to-host mappings and copy mappings.
If the volume is being migrated to image mode, the delete fails unless the -force flag is
specified. Using the -force flag halts the migration and then deletes the volume.
If the command succeeds (without the -force flag) for an image mode volume, the write
cache data is flushed to the storage before the volume is removed. Therefore, the underlying
LU is consistent with the disk state from the point of view of the host that uses the image
mode volume (crash-consistent file system). If the -force flag is used, consistency is not
ensured, that is, the data that the host believes to be written might not be present on the LU.
If any non-destaged data exists in the fast write cache for the target of the rmvdisk command, the deletion of the volume fails unless the -force flag is specified, in which case any non-destaged data in the fast write cache is deleted.
Example 6-17 shows how to run the rmvdisk command to delete a volume from your
IBM Spectrum Virtualize configuration.
This command deletes the volume_A volume from the IBM Spectrum Virtualize configuration.
If the volume is assigned to a host, you must use the -force flag to delete the volume, as
shown in Example 6-18.
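Sketches of both invocations follow; the second form with -force is needed when the volume is still mapped to a host:
IBM_Storwize:ITSO:superuser>rmvdisk volume_A
IBM_Storwize:ITSO:superuser>rmvdisk -force volume_A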
To set the time interval for which the volume must be idle before it can be deleted from the
system, run the chsystem command. This setting affects the following commands:
rmvdisk
rmvolume
rmvdiskcopy
rmvdiskhostmap
rmmdiskgrp
rmhostiogrp
rmhost
rmhostport
These commands fail unless the volume was idle for the specified interval or the -force
parameter was used.
To enable volume protection by setting the required inactivity interval, run the following
command:
svctask chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60
Assuming that your OS supports expansion, you can run the expandvdisksize command to
increase the capacity of a volume, as shown in Example 6-19.
This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it
a total size of 40 GB.
To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 6-20 on page 391. This command changes the real size of the volume_B volume to a
real capacity of 55 GB. The capacity of the volume is unchanged.
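Sketches of both invocations follow; the 5 GB increment for the real capacity of volume_B is an assumption for illustration:
IBM_Storwize:ITSO:superuser>expandvdisksize -size 5 -unit gb volume_C
IBM_Storwize:ITSO:superuser>expandvdisksize -rsize 5 -unit gb volume_B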
Example 6-20 The lsvdisk command
IBM_Storwize:ITSO:superuser>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes
Important: If a volume is expanded, its type becomes striped, even if it was previously
sequential or in image mode.
If not enough extents are available to expand your volume to the specified size, the
following error message is displayed:
CMMVC5860E The action failed because there were not enough extents in the
storage pool.
When the host bus adapter (HBA) on the host scans for devices that are attached to it, the
HBA discovers all of the volumes that are mapped to its FC ports and their SCSI identifiers
(SCSI LUN IDs).
For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you do not
specify a SCSI LUN ID when mapping a volume to the host, the storage system automatically
assigns the next available SCSI LUN ID based on any mappings that exist with that host.
Note: The SCSI-3 standard requires LUN 0 to exist on every SCSI target. This LUN must
implement a number of standard commands, including Report LUNs. However, this LUN
does not have to provide any storage capacity.
Example 6-21 shows how to map volumes volume_B and volume_C to the defined host
Almaden by running the mkvdiskhostmap command.
Example 6-22 shows the output of the lshostvdiskmap command, which shows that the
volumes are mapped to the host.
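Sketches of the mapping and verification commands follow; SCSI IDs are assigned automatically here because the -scsi parameter is not specified:
IBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_B
IBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_C
IBM_Storwize:ITSO:superuser>lshostvdiskmap Almaden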
Certain HBA device drivers stop when they find a gap in the sequence of SCSI LUN IDs, as
shown in the following examples:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.
In the output of the command, you can see that only one volume (volume_A) is mapped to the
host Siam. The volume is mapped with SCSI LUN ID 0.
Specifying the flag before the hostname: Although the -delim flag normally comes at
the end of the command string, you must specify this flag before the hostname in this case.
Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
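For example, a sketch of the invocation with the comma delimiter for the host Siam follows; note the position of the -delim flag before the hostname:
IBM_Storwize:ITSO:superuser>lshostvdiskmap -delim , Siam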
You can also run the lshostclustervolumemap command to show the volumes that are
mapped to a specific host cluster, as shown in Example 6-25.
This command shows the list of hosts to which the volume volume_B is mapped.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.
This command unmaps the volume that is called volume_D from the host that is called Tiger.
You can also run the rmvolumehostclustermap command to delete a volume mapping from a
host cluster, as shown in Example 6-28.
This command unmaps the volume that is called UNCOMPRESSED_VOL from the host cluster that
is called vmware_cluster.
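Sketches of both unmapping commands follow; the object names match the descriptions above, and the parameter order of rmvolumehostclustermap shown here is an assumption:
IBM_Storwize:ITSO:superuser>rmvdiskhostmap -host Tiger volume_D
IBM_Storwize:ITSO:superuser>rmvolumehostclustermap -hostcluster vmware_cluster UNCOMPRESSED_VOL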
Note: Removing a volume that is mapped to the host makes the volume unavailable for I/O
operations. Ensure that the host is prepared for this situation before removing a volume
mapping.
The command that is shown in Example 6-29 moves volume_C to the storage pool that is
named STGPool_DS5000-1.
Note: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.
You can use the optional -threads parameter to control the priority of the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority relative to other types of I/O, you can specify 3, 2, or 1.
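A sketch of the invocation follows; the -threads parameter is optional and is shown here only to illustrate the priority setting:
IBM_2145:ITSO_CLUSTER:superuser>migratevdisk -vdisk volume_C -mdiskgrp STGPool_DS5000-1 -threads 4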
You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 6-30.
IBM_2145:ITSO_CLUSTER:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
To migrate a fully managed volume to an image mode volume, the following rules apply:
Cloud snapshots must not be enabled on the source volume.
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state.
Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration.
If the migration is interrupted by a system recovery or cache problem, the migration
resumes after the recovery completes.
Example 6-31 shows running the migratetoimage command to migrate the data from
volume_A onto mdisk10, and to put the MDisk mdisk10 into the STGPool_IMAGE storage pool.
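A sketch of the invocation follows, using the object names from the description above:
IBM_2145:ITSO_CLUSTER:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE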
You can use the shrinkvdisksize command to shrink the physical capacity of a volume or to reduce the virtual capacity of a thin-provisioned volume without altering the physical capacity that is assigned to the volume. To change the volume size, use the following parameters:
For a standard-provisioned volume, use the -size parameter.
For a thin-provisioned volume’s real capacity, use the -rsize parameter.
For a thin-provisioned volume’s virtual capacity, use the -size parameter.
When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled.
If the volume contains data that is being used, do not shrink the volume without backing up the data first. The system reduces the capacity of the volume by removing arbitrarily chosen extents from those that are allocated to the volume. You cannot control which extents are removed. Therefore, you cannot assume that it is unused space that is removed.
Image mode volumes cannot be reduced in size. To reduce their size, first they must be
migrated to fully managed mode.
Before the shrinkvdisksize command is used on a mirrored volume, all copies of the volume
must be synchronized.
Important: Consider the following guidelines when you are shrinking a disk:
If the volume contains data or host-accessible metadata (for example, an empty
physical volume of an LVM), do not shrink the disk.
This command can shrink a FlashCopy target volume to the same capacity as the
source.
Before you shrink a volume, validate that the volume is not mapped to any host objects.
You can determine the exact capacity of the source or master volume by running the
svcinfo lsvdisk -bytes vdiskname command.
Example 6-32 shows running the shrinkvdisksize command to reduce the size of volume
volume_D from a total size of 80 GB by 44 GB to the new total size of 36 GB.
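A sketch of the invocation follows; note that the -size value specifies the amount to shrink by (44 GB), not the final size:
IBM_Storwize:ITSO:superuser>shrinkvdisksize -size 44 -unit gb volume_D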
If you want to know more about these MDisks, you can run the lsmdisk command and provide
the MDisk ID that is listed in the output of the lsvdiskmember command as a parameter.
0,A_MIRRORED_VOL_1,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
00002,0,1,empty,0,no,0,0,Pool0,no,yes,0,A_MIRRORED_VOL_1,
2,VOLUME_WITH_MIRRORED_COPY,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498
00000000000004,0,1,empty,0,no,0,0,Pool0,no,yes,2,VOLUME_WITH_MIRRORED_COPY,
3,THIN_PROVISION_VOL_1,0,io_grp0,online,0,Pool0,100.00GB,striped,,,,,6005076400F58004980000
0000000005,0,1,empty,1,no,0,0,Pool0,no,yes,3,THIN_PROVISION_VOL_1,
6,MIRRORED_SYNC_RATE_16,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004980000
0000000008,0,1,empty,0,no,0,0,Pool0,no,yes,6,MIRRORED_SYNC_RATE_16,
7,THIN_PROVISION_MIRRORED_VOL,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004
9800000000000009,0,1,empty,1,no,0,0,Pool0,no,yes,7,THIN_PROVISION_MIRRORED_VOL,
8,Tiger,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F580049800000000000010,0,1,e
mpty,0,no,0,0,Pool0,no,yes,8,Tiger,
9,UNCOMPRESSED_VOL,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
00011,0,1,empty,0,no,1,0,Pool0,no,yes,9,UNCOMPRESSED_VOL,
12,vdisk0_restore,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004980000000000
000E,0,1,empty,0,no,0,0,Pool0,no,yes,12,vdisk0_restore,
13,vdisk0_restore1,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
0000F,0,1,empty,0,no,0,0,Pool0,no,yes,13,vdisk0_restore1,
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
To learn more about these storage pools, run the lsmdiskgrp command as described in
Chapter 5, “Storage pools” on page 237.
Before you trace a volume, you must unequivocally map a logical device that is seen by the
host to a volume that is presented by the storage system. The best volume characteristic for
this purpose is the volume ID. This ID is available to the OS in the Vendor Specified Identifier
field of page 0x80 or 0x83 (vital product data (VPD)), which the storage device sends in
response to the SCSI INQUIRY command from the host.
In practice, the ID can be obtained from the multipath driver in the OS. After you know the
volume ID, you can use it to identify the physical location of data.
Note: For sequential and image mode volumes, a volume copy is mapped to exactly one MDisk. This is usually not the case for striped volumes unless the volume size is smaller than the extent size. Therefore, a single striped volume typically uses multiple MDisks.
For example, on a Linux host running a native multipath driver, you can use the output of the
command multipath -ll to find the volume ID, as shown in Example 6-37.
Note: The volume ID that is shown in the output of multipath -ll is generated by the Linux
scsi_id. For systems that provide the VPD by using page 0x83 (such as IBM Spectrum
Virtualize devices), the ID that is obtained from the VPD page is prefixed by the number 3,
which is the Network Address Authority (NAA) type identifier. Therefore, the volume NAA
identifier (that is, the volume ID that is obtained by running the SCSI INQUIRY command)
starts at the second displayed digit. In Example 6-37, the volume ID starts with digit 6.
Look for the VDisk unique identifier (UID) that matches the volume UID that was identified earlier, and note the volume name (or ID) of the volume with this UID.
2. To list the MDisks that contain extents that are allocated to the specified volume, run the
lsvdiskmember vdiskname command, as shown in Example 6-39.
3. For each of the MDisk IDs that were obtained in step 2, run the lsmdisk mdiskID
command to discover the MDisk controller and LUN information. Example 6-40 shows the
output for mdisk0. The output displays the back-end storage controller name and the
controller LUN ID to help you to track back to a LUN within the disk subsystem.
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
You can identify the back-end storage that is presenting the LUN by using the value of the
controller_name field that was returned for the MDisk.
On the back-end storage, you can identify which physical disks make up the LUN that was presented to the IBM Spectrum Virtualize system by using the volume ID that is displayed in the UID field.
Chapter 7. Hosts
This chapter describes the host configuration procedures that are required to attach
supported hosts to the storage systems and documents the available ways of host
attachment, including Non-Volatile Memory Express (NVMe) over Fabric (NVMe-oF), Fibre
Channel Small Computer System Interface (SCSI) (FC-SCSI), serial-attached SCSI (SAS),
and internet Small Computer Systems Interface (iSCSI).
This chapter also explains host clustering representation in the storage system and N_Port ID
Virtualization (NPIV) support for a host-to-storage system communication.
The ability to consolidate storage for attached open systems hosts provides the following
benefits:
Easier storage management.
Increased utilization rate of the installed storage capacity.
Advanced Copy Services functions that are offered by storage systems, which are
independent from host and external storage (if external storage virtualization is used)
vendors.
Only a multipath driver is required for attached hosts. You do not need a specialized
storage vendor-specific driver.
Hosts can be connected to the storage systems by using any of the following protocols:
Fibre Channel Protocol (FCP)
Fibre Channel over Ethernet (FCoE)
iSCSI
SAS
iSCSI Extensions for Remote Direct Memory Access (RDMA) (iSER)
Starting with IBM FlashSystem 5100 and IBM Spectrum Virtualize V8.2, NVMe over Fabrics (NVMe-oF) using Fibre Channel (FC-NVMe, also referred to as NVMe over FC) is supported.
Hosts that connect to the storage systems by using fabric switches that use the FC, FCoE, or
FC-NVMe protocols must be zoned correctly, as described in Chapter 2, “Planning” on
page 71. N-Port ID Virtualization support (supported from IBM Spectrum Virtualize V7.7
onwards) plays a central role because it is required for FC-NVMe connectivity.
Hosts that connect to the systems by using the iSCSI protocol must be configured correctly,
as described in Chapter 2, “Planning” on page 71.
Note: Certain host operating systems (OSs) can be directly connected to the
IBM FlashSystem storage systems without FC fabric switches. For more information, see
the IBM System Storage Interoperation Center (SSIC).
For correct volume representation and access through multiple paths from the host side, you must install a multipathing driver on the connected host. IBM FlashSystem family storage systems are supported by several OS-native multipathing drivers. Multipathing drivers also serve the following purposes:
Protection from fabric path failures, including port failures on IBM Spectrum Virtualize system nodes.
Protection from a host bus adapter (HBA) failure (if two HBAs are used).
Protection from fabric failures if the host is connected through two HBAs to two separate fabrics.
Load balancing across the host HBAs.
For more information about the OS-native multipath driver solutions and versions that are supported with IBM FlashSystem systems, see the SSIC.
For more information about how to attach various supported host OSs to the systems, see the
“Host Attachment” section of IBM Documentation.
If your host OS is not listed in the SSIC, you can submit a special request for support by contacting your IBM Business Partner, account manager, or IBM Support.
On the storage system, a host is represented by host objects, which must be configured by
using the GUI or command-line interface (CLI) and contain the necessary credentials for
host-to-storage communications. A real-world host receives access to storage capacity through a host object that is configured on the storage system and storage space that is mapped to that host object in the form of a volume.
IBM Spectrum Virtualize V8.4 supports configuring the following host objects:
Host
Host cluster (supported since Version 7.7.1)
Each host object has attributes that should be configured and that provide the status of the host as it is visible to the storage system.
A host cluster object groups multiple hosts that are working as a cluster. A host cluster object
is treated as a single entity so that multiple hosts can access the same volumes with a single
shared mapping.
Volumes that are mapped to a host cluster are assigned to all members of the host cluster
with the same SCSI ID.
A typical use case for a host cluster object is to group all the hosts that are members of a host OS-based cluster, such as IBM PowerHA® or Microsoft Cluster Server, and present them as a single entity that shares access to the volumes, with improved and simplified control. The following commands deal with host and host cluster objects:
Commands that provide information about defined hosts and host clusters (start with ls
(list)):
– lshostcluster
– lshostclustermember
– lshostclustervolumemap
– lshost
– lshostiogrp
For more information about each command, see IBM Documentation and select
Command-line interface → Host commands. The instructions to perform basic tasks on
hosts and host clusters are provided in 7.6, “Performing hosts operations by using CLI” on
page 461.
The Version 8.2 limitation of being able to attach only one of SCSI or NVMe host types per I/O group was removed in Version 8.3. You can now run SCSI and NVMe in parallel. The limit of 512 host objects per I/O group remains in place. When you run SCSI and NVMe in parallel, there are limits for each protocol, as shown in Table 7-1.
SCSI host objects per I/O group: 496. NVMe host objects per I/O group: 16. Total host objects per I/O group: 512.
a. IBM Storwize V5100 systems have a maximum of 256 hosts.
Note: Although the specifications that are shown in Table 7-1 are the maximum number of each type of host attachment that you can have, NVMe host objects that are not defined do not reduce the total number of host objects that you can have. For example, if you have 10 NVMe host objects defined, you can have up to 502 SCSI host objects defined. The only hard limit is 16 NVMe host
objects per I/O group. You should be diligent when planning a parallel SCSI and NVMe
deployment as described in Chapter 2, “Planning” on page 71 because it can be
resource-intensive, especially with large deployments (many hosts). This situation occurs
because NVMe is more sensitive to delays than SCSI. It is a best practice to check the
Configuration Limits page for your product and 7.3, “NVMe over Fibre Channel” on
page 408.
To avoid any potential interoperability problems, a volume can be mapped to a host only by
using one protocol. IBM FlashCopy, volume mirroring, Remote Copy (RC), and Data
Reduction Pools (DRPs) are all supported by NVMe-oF. Starting with Version 8.3.1, there is
support for stretched cluster configurations, and starting with Version 8.4, HyperSwap is
supported for NVMe-oF attached hosts.
Note: In Version 8.4, HyperSwap and Non-disruptive Volume Move (NDVM) support is
available for FC-NVMe hosts because IBM Spectrum Virtualize is using Asymmetric
Namespace Access (ANA) reporting. The following features are available for FC-NVMe
attached hosts:
Sites can be defined to facilitate awareness of HyperSwap volume site properties.
It is possible to map HyperSwap volumes by using multiple I/O groups on the same and
different sites.
Hosts can use I/O through a non-optimized path even if the primary site is available.
The ability to fail over to the secondary site if the primary site is down.
For more information about NVMe, see IBM Storage and the NVM Express Revolution,
REDP-5437.
Note: The IBM FlashSystem 5010 can have only one I/O group and does not support clustering with other IBM FlashSystem 5010 control enclosures.
This model has a pair of distinct control modules, known as nodes, that share active/active access to any specific volume within the same I/O group. Each of these nodes has its own FC worldwide node name (WWNN). Ports from each node's network adapter or HBA have a set of worldwide port names (WWPNs) that are presented to the fabric.
Traditionally, if one node fails or is removed for some reason, the paths that are presented for
volumes from that node to a host go offline. In this case, it is up to the native OS multipathing
software to fail over from using both sets of WWPNs to only those nodes and paths that
remain online.
Although this scenario is exactly what multipathing software is designed for, it can occasionally be problematic, particularly if paths are not seen as coming back online for some reason. It also relies on the correct configuration and implementation of the specific multipath driver.
Starting with Version 7.7, the implementation of NPIV mode is available on the storage
systems.
When NPIV mode is enabled on the storage systems, target ports (also known as host attach ports) that are dedicated only to host communication become available, which efficiently separates internode communication from host I/O. Host attach ports can be moved between the nodes of the same I/O group transparently to the host, and host-dedicated ports do not come online until they are ready to service I/O, which improves host behavior when storage nodes leave or join the storage cluster for any reason. If one node in an I/O group is offline, moving its host attach ports to the online node in the same I/O group masks the path failures that are caused by the offline node from hosts, and the multipathing driver does not need to perform any path recovery.
When NPIV is enabled on the storage system, each physical WWPN reports up to four virtual
WWPNs, as listed in Table 7-2.
Primary port: The WWPN that communicates with back-end storage. It can be used for node-to-node traffic (local or remote).
Primary SCSI host attach port: The WWPN that communicates with hosts. It is a target port only. It is the primary port, so it is based on this local node's WWNN.
Failover SCSI host attach port: A standby WWPN that communicates with hosts and is brought online only if the partner node within the I/O group goes offline. This WWPN is the same as the primary host attach WWPN of the partner node.
Primary NVMe host attach port: The WWPN that communicates with hosts. It is a target port only. This WWPN is the primary port, so it is based on this local node's WWNN.
Failover NVMe host attach port: A standby WWPN that communicates with hosts and is brought online only if the partner node within the I/O group goes offline. This WWPN is the same as the primary host attach WWPN of the partner node.
Figure 7-1 shows the five WWPNs that are associated with a port when NPIV is enabled.
Figure 7-1 Allocation of NPIV virtual WWPN ports per physical port
Note: Figure 7-2 shows only two ports per node in detail, but the same situation applies for
all physical ports. The effect is the same for NVMe ports because they use the same NPIV
structure, but with the NVMe topology instead of regular SCSI.
Figure 7-2 Allocation of NPIV virtual WWPN ports per physical port after a node failure
Since Version 7.7, this process happens automatically when NPIV is enabled at the system level on the storage systems. This failover happens only between the two nodes in the same I/O group.
A transitional state enables the migration of hosts from earlier systems without NPIV to NPIV-enabled systems; hosts can be rezoned to the primary host attach WWPNs while the system is in this state.
The process to enable NPIV on a new system is slightly different than on an existing system.
For more information, see IBM Documentation.
Note: NPIV is supported for FC-based communication only. It is not supported for the
FCoE or iSCSI protocols.
7.4.1 NPIV prerequisites
Consider the following key points for NPIV enablement:
The system must be running Version 7.7 or later.
A Version 7.7 or later system with NPIV enabled as back-end storage for a system that is
earlier than Version 7.7 is not supported.
Both nodes within an I/O group should have identical hardware to enable failover to work
as expected.
The FC switches to which the system ports are attached must support NPIV and have this
feature enabled.
Node connectivity should be configured according to "Zoning requirements for N_Port ID virtualization" in IBM Documentation. Both nodes in one I/O group should have their equivalent ports connected to the same fabrics (switches); for example, port 1 of node 1 should be on the same fabric as port 1 of node 2.
7.4.2 Verifying the NPIV mode state for a new system installation
New systems with IBM Spectrum Virtualize V7.7 or later are NPIV-enabled by default. You can verify whether NPIV is enabled by completing the following steps and, if necessary, turn on NPIV (see step 2):
1. Run the lsiogrp command to list the I/O groups that are present in the system, as shown
in Example 7-1.
Example 7-1 shows that in our example we have one populated I/O group with ID 0 that contains two nodes and 10 virtual disks (VDisks). The other I/O groups are empty.
2. Run the lsiogrp <id> | grep fctargetportmode command for the specific I/O group ID to
display the fctargetportmode setting. If this setting is enabled, as shown in Example 7-2,
NPIV host target port mode is enabled. If NPIV mode is disabled, the fctargetportmode
parameter reports as disabled.
Example 7-2 Checking the NPIV mode by viewing the fctargetportmode field
IBM_IBM FlashSystem:FS9100:superuser>lsiogrp 0|grep fctargetportmode
fctargetportmode enabled
To enable NPIV mode on a storage system, it is necessary to complete the following actions:
1. Audit your SAN fabric layout and zoning rules because NPIV usage has strict
requirements. Ensure that equivalent ports are on the same fabric and in the same zone.
2. Check the path count between your hosts and the IBM Spectrum Virtualize system to
ensure that the number of paths is half of the usual supported maximum.
3. Run the lstargetportfc command to discover the primary host attach WWPNs (virtual
WWPNs), as shown in bold in Example 7-5. Those virtualized ports are not enabled for
host I/O communication yet (see the host_io_permitted column).
Example 7-5 Running the lstargetportfc command to get the primary host WWPNs (virtual WWPNs)
IBM_IBM FlashSystem:FS9100:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted virtualized
protocol
1 500507680140A288 500507680100A288 1 1 1 010A00 yes no scsi
2 500507680142A288 500507680100A288 1 1 000000 no yes scsi
3 500507680144A288 500507680100A288 1 1 000000 no yes nvme
4 500507680130A288 500507680100A288 2 1 1 010400 yes no scsi
5 500507680132A288 500507680100A288 2 1 000000 no yes scsi
6 500507680134A288 500507680100A288 2 1 000000 no yes nvme
7 500507680110A288 500507680100A288 3 1 1 010500 yes no scsi
8 500507680112A288 500507680100A288 3 1 000000 no yes scsi
9 500507680114A288 500507680100A288 3 1 000000 no yes nvme
10 500507680120A288 500507680100A288 4 1 1 010A00 yes no scsi
11 500507680122A288 500507680100A288 4 1 000000 no yes scsi
12 500507680124A288 500507680100A288 4 1 000000 no yes nvme
...
58 500507680C140009 500507680C000009 4 2 2 010900 yes no scsi
59 500507680C180009 500507680C000009 4 2 000000 no yes scsi
60 500507680C1C0009 500507680C000009 4 2 000000 no yes nvme
4. To enable virtualized ports for host I/O communication and still keep access to the hosts
that are using hardware-defined ports (not in bold in Example 7-5 on page 414), you must
enable transitional mode for NPIV on the system (see Example 7-6).
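On the CLI, a command along the following lines sets transitional mode for an I/O group; I/O group 0 is an assumption for illustration:
IBM_IBM FlashSystem:FS9100:superuser>chiogrp -fctargetportmode transitional 0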
Alternatively, to activate NPIV in transitional mode by using the GUI, select Settings → System → I/O Groups, as shown in Figure 7-3.
Then, check the current NPIV setting by viewing the NPIV column, which shows “disabled”
if NPIV is not enabled. Select the I/O group on which you want to enable NPIV and select
Actions → Change NPIV Settings, as shown in Figure 7-4.
Example 7-7 Host attach WWPNs (virtual WWPNs) permitting host traffic
IBM_IBM FlashSystem:FS9100:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted virtualized
protocol
1 500507680140A288 500507680100A288 1 1 1 010A00 yes no scsi
2 500507680142A288 500507680100A288 1 1 1 010A02 yes yes scsi
3 500507680144A288 500507680100A288 1 1 1 010A01 yes yes nvme
4 500507680130A288 500507680100A288 2 1 1 010400 yes no scsi
5 500507680132A288 500507680100A288 2 1 1 010401 yes yes scsi
6 500507680134A288 500507680100A288 2 1 1 010402 yes yes nvme
7 500507680110A288 500507680100A288 3 1 1 010500 yes no scsi
8 500507680112A288 500507680100A288 3 1 1 010501 yes yes scsi
9 500507680114A288 500507680100A288 3 1 1 010502 yes yes nvme
...
58 500507680C140009 500507680C000009 4 2 2 010900 yes no scsi
59 500507680C180009 500507680C000009 4 2 2 010901 yes yes scsi
60 500507680C1C0009 500507680C000009 4 2 2 010902 yes yes nvme
6. Add the primary host attach ports (virtual WWPNs) to the host zones, but do not remove
the IBM FlashSystem WWPNs that are in the zones. Example 7-8 shows a host zone to
the primary port WWPNs of the IBM FlashSystem nodes.
Example 7-9 shows that we added the primary host attach ports (virtual WWPNs) to our
example host zone so that we can change the host without disrupting its availability.
Example 7-9 Transitional host zone (added host attach ports are in bold)
zone: WINDOWS_HOST_01_IBM_FS9100
10:00:00:05:1e:0f:81:cc
50:05:07:68:01:40:A2:88
50:05:07:68:0C:11:00:09
50:05:07:68:01:42:A2:88
50:05:07:68:0C:15:00:09
7. With the transitional zoning active in the fabrics, ensure that the host is using the new
NPIV ports for host I/O. Example 7-10 on page 417 shows the pathing for our host before
and after adding the new host attach ports by using the old IBM Subsystem Device Driver
(SDD) Device Specific Module (SDDDSM) multipathing driver. The select count increases
on the new paths and stops on the old paths.
Example 7-10 Host device pathing: Before and after
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 1
Total Devices : 1
8. After all hosts are rezoned and the pathing is validated, change the system NPIV to
enabled mode by running the command that is shown in Example 7-11.
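A command along the following lines completes the change, again assuming I/O group 0:
IBM_IBM FlashSystem:FS9100:superuser>chiogrp -fctargetportmode enabled 0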
NPIV is enabled on the system, and the hosts use the virtualized WWPNs for I/O. To
complete the NPIV implementation, you can modify the host zones to remove the old primary
attach port WWPNs. Example 7-12 shows the final zone with the host HBA and the
IBM FlashSystem virtual WWPNs.
Note: If any hosts are still configured to use the physical ports on the system, the system
prevents you from changing fctargetportmode from transitional to enabled and shows
the following error:
CMMVC8019E Task could interrupt I/O and force flag not set.
7.5.1 Creating hosts
This section describes how to create FC, iSCSI, and NVMe connected host objects by using a
GUI. It is assumed that hosts are prepared for attachment and that the host WWPNs, iSCSI
initiator names, or NVMe Qualified Names (NQNs) are known. For more information, see the
“Host Attachment” section of IBM Documentation.
2. To create a host, click Add Host. If you want to create an FC host, go to “Creating Fibre
Channel host objects”. To create an iSCSI host, go to “Creating iSCSI host objects” on
page 429. To create an NVMe host, go to “Creating NVMe host objects” on page 430.
2. Enter a hostname and click the Host Port menu to get a list of all discovered WWPNs (see
Figure 7-8).
3. Select one or more WWPNs for your host from the list. The host WWPNs should be visible
on IBM FlashSystem storage if the hosts were zoned and presented to the storage system
correctly. If the hosts do not appear in the list, scan for new paths as required on the
respective OS and click the Rescan icon next to the WWPN box. If they still do not appear,
check the SAN zoning, make sure that hosts are connected and running, and then repeat
the scanning.
Creating offline hosts: If you want to create hosts that are offline or not connected at
the moment, it is also possible to enter the WWPNs manually. Enter them into the Host
Ports field to add them to the list.
4. If you want to add more ports to your Host, choose several WWPNs from the list to add all
ports that belong to the specific host.
6. If you set up object-based access control (OBAC) as described in Chapter 11, “Ownership
groups” on page 723, then select the Advanced section and choose the ownership group
that you want the host to be a part of from the Ownership Group menu, as shown in
Figure 7-10 on page 423.
Figure 7-10 Adding a host to an ownership group
Note: If the host cluster object was created, then the Host Clusters list appears in the
Advanced section, as shown in Figure 7-11. Use this list to add a host to the cluster.
After defining the FC hosts, you can create volumes and map them to the created hosts,
which is described in Chapter 6, “Volumes” on page 299.
First, the iSCSI configuration should be checked and modified in accordance with the planned
configuration, and the Ethernet ports must be configured to enable iSCSI communication.
2. In the iSCSI Configuration window, you can modify the system name, node names, and
provide an optional iSCSI Alias for each node, if needed (see Figure 7-13 on page 425).
Figure 7-13 iSCSI Configuration modification
3. The interface shows an Apply Changes prompt to apply any changes that are made before
continuing.
In the lower left of the configuration window, it is possible to configure internet Storage
Name Service (iSNS) addresses and Challenge Handshake Authentication Protocol
(CHAP) if they are needed in your environment.
Note: The authentication of hosts is optional. By default, it is disabled. The user can choose to enable CHAP authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host,
the IBM FlashSystem system does not allow it to perform I/O to volumes. Also, you can
assign a CHAP secret to the cluster.
5. Select the port to set the iSCSI IP information. Select Actions → Modify IP Settings. The
dialog box that is shown in Figure 7-15 opens.
6. After the IP address is configured for a port, click Modify to enable the configuration.
7. You can see that iSCSI is enabled for host I/O on the required interfaces by the presence
of “yes” in the Host Attach column (see Figure 7-16 on page 427).
Figure 7-16 Ports that are configured for the iSCSI connection
8. Repeat the above steps to configure all Ethernet ports that are planned for host
communication.
9. The iSCSI host connection is enabled after setting the IP address by default. There are
several actions that can be done with already configured ports, as shown in Figure 7-17.
For example, to disable any interfaces that you do not want to be used for host connections (such as ports that are used for replication only), select the configured port, and then select Actions → Modify iSCSI Hosts (or right-click the chosen port and select Modify iSCSI Hosts).
11. As a best practice, it is always good to isolate iSCSI traffic in a separate subnet. It is also possible to set a virtual local area network (VLAN) for the iSCSI traffic. To enable the VLAN, select Actions → Modify VLAN, as shown in Figure 7-19. The system informs you that at least two ports will be affected by the change. To see the details about the effect, click 2 ports affected (see Figure 7-20 on page 429). Make any necessary changes and click Modify.
Figure 7-20 VLAN settings: Details
The system is now configured and ready for iSCSI host use. Note the iSCSI Qualified Names (IQNs) of the storage node canisters (see Figure 7-13 on page 425) because they are necessary to configure access from the host to the storage. For more information about creating volumes and mapping them to a host, see Chapter 6, “Volumes” on page 299.
Note: Check the iSCSI configuration and make any modifications before creating iSCSI host objects (configuring the hosts) because some modifications might require redefining the host object or changing the configuration on the host.
2. Enter CHAP authentication credentials (if using it), then the hostname into the Name field,
and then the iSCSI initiator name into the iSCSI host IQN field. Click the plus sign (+) if
you want to add more initiator names to the host.
3. If you are connecting to an HP-UX or TPGS host, click the Host type field (you might need
to scroll down the window), and then select the correct host type. For our VMware Elastic
Sky X (ESX) host, we select VVOL. However, you can select Generic if you are not using
VMware vSphere Virtual Volumes (VVOLs).
4. Click Save to complete the host object creation.
5. Repeat the above steps for every iSCSI host that must be created. Figure 7-22 shows the
Hosts view window after creating the FC host and iSCSI host.
Note: To see whether your hosts and IBM FlashSystem system are compatible, see the
SSIC.
3. Click Save. Your host appears in the defined host list, as shown in Figure 7-25.
Note: As shown in Figure 7-25, it is possible to add the hosts that are not yet connected
to the system or are offline by using their known NQN. In this case, their status is
Offline until they are connected or turned on.
4. The storage system I/O group NQN must be configured on the host so that it can access
the mapped capacity. Also, you can use automatic discovery from the host to find the NQN
of the I/O group if the connection and zoning is done correctly. To discover the I/O group
NQN, run the lsiogrp command, as shown in Example 7-13 on page 433.
Example 7-13 The lsiogrp command
IBM_IBM FlashSystem:GLTLoaner:superuser>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 8
host_count 1
flash_copy_total_memory 20.0MB
flash_copy_free_memory 20.0MB
remote_copy_total_memory 20.0MB
remote_copy_free_memory 20.0MB
mirroring_total_memory 20.0MB
mirroring_free_memory 20.0MB
raid_total_memory 350.0MB
raid_free_memory 310.2MB
maintenance no
compression_active no
accessible_vdisk_count 8
compression_supported yes
max_enclosures 20
encryption_supported yes
flash_copy_maximum_memory 2048.0MB
site_id
site_name
fctargetportmode enabled
compression_total_memory 0.0MB
deduplication_supported yes
deduplication_active no
nqn nqn.1986-03.com.ibm:nvme:2145.000002042140049E
You can now configure your NVMe host to use the storage system as a target.
Note: For more information about a compatibility matrix and supported hardware, see
IBM Documentation and the SSIC.
The host cluster object is useful for hosts that are clustered on OS levels. Examples are
Microsoft Clustering Server, IBM PowerHA®, Red Hat Cluster Suite, and VMware ESX. By
defining a host cluster object, a user can map one or more volumes to this host cluster object.
As a result, the volume or set of volumes are mapped, and access is shared by all individual
host objects that are included into the host cluster object. Note that each of the volumes is
mapped by using the same SCSI ID to each host that is part of the host cluster by running a
single command.
Note: For example, SCSI IDs 0 - 100 can be used for individual host assignments, and SCSI IDs that are greater than 100 can be used for host cluster shared mappings. By using such a policy, specific volumes are not shared, and common volumes for the host cluster can be shared. For example, the boot volume of each host can be kept private, while data and application volumes can be shared.
2. Click Create Host Cluster to open the wizard that is shown in Figure 7-27.
3. Enter a cluster name, and if applicable choose the ownership group that the hosts are a
part of. Then, you can select the individual hosts that you want in the cluster object by
pressing the Ctrl or Shift keys and selecting them, as shown in Figure 7-28 on page 435.
Click Next after you are done.
Figure 7-28 Host Cluster details definition
4. A summary opens in which you can confirm that you selected the correct hosts. Click
Make Host Cluster (see Figure 7-29).
5. After the task completes, the cluster that was created can be seen in the Host Clusters
view (see Figure 7-30).
From the Host Clusters view, many options are available to manage and configure the host
cluster. These options are accessed by selecting a cluster and clicking Actions (see
Figure 7-31).
7.5.3 Actions on hosts
This section covers host administration, including host modification, host mappings, and
deleting hosts. The basic host creation process is described in 7.5.1, “Creating hosts” on
page 419.
Select Hosts → Hosts view and right-click one of the existing hosts, or expand the Actions
menu. You see a list of actions that can be performed on a host, as shown in Figure 7-32.
Renaming a host
To rename a host, complete the following steps:
1. Select the host, right-click it, and select Rename.
2. Enter a new name and click Rename (see Figure 7-33). If you click Reset, the changes
are reset to the original hostname.
The Assign to Host Cluster action is active only if you select a host that does not belong to a host cluster and if at least one host cluster object exists. If no host cluster objects are configured, you must create one first.
Note: To select multiple objects, press and hold the Ctrl key and click each host that
you need, or press and hold the Shift key and click the first and the last objects that
must be selected.
2. Select the existing cluster to which you want to add the host, as shown in Figure 7-34, and
click Next.
3. Your storage system checks for SCSI ID conflicts. In a host cluster, all hosts must have the
same SCSI IDs for a mapped volume. For example, a single volume cannot be mapped
with SCSI ID 0 to one host and with SCSI ID 1 to another host.
If no SCSI ID conflict is detected, the system provides a list of configuration settings for
you to verify, as shown in Figure 7-35 on page 439. Click Assign to complete the
operation. When the operation completes, the host is included in all existing host cluster
volume mappings.
Figure 7-35 Assign host to host cluster confirmation
If a host already has private volume mappings that use SCSI IDs that are used in host
cluster shared mappings, a SCSI ID conflict is raised, as shown in Figure 7-36. In this
case, you cannot assign this host to the host cluster. First, you must resolve the ID conflict
by removing the private host volume mappings or by changing the assigned SCSI IDs for
conflicting mappings.
In this window, you can verify a list of hosts to be removed and make a choice about what to
do with the volume mappings of the hosts that are deleted. They can be removed, or retained
and converted from shared to private mappings. Click Remove Hosts to complete the
operation.
Note: Host cluster shared mappings are not shown in this view. Only host private mappings are listed. To modify the shared host cluster mappings, use another GUI view, as described in 7.5.4, “Actions on host clusters” on page 448.
3. To remove volume mappings, select the ones that need to be deleted, and click Remove
Volume Mappings. The next window prompts you to verify your changes and complete
the removal procedure.
4. If you intend to add a private mapping, click Add Volume Mappings. A list of volumes
appears, as shown in Figure 7-39. If a volume already has a private mapping to this host,
or it has a shared mapping with a host cluster that includes this host, it is not listed.
If a volume that you want to map is already mapped to another host or host cluster, you
see Yes in Host Mappings column. If you attempt to map that volume to the host, a
warning is shown (Figure 7-40). You can still continue to add a mapping if access is
coordinated at the host side.
Note: The SCSI ID of the volume can be changed only before it is mapped to a host.
Changing it afterward is a disruptive operation because the volume must be unmapped
from the host and mapped again with a new SCSI ID.
6. When the assignments are done, click Next to verify the prepared changes, and click Map
Volumes to complete the operation.
Notes:
When duplicating or importing mappings, all existing mappings are copied, both private
and shared. The shared mappings of an old host become the private mappings of a
new host.
You can duplicate mappings only for a host that does not have volumes mapped, or
import mappings only for a host that has no mappings.
Figure 7-42 Duplicate Mappings window
3. Select a target host and click Duplicate. After the operation completes, the target host has
the same volume mappings that the source host has. Both private and shared mappings
are duplicated. Mappings on the source host also remain, and they can be deleted
manually if necessary.
To import host mappings from an existing host to a new host, complete the following steps:
1. Right-click the new host that has no mapped volumes and select Import Volume Mappings. Note that if the host already has private or shared mappings, this action is inactive (disabled) in the Actions menu.
2. The Import Mappings window opens. Select the source host from which you want to
import the volume mappings, as shown in Figure 7-43, and click Import.
3. After the task completes, the host has the same volume mappings as the source host. Shared mappings in which the source host participates are imported as private. Mappings on the source host also remain, and they can be deleted manually if necessary.
Note: You can import mappings only from a source host that is in the same ownership group as your target host. If they are not in the same ownership group, the import fails with the message “The command failed because the objects are in different ownership groups”.
A host throttle sets the limit for combined read and write I/O to all mapped volumes. Other
hosts accessing the same set of volumes are not affected by a host throttle.
To create a host throttle, or change or remove an existing host throttle, complete the following
steps:
1. Select one host or several hosts, right-click, and select Edit Throttle.
2. The Edit Throttle for Host dialog opens, as shown in Figure 7-45 on page 445.
Figure 7-45 Edit Throttle for Host dialog
3. Specify the IOPS limit, Bandwidth limit, or both. Click Create to create a host throttle,
change the throttle limit and click Save to edit an existing throttle, or click Remove to
delete a host throttle.
4. When done editing or creating, click Close.
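Throttles can also be managed from the CLI. The following is a sketch only, assuming a host named Host-A; the parameter names can vary by code level, so verify them against the mkthrottle and lsthrottle command reference:
IBM_IBM FlashSystem:ITSO-FS7200:superuser>mkthrottle -type host -host Host-A -iops 10000 -bandwidth 200
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lsthrottle
The first command limits Host-A to 10,000 IOPS and 200 MBps, and the second lists all throttles that are configured on the system.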
To view and edit all the throttles that are configured on the system, right-click any of the hosts
and select View All Throttles. As shown in Figure 7-46, a list of all throttles that are
configured on the system appears. You can switch between throttle types by clicking the
drop-down menu next to the Actions menu. You can also change the view to see all the
system’s throttles in one list.
From this view, you can delete or edit any existing throttle by right-clicking it in the list and
selecting the required action.
Note: When you click Remove, the host loses access to the unmapped volumes.
Ensure that you run the required procedures on your host OS, such as unmounting the
file system, taking the disk offline, or disabling the volume group, before removing the
volume mappings from your host object on the GUI.
Removing a host
To remove a host object, complete the following steps:
1. Select the host or multiple hosts that must be removed, right-click them, and select
Remove.
2. Confirm that the window shows the correct list of hosts that you want to remove by
entering the number of hosts to remove and clicking Remove (see Figure 7-48).
3. If the host that you are removing has volumes that are mapped to it, you can force the
removal by selecting the Remove the hosts even if volumes are mapped to them
checkbox in the lower part of the window. When this option is selected, all volume
mappings of this host are deleted, and the host is removed.
Viewing IP logins
Right-click an iSCSI or iSER host and select IP Login Information to open a window where you can check the state of the host logins, as shown in Figure 7-49 on page 447. You can use the drop-down menu in the upper part of the window to switch between the IQNs of the host.
Figure 7-49 Viewing the IP login information
The Host Details window has three tabs: Overview, Mapped Volumes, and Port
Definitions:
– On the Overview tab, you can click Edit to change the hostname and host type, select or clear the associated host I/O groups, and modify the host status policy and status site.
– On the Mapped Volumes tab, you can list all volumes that are mapped to the host. Both private and shared mappings are shown.
Selecting Hosts → Host Clusters shows a list of configured host clusters and their major
parameters, like cluster status, number of hosts in a cluster, and number of shared mappings.
Right-clicking any of the clusters or selecting one or several clusters and clicking the Actions
drop-down menu opens the list of available actions, as shown in Figure 7-52.
Figure 7-53 View Host Cluster Members window
If your changes are correct, click Add Hosts to complete the operation.
Select the action that you want, click Next, and after verifying the changes, click Remove
Hosts to complete the procedure.
Modify Shared Volume Mappings action
With this action, you can create shared mappings for a host cluster or modify its existing shared mappings.
3. With this view, you can select one or more shared mappings that must be removed, and
then click Remove Volume Mappings.
4. If new shared mappings must be created, click Add Volume Mappings to open the next
window, as shown in Figure 7-58. A list shows the volumes that are not yet mapped to the
cluster that was selected.
6. After clicking Next, the next window prompts you to verify that the changes are correct, as
shown in Figure 7-60. Click Map Volumes to complete the operation, click Back to return
and change the SCSI IDs or volumes that are being mapped, or click Cancel to stop the
task.
The Modify I/O groups for hosts action for a host cluster object changes the I/O group assignment for all hosts that are members of this cluster, as shown in Figure 7-61 on page 453.
Figure 7-61 Setting the I/O groups for hosts
If you are creating a throttle for a host cluster, any hosts within that cluster adopt the throttle
for processing.
3. If there are no individual throttles, a window opens where you can set or edit I/O or data rate limits, as shown in Figure 7-63. Click Create to create a throttle, or change the IOPS or bandwidth limits and click Save to change the existing throttle.
From this view, you can also delete or edit any existing throttle by right-clicking it in the list and
selecting the required action. An example of the View All Throttles window is shown in
Figure 7-46 on page 445.
An example of the Delete Host Cluster window is shown in Figure 7-64. You can hover your mouse pointer over the question marks that are next to the suggested removal options to get more details.
The actions that are performed from those views are the same ones that can be done from the Hosts and Host Clusters views. However, depending on your current administration task and the size of your configuration, they can provide a better overview and be more convenient.
The left column shows all the configured hosts. At the top of the column is a text input field for quick filtering by hostname. Below the list of hosts there is an Add Host button, which opens a dialog box that is described in step 2 on page 419.
The main window shows a list of ports that are assigned to the selected host. In the upper right, there is a Host Actions drop-down menu that provides the same set of actions that are described in 7.5.3, “Actions on hosts” on page 437.
The list of host ports shows the type and status for each port, as shown in Figure 7-67.
Note: If a host record already has ports that are assigned to it, you can add only ports
of the same type to it. For example, you can add an iSCSI port to a host with iSCSI
ports, but you cannot add an FC-SCSI port.
4. Select the discovered port from the list or enter the port address manually and click Add
Port to List.
If the FC-SCSI port WWPN is not logged in to the system and its address was entered
manually, it is shown as unverified in the list, as shown in Figure 7-70 on page 457. The
first time that the port logs on, its state is automatically changed to Online.
For other host types, no automatic port verification is performed.
Figure 7-70 Unverified port
5. To remove a port from the list, click the red X next to the port.
6. After the list contains all the ports that you want to add, click Add Ports to Host to apply
the changes.
2. Click Delete and confirm the number of host ports that you want to remove by entering
that number into the Verify field (see Figure 7-72).
By using a drop-down menu in the upper left, you can switch between listing only private
mappings, only shared mappings, and all host mappings. The “Private mappings” and “All
Host mappings” views show the hosts, and switching to “Shared mappings” shows a list of
host clusters and their mappings. Examples of these views are shown in Figure 7-73 and
Figure 7-74.
If you select a line and click Actions, or right-click a mapping in the list, the following tasks
are available:
Unmap Volumes
Host Properties
Volume Properties
Unmapping a volume
This action removes the mappings for all selected entries. An unmap action is allowed for shared mappings if you select the Shared mappings view, as shown in Figure 7-74. If you select the Private mappings or All Host mappings view, you can remove only private mappings.
To remove a volume mapping or mappings, select the records to remove, right-click, and
select Unmap volumes, or select Unmap Volumes from the Actions menu. You can see an
example in Figure 7-75 on page 459.
Figure 7-75 Removing two private mappings
A dialog box opens. Confirm how many volumes are to be unmapped by entering that number
into the Verify field (see Figure 7-76), and then click Unmap.
Host Properties
Select a single entry and click Actions → Host Properties. The Host Properties window
opens. The contents of this window are described in “Viewing the host properties” on
page 447.
Volume Properties
Select an entry and select Actions → Volume Properties. The Volume Properties view
opens. The contents of this window are described in Chapter 6, “Volumes” on page 299.
The left column shows all configured hosts or host clusters. At the top of the column is a text input field that you can use for quick filtering by object name. Below the list, there is an Add Host (Create Host Cluster) button, which opens a dialog box that is described in 7.5.1, “Creating hosts” on page 419 and in 7.5.2, “Host clusters” on page 433.
You can also filter by the type of volume by selecting an option from the Volumes menu. The
options are as follows:
All Volumes
Thin-Provisioned Volumes
Compressed Volumes
Deduplicated Volumes
Right-clicking a volume in the list opens the Volume Actions menu, which is covered in
Chapter 6, “Volumes” on page 299. Finally, you can create and map a volume by clicking
Create Volumes.
7.6 Performing hosts operations by using CLI
This section describes some of the host-related actions that can be taken within the system
by using the CLI.
If zoning was implemented correctly, any new WWPNs are discovered by the system after
running the detectmdisk command.
2. List the candidate WWPNs and identify the WWPNs belonging to the new host, as shown
in Example 7-15.
3. Run the mkhost command with the required parameters, as shown in Example 7-16.
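A typical mkhost invocation for an FC host resembles the following sketch, with an illustrative host name and the WWPNs that were identified in the previous step:
IBM_IBM FlashSystem:ITSO-FS7200:superuser>mkhost -name FC-Host-01 -fcwwpn 2100000E1E09E3E9:2100000E1E30E5E8
If a WWPN is not yet logged in to the system, the -force flag can be added so that the command does not fail on the unverified port.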
2. The iSCSI host can be verified by running the lshost command, as shown in
Example 7-18.
Example 7-18 Verifying the iSCSI host by running the lshost command
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lshost 4
id 4
name RHEL-Host-04
port_count 1
type generic
....
status_site all
iscsi_name iqn.1994-05.com.redhat:e6ff477b58
node_logged_in_count 1
state active
Note: When the host is initially configured, the default authentication method is set to no
authentication, and no CHAP secret is set. To set a CHAP secret for authenticating the
iSCSI host with the system, run the chhost command with the chapsecret parameter. If
you must display a CHAP secret for a defined server, run the lsiscsiauth command. The
lsiscsiauth command lists the CHAP secret that is configured for authenticating an entity
to the system.
FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.
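For example, to set a CHAP secret for the iSCSI host that was created earlier and then confirm it, you can run commands similar to the following sketch (the secret value is illustrative):
IBM_IBM FlashSystem:ITSO-FS7200:superuser>chhost -chapsecret passw0rd RHEL-Host-04
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lsiscsiauth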
2. The NVMe host can be verified by running the lshost command, as shown in
Example 7-20.
port_count 1
...
status_site all
nqn nqn.2014-08.com.redhat:nvme:nvm-nvmehost01-edf223876
node_logged_in_count 2
state active
Note: If you have OBAC set up, you can use the -ownershipgroup parameter when
creating a host to add the host to a pre-configured ownership group. You can use either the
ownership group name or ID. Here is an example command:
mkhost -name NVMe-Host-01 -nqn
nqn.2014-08.com.redhat:nvme:nvm-nvmehost01-edf223876 -protocol nvme -type
generic -ownershipgroup ownershipgroup0
2. The volume mapping can be checked by running the lshostvdiskmap command against
that host, as shown in Example 7-22.
Note: The volume RHEL_VOLUME is mapped to both of the hosts by using the same SCSI
ID. Typically, that is the requirement for most host-based clustering software, such as
Microsoft Clustering Service, IBM PowerHA, and VMware ESX clustering.
2. The volume RHEL_VOLUME is mapped to two hosts (RHEL-HOST-01 and RHEL-Host-06), which can be verified by running the lsvdiskhostmap command, as shown in Example 7-24.
Example 7-24 Ensuring that the same volume is mapped to multiple hosts
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lsvdiskhostmap RHEL_VOLUME
id name SCSI_id host_id host_name .. IO_group_name mapping_type
0 RHEL_VOLUME 0 0 RHEL-HOST-01 .. io_grp0 private
0 RHEL_VOLUME 0 1 RHEL-Host-06 .. io_grp0 private
IBM_IBM FlashSystem:ITSO-FS7200:superuser>
Renaming a host
To rename a host definition, run the chhost -name command, as shown in Example 7-26. In
this example, the host RHEL-Host-06 is renamed to FC_RHEL_HOST.
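Such a command resembles the following sketch:
IBM_IBM FlashSystem:ITSO-FS7200:superuser>chhost -name FC_RHEL_HOST RHEL-Host-06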
Removing a host
To remove a host from the IBM FlashSystem system, run the rmhost command, as shown in
Example 7-27.
Note: Before removing a host from an IBM FlashSystem system, ensure that all of the
volumes are unmapped from that host, as shown in Example 7-25.
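A minimal sketch of the removal, including the preceding unmap of an illustrative volume, looks like the following; add -force to rmhost only if you deliberately want to discard any remaining mappings:
IBM_IBM FlashSystem:ITSO-FS7200:superuser>rmvdiskhostmap -host FC_RHEL_HOST RHEL_VOLUME
IBM_IBM FlashSystem:ITSO-FS7200:superuser>rmhost FC_RHEL_HOST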
Host properties
To get more information about a host, run the lshost command with the hostname or host ID as a parameter, as shown in Example 7-28.
Note: Starting from code release 8.3.0.0, the new status_policy property was added to
each host. The property has two potential values:
Complete: The default policy when a host is created. It uses the legacy algorithm.
Existing hosts on systems that are upgraded to a new code level have this policy set.
Redundant: This policy changes the meaning of Online and Degraded in the status
property:
– Online indicates redundant connectivity, that is, enough host ports are logged in to
enough nodes so that the removal of a single node or a single host port still enables
that host to access all its volumes.
– Degraded indicates non-redundant connectivity, that is, a state in which a single point
of failure (SPOF) prevents a host from accessing at least some of its volumes.
These options can be changed only by running the chhost command. When the host is
created by running mkhost, the default policy of redundant is set.
b. Use host or SAN switch utilities to verify whether the WWPN matches the information
for the new WWPN. If the WWPN matches, run the addhostport command to add the
port to the host, as shown in Example 7-30.
Example 7-30 Adding the newly discovered WWPN to the host definition
IBM_IBM FlashSystem:ITSO-FS7200:superuser>addhostport -hbawwpn
2100000E1E09E3E9:2100000E1E30E5E8 ITSO-VMHOST-01
Example 7-31 Adding a WWPN to the host definition by using the -force option
IBM_IBM FlashSystem:ITSO-FS7200:superuser>addhostport -hbawwpn
2100000000000001 -force ITSO-VMHOST-01
This command forces the addition of the WWPN 2100000000000001 to the host
ITSO-VMHOST-01.
d. Verify the host port count by running the lshost command. Example 7-32 shows that the host ITSO-VMHOST-01 has a port count that was updated from 2 to 5 after the two commands in the previous examples ran.
For iSCSI and FC-NVMe host ports:
a. If the host uses iSCSI or FC-NVMe as a connection method, the host port ID (iSCSI
IQN or NVMe NQN) is used to add the port. Unlike FC-attached hosts, the available
candidate IDs cannot be checked. Your host administrator provides you with the IQN or
NQN.
b. After getting the ID, run the addhostport command. Example 7-33 shows a command
for an iSCSI port.
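Such a command resembles the following sketch, with the IQN that is provided by the host administrator:
IBM_IBM FlashSystem:ITSO-FS7200:superuser>addhostport -iscsiname iqn.1994-05.com.redhat:e6ff477b58 RHEL-Host-04
For an FC-NVMe host, the NQN is supplied instead of an IQN.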
2. When you discover the WWPN or iSCSI IQN that must be deleted, run the rmhostport
command to delete the host port, as shown in Example 7-36.
3. To remove the NVMe NQN, run the rmhostport with the nqn argument, as shown in
Example 7-38.
Note: Multiple ports can be removed at once by using a colon (:) as a separator between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD ITSO-VMHOST-02
In Example 7-40, the hosts ITSO-VMHOST-01 and ITSO-VMHOST-02 were added as part of host
cluster ITSO-ESX-Cluster-01.
Listing the host cluster member
To list the host members that are part of a particular host cluster, run the
lshostclustermember command, as shown in Example 7-41.
Example 7-41 Listing host cluster members by running the lshostclustermember command
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lshostclustermember ITSO-ESX-Cluster-01
host_id host_name status type site_id site_name
0 ITSO-VMHOST-01 offline generic
4 ITSO-VMHOST-02 offline generic
IBM_IBM FlashSystem:ITSO-FS7200:superuser>
Note: When a volume is mapped to a host cluster, that volume is mapped to all of the
members of the host cluster with the same SCSI_ID.
Note: You can run the lshostvdiskmap command against each host that is part of a host
cluster to ensure that the mapping type for the shared volume is shared, and that the
non-shared volume is private.
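For reference, shared mappings for a host cluster are created and listed with commands similar to the following sketch (confirm the syntax in the CLI reference for your code level):
IBM_IBM FlashSystem:ITSO-FS7200:superuser>mkvolumehostclustermap -hostcluster ITSO-ESX-Cluster-01 VMware3
IBM_IBM FlashSystem:ITSO-FS7200:superuser>lshostclustervolumemap ITSO-ESX-Cluster-01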
In Example 7-44, volume VMware3 is unmapped from the host cluster ITSO-ESX-Cluster-01.
In Example 7-45, the host ITSO-VMHOST-02 was removed as a member from the host cluster
ITSO-ESX-Cluster-01, along with the associated volume mappings because the
-removemappings flag was specified.
Using the -removemappings flag also causes the system to remove any shared host mappings
to volumes. The mappings are deleted before the host cluster is deleted.
Note: To keep the volumes mapped to the host objects even after the host cluster is
deleted, use the -keepmappings flag instead of -removemappings for the rmhostcluster
command. When -keepmappings is specified, the host cluster is deleted, but the volume
mapping to the host becomes private instead of shared.
Note: You must specify the ID of the ownership group to which you want to add the host,
and then specify the ID of the host or host cluster. So, the command in Example 7-47 adds
host cluster ID 0 to ownership group ID 1.
This command removes host cluster 0 from the ownership group that it is assigned to.
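Assuming the IDs that are described above, the two operations resemble the following sketch (verify the parameter names for your code level):
IBM_IBM FlashSystem:ITSO-FS7200:superuser>chhostcluster -ownershipgroup 1 0
IBM_IBM FlashSystem:ITSO-FS7200:superuser>chhostcluster -noownershipgroup 0
The first command adds host cluster ID 0 to ownership group ID 1, and the second removes host cluster 0 from its ownership group.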
7.7 Host attachment practical examples
This section provides practical examples of Linux-based host attachment that are implemented by using the information that is provided in the previous sections of this chapter.
7.7.1 Prerequisites
The host should run a supported OS, which in this example is Red Hat Enterprise Linux (RHEL), and use supported HBAs.
In the case of RHEL, it is possible to check the OS level by running the command that is
shown in Example 7-49.
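A common way to check the release on RHEL is shown in the following sketch (the output line is illustrative):
[root@flashlnx4 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)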
2. To configure the host object on the storage system, follow the instructions in “Creating
Fibre Channel host objects” on page 420. If zoning is already done for the host, the host’s
WWPN should be available in the Host Port (WWPN) list. If the host is not zoned, it is
possible to add ports manually into the field.
4. Configure the host side to discover the mapped VDisks and use them:
a. RHEL has its own native multipath driver, which maps the discovered drives and their paths to the mpathN device files in /dev/mapper. The multipath driver must be correctly configured, which is described at IBM Documentation. To check that the volumes are detected correctly by the host, run the command in Example 7-51.
|-+- policy='service-time 0' prio=50 status=enabled
| |- 33:0:15:3 sdd 8:48 active ready running
| |- 33:0:27:3 sdu 65:64 active ready running
| |- 33:0:28:3 sdaa 65:160 active ready running
| |- 33:0:31:3 sdaf 65:240 active ready running
| |- 34:0:13:3 sdah 66:16 active ready running
| |- 34:0:15:3 sdak 66:64 active ready running
| |- 34:0:1:3 sdv 65:80 active ready running
| `- 34:0:3:3 sdac 65:192 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 33:0:19:3 sdg 8:96 active ready running
|- 33:0:24:3 sdj 8:144 active ready running
|- 33:0:25:3 sdm 8:192 active ready running
|- 33:0:26:3 sdp 8:240 active ready running
|- 34:0:20:3 sdan 66:112 active ready running
|- 34:0:26:3 sdaq 66:160 active ready running
|- 34:0:29:3 sdat 66:208 active ready running
`- 34:0:31:3 sdaw 67:0 active ready running
mpathat (3600507640084031dd80000000000007c) dm-3 IBM ,2145
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=enabled
| |- 33:0:19:1 sdf 8:80 active ready running
| |- 33:0:24:1 sdi 8:128 active ready running
| |- 33:0:25:1 sdl 8:176 active ready running
| |- 33:0:26:1 sdo 8:224 active ready running
| |- 34:0:20:1 sdam 66:96 active ready running
| |- 34:0:26:1 sdap 66:144 active ready running
| |- 34:0:29:1 sdas 66:192 active ready running
| `- 34:0:31:1 sdav 66:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 33:0:15:1 sdc 8:32 active ready running
|- 33:0:27:1 sds 65:32 active ready running
|- 33:0:28:1 sdy 65:128 active ready running
|- 33:0:31:1 sdad 65:208 active ready running
|- 34:0:13:1 sdag 66:0 active ready running
|- 34:0:15:1 sdaj 66:48 active ready running
|- 34:0:1:1 sdt 65:48 active ready running
`- 34:0:3:1 sdz 65:144 active ready running
mpathas (3600507640084031dd80000000000007b) dm-2 IBM ,2145
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=enabled
| |- 33:0:15:0 sdb 8:16 active ready running
| |- 33:0:27:0 sdq 65:0 active ready running
| |- 33:0:28:0 sdw 65:96 active ready running
| |- 33:0:31:0 sdab 65:176 active ready running
| |- 34:0:13:0 sdae 65:224 active ready running
| |- 34:0:15:0 sdai 66:32 active ready running
| |- 34:0:1:0 sdr 65:16 active ready running
| `- 34:0:3:0 sdx 65:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 33:0:19:0 sde 8:64 active ready running
|- 33:0:24:0 sdh 8:112 active ready running
|- 33:0:25:0 sdk 8:160 active ready running
|- 33:0:26:0 sdn 8:208 active ready running
|- 34:0:20:0 sdal 66:80 active ready running
Summary
To introduce capacity to the host from the storage system, you must first deal with several
abstractions:
1. On the storage system:
a. Define the host object definition with all the credentials of the host.
b. Map volumes to the defined host object to introduce capacity to the host.
2. On the host:
a. The multipathing driver should be configured (usually, the native multipathing driver or device mapper is already configured and running in the OS). It maps all the paths for a specific volume (VDisk) to a single device because, due to the specifics of the protocol, the system sees each path as a separate device, even for a single volume (VDisk). Therefore, the multipath driver is essential for the correct representation and usage of the provided capacity.
b. Set up the LVM layer if you plan to use it for more flexibility.
c. Set the file system level, depending on the application.
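As a minimal sketch of steps 2b and 2c, assuming one of the multipath devices from Example 7-51 (the device, volume group, and mount point names are illustrative):
[root@flashlnx4 ~]# pvcreate /dev/mapper/mpathas
[root@flashlnx4 ~]# vgcreate vg_data /dev/mapper/mpathas
[root@flashlnx4 ~]# lvcreate -n lv_data -l 100%FREE vg_data
[root@flashlnx4 ~]# mkfs.xfs /dev/vg_data/lv_data
[root@flashlnx4 ~]# mkdir -p /mnt/data
[root@flashlnx4 ~]# mount /dev/vg_data/lv_data /mnt/data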
7.7.3 iSCSI host connectivity and capacity allocation
The iSCSI protocol uses an initiator on the host side to send SCSI commands to the storage system's target devices. Therefore, it is necessary to prepare the correct environment on the host side and configure the storage system, as described in “Creating iSCSI host objects” on page 429.
This section demonstrates an RHEL host configuration and how to obtain access to the
dedicated volumes (VDisks) on the storage system.
The detailed steps to prepare an RHEL host for iSCSI connectivity can be found by going to IBM Documentation, selecting your specific system, and then selecting Configuring → Host Attachment → iSCSI Ethernet host attachment.
2. Now, the iSCSI initiator should be configured, and the connection credentials should be
set in the /etc/iscsi files. Check or define IQN in /etc/iscsi/initiatorname.iscsi, as
shown in Example 7-54.
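The file contains a single line with the host IQN. A sketch of its content and of restarting the initiator service afterward looks like the following (the IQN matches the one that is used earlier in this chapter):
[root@flashlnx4 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:e6ff477b58
[root@flashlnx4 ~]# systemctl restart iscsid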
Figure 7-80 Ethernet Ports Configuration tab
Finally, to access the volumes (VDisk) space, which is mapped on the storage system to the
host object, log in to the discovered targets (Example 7-56).
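With the open-iscsi tools, discovery and login are usually performed with iscsiadm commands similar to the following sketch, where 10.0.0.10 is an illustrative iSCSI target IP address on the storage system:
[root@flashlnx4 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.10
[root@flashlnx4 ~]# iscsiadm -m node -p 10.0.0.10 --login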
After logging in successfully, make sure that the native multipath driver on the RHEL host is installed and configured correctly, similar to the Fibre Channel example in 7.7.2, “Fibre Channel host connectivity and capacity allocation” on page 471, and check the output by running multipath -ll.
Record the names of the devices that are marked in bold in Example 7-57 on page 477, which are in /dev/mapper/, because they are used in further configuration, such as LVM physical volume creation or file system creation and mounting.
Also record the UID that follows each device name, without the first digit (3), because it corresponds to the UID of the volume (VDisk) on the storage system.
Summary
Although the example in this section is specifically for RHEL host connectivity, the main principles can be followed when configuring connectivity through iSCSI for other OSs.
In summary, the actions that are necessary for host-to-storage iSCSI connectivity are:
1. Install the iSCSI initiator software on the host.
2. Configure the iSCSI initiator software according to the requirements for the storage system target and the host's OS.
3. Get the host IQNs.
4. Define the host object with iSCSI connectivity by using the host IQNs.
5. Record and check the IP addresses of the Ethernet ports on the storage system that are configured for iSCSI connectivity.
6. Discover the iSCSI targets by using the storage system IP addresses that were obtained in step 5.
7. Log in to the storage system iSCSI targets.
8. Check and configure the native multipath driver to confirm that the volumes are visible on the host.
Start by defining the necessary connectivity information and configuring the host and system.
The concept of NVMe-oF is much like iSCSI connectivity: an initiator and a target must be defined and configured so that the connection works.
Collect the information for connectivity from the host to the system by completing the following steps:
1. You must discover the WWPNs of the host because FC-NVMe connectivity is achieved through FC. To do so, run the command that is shown in Example 7-58.
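On Linux, the host WWPNs are typically visible in sysfs, which is one way to obtain them (the output values are illustrative):
[root@flashlnx4 ~]# cat /sys/class/fc_host/host*/port_name
0x10000090faf20bc1
0x10000090faf20bc2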
2. Discover the NVMe FC ports of the system by running the command that is shown in Example 7-59 and decide which ones you will use. The ports that are dedicated to FC-NVMe connectivity are virtualized ports, so you must have NPIV enabled.
3. Zone the host with at least one NVMe dedicated port. In the example, the host is zoned to
the ports that are marked in bold in Example 7-59.
4. On the host, make sure that the driver is ready to provide NVMe connectivity. In this
example, we use an Emulex HBA, as shown in Example 7-60.
If lpfc.conf is absent or does not contain the string that is marked in bold in the example, create it and populate it with the string. Then, restart the lpfc driver by running modprobe commands (first, remove the driver, and then add it back).
Note: Reinitiating the lpfc driver by running the modprobe commands changes the NQN of the host.
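On an Emulex (lpfc) HBA, NVMe initiator support is typically enabled through a module option, as in the following sketch; the file path and option value are assumptions based on common lpfc configurations, so verify them against your driver documentation:
[root@flashlnx4 ~]# cat /etc/modprobe.d/lpfc.conf
options lpfc lpfc_enable_fc4_type=3
[root@flashlnx4 ~]# modprobe -r lpfc
[root@flashlnx4 ~]# modprobe lpfc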
7. Create a host object on the system by using the host NQN, as described in “Creating NVMe host objects” on page 430. Check that the host object has the correct NQN set (see Figure 7-81).
Example 7-63 Verifying the remote/target ports and information about the FC-NVMe connection
[root@flashlnx4 nvme]# cat /sys/class/scsi_host/*/nvme_info
NVMe Statistics
LS: Xmt 0000000031 Cmpl 0000000031 Abort 00000000
LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 000000000035d907 Issue 000000000035d90a OutI/O 0000000000000003
abort 00000001 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000001 Err 00000005
NVMe Statistics
LS: Xmt 0000000030 Cmpl 0000000030 Abort 00000000
LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 000000000035d6c3 Issue 000000000035d6c6 OutI/O 0000000000000003
abort 00000001 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000001 Err 00000005
Tip: If the remote ports (RPORTs) that are presented from the system are not visible, check whether zoning is done correctly for the virtualized NVMe ports on the system.
10.Discover and connect to the storage resources, which requires using information from the
nvme_info file, such as the WWNN and WWPN of the local port (host port) and RPORT
(storage port). This information can be cumbersome to collect and put into the discovery
and connect command manually, so you can use the script that is shown in Example 7-64
to automate the process. The commands for nvme-cli are in bold.
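The underlying nvme-cli calls that such a script issues resemble the following sketch, which uses the host and target port addresses from the nvme_info output and the subsystem NQN from the discovery log entry that follows:
[root@flashlnx4 ~]# nvme discover --transport=fc --host-traddr=nn-0x20000090faf20bc1:pn-0x10000090faf20bc1 --traddr=nn-0x50050768100001df:pn-0x50050768102a01df
[root@flashlnx4 ~]# nvme connect --transport=fc --host-traddr=nn-0x20000090faf20bc1:pn-0x10000090faf20bc1 --traddr=nn-0x50050768100001df:pn-0x50050768102a01df --nqn=nqn.2017-12.com.ibm:nvme:mt:9840:guid:5005076061D30D60:cid:0000020061D16202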
=====Discovery Log Entry 0======
trtype: fibre-channel
adrfam: fibre-channel
subtype: nvme subsystem
treq: not required
portid: 14
trsvcid: none
subnqn:
nqn.2017-12.com.ibm:nvme:mt:9840:guid:5005076061D30D60:cid:0000020061D16202
traddr: nn-0x500507605e8c3440:pn-0x500507605e8c3473
Performing Discovery and Connection with hostwwpn: 10000090faf20bc1 hostwwnn:
20000090faf20bc1 targetwwpn: 50050768102a01df targetwwnn: 50050768100001df
After the discovery and connection are successful, record the ports that are marked in bold in Example 7-65 on page 481, and check the list of NVMe devices that are visible from the host, as shown in Example 7-66.
Example 7-66 NVMe devices list that is visible from the host
[root@flashlnx4 tmp]# nvme list
Node S/N Model
Namespace Usage Format FW Rev
---------------- -------------------- ----------------------------------------
--------- -------------------------- ---------------- --------
/dev/nvme6n1 204228003c IBM 2145
78 132.07 GB / 137.44 GB 512 B + 0 B 8.4.0.0
/dev/nvme7n1 204228003c IBM 2145
78 132.07 GB / 137.44 GB 512 B + 0 B 8.4.0.0
Summary
In this section, we provided an example of using an RHEL host and an IBM FlashSystem
9100 system. Although other OS distributions might have specific steps for configuration, the
main idea and principles are the same. If it is necessary to connect to the storage through
FC-NVMe, the following considerations and actions are usually performed:
1. Ensure that the host is ready and meets the requirements for FC-NVMe connectivity, such
as:
a. HBA supports FC-NVMe.
b. The drivers are configured for NVMe connectivity.
2. Make sure that the system supports the host HBA for FC-NVMe connectivity.
3. Obtain the connectivity information from the host.
4. Create a host object on the system by using connectivity information from the host.
5. Map volumes to the host object.
6. Perform discovery and connection from the host, although some host OSs can do it automatically.
7. Use the obtained storage resources.
Chapter 8. Storage migration
Storage migration uses the volume mirroring function to enable reads and writes during the
migration, which minimizes disruption and downtime. After the migration completes, the
existing controller can be retired.
The system supports migration through Fibre Channel (FC) and internet Small Computer
Systems Interface (iSCSI) connections.
In addition to migrating data through external virtualization and volume mirroring that is used
by the storage migration wizard, there are also scenarios in which host-based mirroring is a
best practice. In environments where operating system (OS) administrators can perform the
migration by using host-side tools, host-based mirroring can potentially reduce or eliminate
downtime if the new volumes that are presented from the IBM Spectrum Virtualize system
and the legacy storage system are visible to the host concurrently.
Note: For a “real-life” demonstration of the storage migration capabilities that are offered
with IBM Spectrum Virtualize, see this web page (login required).
The demonstration includes three different step-by-step scenarios showing the integration
of an IBM SAN Volume Controller (SVC) cluster into an environment with one Microsoft
Windows Server (image mode), one IBM AIX server (logical volume manager (LVM)
mirroring), and one VMware Elastic Sky X Integrated (ESXi) server (storage vMotion).
Table 8-1 New hardware clustering options for Storwize control enclosures
Storwize enclosure Clustering options
Note: This chapter covers the storage migration wizard in detail, along with a less detailed
description of the enclosure upgrade scenario. However, this chapter does not describe
other migration methodologies such as ones that use replication or host-based migrations.
This chapter also does not cover virtualization of external storage. For more information
about these topics, see Chapter 5, “Storage pools” on page 237.
Attention: The system does not require a license for its own control and expansion
enclosures. However, a license is required for any external systems that are being
virtualized, either based on storage capacity units (SCU) or based on the number of
enclosures. Data can be migrated from storage systems to your system by using the
external virtualization function within 90 days of purchase of the system without the
purchase of a license. After 90 days, any ongoing use of the external virtualization function
requires a license.
Set the license temporarily during the migration process to prevent messages that indicate
that you are in violation of the license agreement from being sent. When the migration is
complete, or after 45 days, reset the license to its original limit or purchase a new license.
Consider the following points about the storage migration process:
Typically, storage controllers divide storage into many Small Computer System Interface
(SCSI) LUs that are presented to hosts.
I/O to the LUs must be stopped and changes made to the mapping of the external storage
controller LUs and to the fabric or iSCSI configuration so that the original LUs are
presented directly to the system and not to the hosts anymore. The system discovers the
external LUs as unmanaged managed disks (MDisks).
The unmanaged MDisks are imported to the system as image mode volumes and placed
into a temporary storage pool. This storage pool is now a logical container for the LUs.
Each MDisk has a one-to-one mapping with an image mode volume. From a data
perspective, the image mode volumes represent the LUs exactly as they were before the
import operation. The image mode volumes are on the same physical drives of the
external storage controller and the data remains unchanged. The system is presenting
active images of the LUs and acting as a proxy.
You might need to remove the storage system multipath device driver from the host and
reconfigure host attachment with this system. However, most current OSs might not
require vendor-specific multipathing drivers and can access both the legacy and the new
IBM Spectrum Virtualize systems through native multipathing drivers, such as AIX
AIXPCM, Linux device mapper, or Microsoft Device Specific Module (MSDSM). The hosts
are defined with worldwide port names (WWPNs) or iSCSI Qualified Names (IQNs), and
the volumes are mapped to the hosts. After the volumes are mapped, the hosts discover
the system’s volumes through a host rescan or restart operation.
After IBM Spectrum Virtualize volume mirroring operations are initiated, the image-mode
volumes are mirrored to standard striped volumes. Volume mirroring is an online migration
task, which means a host can still access and use the volumes during the mirror
synchronization process.
After the mirror operations are complete, the image mode volumes are removed. The
external storage system LUs are now migrated and the now redundant storage can be
decommissioned or reused elsewhere.
Important: If you are migrating volumes from another Storwize or IBM FlashSystem family
product through external virtualization instead of clustering or replication, the target system
must be configured in the replication layer, and the source system must be configured in
the storage layer. Otherwise, the source system does not discover the target as a host, and
the target does not discover the source as a back-end controller.
The default layer setting for Storwize and IBM FlashSystem family systems is storage. To change the layer, run one of the following commands:
chsystem -layer replication
chsystem -layer storage
Similarly, the layer setting might need to be changed if you cluster a Storwize system with
an IBM FlashSystem enclosure.
The matrix results indicate the external storage that you want to attach to the system, such as
validated firmware levels or support for disks greater than 2 TB.
8.1.2 Prerequisites
Before the storage migration wizard can be started, the external storage controller must be
visible to the system. You also must confirm that the restrictions, limits, and prerequisites are
met.
Data from the external storage system to the IBM Spectrum Virtualize system is sent through an iSCSI or Fibre Channel connection.
Common prerequisites
It is unlikely that VMware environments will use the Storage Migration wizard to move data because it requires downtime for path cutover. It is much more likely that Storage vMotion will be used to move guest data transparently to newly provisioned data stores from the IBM Spectrum Virtualize system. However, if you have VMware Elastic Sky X (ESX) server
hosts and want to migrate by using image mode, you must change the settings on the
VMware host so that copies of the volumes can be recognized by the system after the
migration completes. To ensure that volume copies can be recognized by the system for
VMware ESX hosts, you must complete one of the following actions:
Enable the EnableResignature setting.
Disable the DisallowSnapshotLUN setting.
To learn more about these settings, see the documentation for the VMware ESX host.
Note: Test the setting changes on a non-production server. The logical unit number (LUN)
has a different unique identifier (UID) after it is imported. It resembles a mirrored volume to
the VMware server.
Prerequisites for iSCSI connections
The following prerequisites for iSCSI connections must be met:
Cable this system to the external storage system with a redundant switched fabric.
Migrating iSCSI external storage requires that the system and the storage system are
connected through an Ethernet switch. Symmetric ports on all nodes of the system must
be connected to the same switch and must be configured on the same subnet.
In addition, modify the Ethernet port attributes to enable external storage connectivity on the port. To do so, click Network → Ethernet Ports, right-click a configured port, and select Modify Storage Ports to enable the port for external storage connections.
Cable the Ethernet ports on the storage system to the fabric in the same way as the
system and ensure that they are configured in the same subnet. Optionally, you can use a
virtual local area network (VLAN) to define network traffic for the system ports.
For full redundancy, configure two Ethernet fabrics with separate Ethernet switches. If the
source system nodes and the external storage system both have more than two Ethernet
ports, an extra redundant iSCSI connection can be established for increased throughput.
Attention: The risk of losing data when using the storage migration wizard correctly is low.
However, it is prudent to avoid potential data loss by creating a backup of all the data that is
stored on the hosts, the storage controllers, and the system before the wizard is used.
Complete the following steps to perform the migration by using the storage migration wizard:
1. Select Pools → System Migration, as shown in Figure 8-1. The System Migration
window provides access to the storage migration wizard and displays information about
the migration progress.
Note: Starting a new migration adds the volume to be migrated to the list that is shown
in Figure 8-2. After a volume is migrated, it remains in the list until you finalize the
migration.
3. If both FC and iSCSI external systems are detected, a dialog box opens and prompts you
about which protocol should be used. Select the type of attachment between the system
and the external controller from which you want to migrate volumes and click Next. If only
one type of attachment is detected, this dialog box does not open.
If the external storage system is not detected, the warning message that is shown in
Figure 8-3 is displayed when you attempt to start the migration wizard. Click Close and
correct the problem before you try to start the migration wizard again.
4. When the wizard starts, you are prompted to verify the restrictions and prerequisites that
are listed in Figure 8-4 on page 491. Address the following restrictions and prerequisites:
– Restrictions:
• You are not using the storage migration wizard to migrate clustered hosts, including
clusters of VMware hosts and Virtual I/O Servers (VIOSs).
• You are not using the storage migration wizard to migrate SAN boot images.
If you have either of these two environments, the migration must be performed outside
of the wizard because more steps are required.
The VMware vSphere Storage vMotion feature might be an alternative for migrating
VMware clusters. For information, see this web page.
– Prerequisites:
• The system and the external storage controller are connected to the same SAN
fabric.
• If there are VMware ESX hosts involved in the data migration, the VMware ESX
hosts are set to allow volume copies to be recognized.
For more information about the Storage Migration prerequisites, see 8.1.2, “Prerequisites”
on page 488.
If all restrictions are satisfied and prerequisites are met, select all of the options and click
Next, as shown in Figure 8-4.
Use the following guidelines to ensure that zones are configured correctly for migration:
• Zoning rules
For every storage controller, create one zone that contains this system’s ports from
every node and all external storage controller ports, unless otherwise stated by the
zoning guidelines for that storage controller.
This system requires single-initiator zoning for all large configurations that contain
more than 64 host objects. Each server FC port must be in its own zone, which
contains the FC port and this system’s ports. In configurations of fewer than 64
hosts, you can have up to 40 FC ports in a host zone if the zone contains similar
HBAs and OSs.
• Storage system zones
In a storage system zone, this system’s nodes identify the storage systems.
Generally, create one zone for each storage system. Host systems cannot operate
on the storage systems directly. All data transfer occurs through this system’s
nodes.
• Host zones
In the host zone, the host systems can identify and address this system’s nodes.
You can have more than one host zone and more than one storage system zone.
Create one host zone for each host FC port.
Because the system should now be seen as a host from the external controller to be
migrated, you must define the system as a host or host group by using the WWPNs or
IQNs on the system to be migrated. Some controllers do not support LUN-to-host
mapping, so they present all the LUs to the system. In that case, all the LUs should be
migrated.
Before you migrate storage, record the hosts and their WWPNs or IQNs for each volume
that is being migrated and the SCSI LUN when it is mapped to the system.
Table 8-2 on page 495 shows an example of a table that is used to capture information
that relates to the external storage system LUs.
Table 8-2 Example table for capturing external LU information
The table has the following columns: Volume Name or ID, Hosts accessing this LUN, Host WWPNs or IQNs, and SCSI LUN when mapped.
Note: Make sure to record the SCSI ID of the LUs to which the host is originally
mapped. Some OSs do not support changing the SCSI ID during the migration.
Click Next and wait for the system to discover external devices. The wizard runs a
detectmdisk command, as shown in Figure 8-7.
Figure 8-7 Storage Migration external storage discovery detectmdisk command detail
7. The next window shows all the MDisks that were found. If the MDisks to be migrated are
not in the list, check your zoning or IP configuration, as applicable, and your LUN
mappings. Repeat step 6 on page 494 to trigger the discovery procedure again.
In this example, two MDisks (mdisk18 and mdisk16) were found for migration. Detailed information about an MDisk is available by double-clicking it. To select multiple elements from the table, press Shift or Ctrl while clicking. Optionally, you can export the discovered MDisks list to a comma-separated value (CSV) file for further use by clicking the download (Export to CSV) icon.
Note: Select only the MDisks that are applicable to the current migration plan. After
step 15 on page 505 of the current migration completes, another migration can be
started to migrate any remaining MDisks.
8. Click Next and wait for the MDisk to be imported. During this task, the system creates a
new storage pool that is called MigrationPool_XXXX and adds the imported MDisk to the
storage pool as image mode volumes with the default naming of
{controller}_16digitSequenceNumber (controller2_0000000000000005)..., as shown
in Figure 8-9.
Figure 8-10 List of configured hosts to which to map the imported volume
10.If the host that needs access to the migrated data is not configured, select Add Host to
begin the Add Host wizard. Enter the host connection type, name, and connection details.
Optionally, click Advanced to modify the host type and I/O group assignment. Figure 8-11
shows the Add Host wizard with the details completed.
For more information about the Add Host wizard, see Chapter 7, “Hosts” on page 405.
11.Click Add. The host is created and now listed in the Configure Hosts window, as shown in
Figure 8-10 on page 498. Click Next to proceed.
13.Map the volumes to the hosts by selecting the volumes and clicking Map to Host or Host
Cluster, as shown in Figure 8-13. This step is optional and can be bypassed by clicking
Next.
Figure 8-13 Selecting the host to which to map the new volume
When your LUN mapping is ready, click Next. A new dialog box opens with a summary of
the new and existing mappings, as shown in Figure 8-15.
Click Map Volumes and wait for the mappings to be created. Continue to map volumes to
hosts until all mappings are created. Click Next to continue with the next migration step.
14.Select the storage pool into which you want to migrate the imported volumes. Ensure that
the selected storage pool has enough space to accommodate the migrated volumes
before you continue. This step is optional. You can decide not to migrate to a storage pool
and to leave the imported MDisk as an image mode volume.
However, this technique is not recommended because no volume mirroring is created. Therefore, no protection is available for the imported MDisk, and no data transfer occurs from the controller to be migrated to the system. So, although it is acceptable to delay the mirroring, it should be done at some point.
Figure 8-16 Selecting the target pool for the migration of the image mode MDisk
The migration starts. This task continues running in the background and uses the volume
mirroring function to place a generic copy of the image mode volumes in the selected
storage pool.
Note: With volume mirroring, the system creates two copies (Copy0 and Copy1) of a
volume. Typically, Copy0 is located in the migration pool, and Copy1 is created in the
target pool of the migration. When the host generates a write I/O on the volume, data is
written concurrently on both copies. Read I/Os are performed on the primary copy only.
In the background, a mirror synchronization of the two copies is performed and runs
until the two copies are synchronized. The speed of this background synchronization
can be changed in the volume properties.
15.Click Finish to end the storage migration wizard, as shown in Figure 8-17.
The end of the wizard is not the end of the migration task. You can find the progress of the
migration in the Storage Migration window, as shown in Figure 8-18. The target storage
pool and the progress of the volume copy synchronization is also displayed there.
Figure 8-18 The ongoing migration is listed in the Storage Migration window
16.If you want to check the progress by using the command-line interface (CLI), run the
lsvdisksyncprogress command because the process is essentially a volume copy, as
shown in Example 8-1.
You are asked to confirm the Finalize action because this process removes the MDisk
from the Migration Pool and deletes the primary copy of the mirrored volume. The
secondary copy remains in the destination pool and becomes the primary. Figure 8-20
shows the confirmation message.
18.When finalized, the image mode copies of the volumes are deleted and the associated
MDisks are removed from the migration pool. The status of those MDisks returns to
unmanaged. You can verify the status of the MDisks by selecting Pools → External
Storage, as shown in Figure 8-21 on page 507. In the example, mdisk3 was migrated
and finalized. It appears as unmanaged in the external storage window.
Figure 8-21 External Storage MDisks window
All the steps that are described in the Storage Migration wizard can be performed
manually with the GUI and the CLI, but you should use the wizard as a guide.
With the clustering capability, you may concurrently migrate the access to volumes from the
Storwize enclosure to the IBM FlashSystem enclosure and migrate the data from the Storwize
internal storage pool to the IBM FlashSystem internal storage pool.
The I/O group access change can be performed at any time, but ideally should be done
during a period of low production activity, and it must be coordinated with the OS
administrator to ensure that path discovery occurs, as shown in the “Modify I/O Group...”
wizard.
Note: There is a limitation in the NDVM process that prevents you from changing I/O
groups if a volume is in a FlashCopy map or replication relationship. In those instances, the
maps and relationships must be deleted and re-created. If an outage can be tolerated, use
the -sync flag for relationship re-creation to avoid a resync. Otherwise, if no downtime is
tolerable and a resync is acceptable, then the process can be concurrent and transparent
to the host.
For more information about volume mirroring, see 6.5, “Operations on volumes” on page 321.
Volume mirroring can be performed with either the CLI or GUI and be moderated to lessen or
eliminate the impact on performance by using the sync rate volume property.
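For example, the synchronization rate of a mirrored volume can be adjusted from the CLI with a command similar to the following sketch (the volume name is illustrative; check the CLI reference for the data rate that each syncrate value maps to):
IBM_IBM FlashSystem:ITSO-FS7200:superuser>chvdisk -syncrate 80 VOL-01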
No particular order is required for the enclosure upgrade. The access change can be done before the mirroring, and vice versa. However, you should not delay the second process for too long, and you should consider doing the mirroring first to minimize the added impact of accessing volumes through the IBM FlashSystem enclosure while the data is still on the Storwize system, which might affect performance.
For more information about the planning and configuration of storage efficiency features, see
the following publications:
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521
Introduction and Implementation of Data Reduction Pools and Deduplication, SG24-8430
Many applications exhibit a significant skew in the distribution of I/O workload: A small fraction
of the storage is responsible for a disproportionately large fraction of the total I/O workload of
an environment.
Easy Tier acts to identify this skew and automatically place data to take advantage of it. By
moving the “hottest” data onto the fastest tier of storage, the workload on the remainder of the
storage is reduced. By servicing most of the application workload from the fastest storage,
Easy Tier accelerates application performance and increases overall server utilization, which
can reduce costs regarding servers and application licenses.
Easy Tier also reduces storage cost because the system always places the data with the
highest I/O workload on the fastest tier of storage. Depending on the workload pattern, a
large portion of the capacity can be provided by a lower and less expensive tier without
impacting application performance.
Note: Easy Tier is a licensed function. On IBM FlashSystem 9200 and IBM FlashSystem
7200, it is included in the base code. No actions are required to activate the Easy Tier
license on these systems.
On IBM FlashSystem 5100, you must have the appropriate number of licenses to run Easy
Tier.
The IBM FlashSystem 5000 entry systems also require a license for Easy Tier, which is a
one time charge per system.
Without a license, Easy Tier balances I/O workload only between managed disks (MDisks)
in the same tier.
In HyperSwap environments, all member controllers must be licensed with Easy Tier to enable this function. For example, when clustering two IBM FlashSystem 5030 systems, you need two licenses.
Figure 9-1 Easy Tier
Easy Tier monitors the I/O activity and latency of the extents on all Easy Tier enabled storage
pools. Based on the performance log, it creates an extent migration plan and promotes
(moves) high activity or hot extents to a higher disk tier within the same storage pool. It also
demotes extents whose activity dropped off, or cooled, by moving them from a higher disk tier
MDisk back to a lower tier MDisk.
If a pool contains only MDisks of a single tier, Easy Tier operates only in balancing mode.
Extents are moved between MDisks in the same tier to balance I/O workload within that tier.
Tiers of storage
The MDisks (external logical units (LUs) or redundant array of independent disks (RAID)
arrays) that are presented to the system might have different performance attributes because
of their technology type, such as flash drives or HDDs and other characteristics.
The system automatically sets the tier for internal array mode MDisks because it knows the
capabilities of array members, physical drives, and modules. External MDisks need manual
tier assignment when they are added to a storage pool.
Note: The tier of MDisks that is mapped from certain types of IBM System Storage
Enterprise Flash is fixed to tier0_flash, and cannot be changed.
Although the system can distinguish between five tiers, Easy Tier manages only a three-tier
storage architecture within each storage pool. MDisk tiers are mapped to Easy Tier tiers
depending on the pool configuration, as shown in Table 9-1.
Table 9-1 Easy Tier tier mapping by pool configuration (columns: Configuration, Easy Tier top tier, Easy Tier middle tier, Easy Tier bottom tier)
The table represents all the possible pool configurations. Some entries in the table contain
optional tiers (shown in italic font), but the configurations without the optional tiers are also
valid.
Sometimes, a single Easy Tier tier contains MDisks from more than one storage tier. For
example, consider a pool with SCM, Tier1_Flash, Enterprise, and NL. SCM is the top tier, and
Tier1_Flash and Enterprise share the middle tier. NL is represented by the bottom tier.
Note: Some storage pool configurations with four or more different tiers are not supported.
If such a configuration is detected, an error is logged and Easy Tier enters measure mode,
which means no extent migrations are performed.
For more information about planning and configuration considerations or best practices, see
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521.
A set of algorithms is used to decide where the extents should be and whether extent
relocation is required. Once per day, Easy Tier analyzes the statistics to determine which data
should be sent to a higher performing tier or a lower tier. Four times per day, it analyzes the
statistics to identify whether any data must be rebalanced between MDisks in the same tier.
Once every 5 minutes, Easy Tier checks the statistics to identify whether any of the MDisks
are overloaded.
Based on this information, Easy Tier generates a migration plan that must be run for optimal
data placement. The system spends the necessary time running the migration plan. The
migration rate is limited to make sure host I/O performance is not affected while data is
relocated.
Balancing Move
Data is moved within the same tier from an MDisk with a higher workload to one with a
lower workload to balance the workload within the tier, which automatically populates new
MDisks that were added to the pool.
Balancing Swap
Data is moved within the same tier from an MDisk with higher workload to one with a lower
workload to balance the workload within the tier. Other less active data is moved first to
make space.
Extent migration occurs at a maximum rate of 12 GB every 5 minutes for the entire system. It
prioritizes the following actions:
Promote and rebalance get equal priority.
Demote is guaranteed 1 GB every 5 minutes and then receives whatever migration capacity remains.
Note: Extent promotion or demotion occurs only between adjacent tiers. In a three-tier
storage pool, Easy Tier does not move extents from the top directly to the bottom tier or
vice versa without moving to the middle tier first.
The Easy Tier overload protection is designed to avoid overloading any type of MDisk with too
much work. To achieve this task, Easy Tier must have an indication of the maximum capability
of an MDisk.
For an array made of locally attached drives, the system can calculate the performance of the
MDisk because it is pre-programmed with performance characteristics for different drives and
array configurations. For a storage area network (SAN)-attached MDisk, the system cannot
calculate the performance capabilities. Therefore, follow the best practice guidelines when
configuring external storage, particularly the ratio between physical disks and MDisks that is
presented to the system.
Each MDisk has an Easy Tier load parameter (low, medium, high, or very_high) that can be
fine-tuned manually. If you analyze the statistics and find that the system does not appear to
be sending enough IOPS to your external MDisk, you can increase the load parameter.
The default operation mode is Enabled, so the system balances storage pools. If the
required licenses are installed, it also optimizes performance by moving extents between tiers.
Implementation considerations
Consider the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the storage system:
If the system contains self-compressing drives (IBM FlashCore Module (FCM) drives) in
the top tier of storage in a pool with multiple tiers and Easy Tier is in use, consider setting
an overallocation limit within these pools, as described in “Overallocation limit” on
page 521.
Volumes that are added to storage pools use extents from the “middle” tier of three-tier
model, if available. Easy Tier then collects usage statistics to determine which extents to
move to “faster” or “slower” tiers. If there are no free extents in the middle tier, extents from
the other tiers are used (bottom tier if possible, otherwise top tier).
When an MDisk with allocated extents is deleted from a storage pool, extents in use are
migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from another tier are used.
Easy Tier monitors the extent I/O activity of each copy of a mirrored volume. Easy Tier
works with each copy independently of the other copy. This situation applies to volume
mirroring and IBM HyperSwap and Remote Copy (RC).
Note: Volume mirroring can have different workload characteristics on each copy of the
data because reads are normally directed to the primary copy and writes occur to both
copies. Therefore, the number of extents that Easy Tier migrates between the tiers
might differ for each copy.
Easy Tier automatic data placement is not supported on image mode or sequential
volumes. However, it supports evaluation mode for such volumes. I/O monitoring is
supported and statistics are accumulated.
When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy
Tier automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even when it is between
pools that both have Easy Tier automatic data placement enabled. Automatic data
placement for the volume is reenabled when the migration is complete.
When the system migrates a volume from one storage pool to another, it attempts to
migrate each extent to an extent in the new storage pool from the same tier as the original
extent, if possible.
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts command on that volume.
Without the proper licenses installed, the system only rebalances storage pools.
A few parameters can be adjusted. Also, Easy Tier can be turned off on selected volumes in
storage pools.
MDisk settings
The tier for internal (array) MDisks is detected automatically and depends on the type of
drives, which are its members. No adjustments are needed.
For an external MDisk, the tier is assigned when it is added to a storage pool. To assign the
MDisk, select Pools → External Storage, select the MDisk (or MDisks) to add, and click
Assign.
Note: The tier of MDisks that is mapped from certain types of IBM System Storage
Enterprise Flash is fixed to tier0_flash and cannot be changed.
You can choose the target storage pool and storage tier that is assigned, as shown in
Figure 9-3.
Note: Assigning a tier to an external MDisk that does not match the physical back-end
storage type is not supported by IBM and can lead to unpredictable consequences.
To determine what tier is assigned to an MDisk, select Pools → External Storage, select
Actions → Customize columns, and select Tier. This action adds the current tier setting
to the list of MDisk parameters that are shown in the External Storage window. You can also
find this information in MDisk properties. To show this information, right-click MDisk, select
Properties, and click View more details, as shown in Figure 9-5.
To list MDisk parameters with the command-line interface (CLI), run the lsmdisk command.
The current tier for each MDisk is shown. To change the external MDisk tier, run the chmdisk
command with the -tier parameter, as shown in Example 9-1.
Example 9-1 Listing and changing tiers for MDisks (partially shown)
IBM FlashSystem 7200:ITSOFS7K:superuser>lsmdisk
id name status mode mdisk_grp_id ... tier encrypt
1 mdisk1 online unmanaged ... tier0_flash no
2 mdisk2 online managed 0 ... tier_enterprise no
3 mdisk3 online managed 0 ... tier_enterprise no
<...>
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdisk -tier tier1_flash mdisk2
IBM FlashSystem 7200:ITSOFS7K:superuser>
For an external MDisk, the system cannot calculate its exact performance capabilities, so it
has several predefined levels. In rare cases, statistics analysis might show that Easy Tier is
overusing or underusing an MDisk. If so, levels can be adjusted only by using the CLI. Run
chmdisk with the -easytierload parameter. To reset the Easy Tier load to the system default
for the chosen MDisk, use -easytierload default, as shown in Example 9-2.
Note: Adjust the Easy Tier load settings only if instructed to do so by IBM Technical
Support or your solution architect.
To list the current Easy Tier load setting of an MDisk, run lsmdisk with the MDisk name or ID
as a parameter.
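As a minimal sketch of these commands, assuming an MDisk named mdisk2 and that the detailed lsmdisk view reports the load in an easy_tier_load field (output abbreviated; values are illustrative):
IBM FlashSystem 7200:ITSOFS7K:superuser>lsmdisk mdisk2
id 2
name mdisk2
<...>
easy_tier_load high
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdisk -easytierload very_high mdisk2
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdisk -easytierload default mdisk2
IBM FlashSystem 7200:ITSOFS7K:superuser>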
You can disable Easy Tier or switch it to measure-only mode when creating a pool or any
other time. This task cannot be done by using the GUI, but can be done by using the CLI.
To find the state of the Easy Tier function on the pools by using the CLI, run the lsmdiskgrp
command without any parameters. To turn off or on Easy Tier, run the chmdiskgrp command,
as shown in Example 9-3. By running lsmdiskgrp with pool name/ID as a parameter, you can
also determine how much storage of each tier is available within the pool.
Example 9-3 Listing and changing the Easy Tier status on pools
IBM FlashSystem 7200:ITSOFS7K:superuser>lsmdiskgrp
id name status mdisk_count ... easy_tier easy_tier_status
0 TieredPool online 1 ... auto balanced
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdiskgrp -easytier measure TieredPool
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdiskgrp -easytier auto TieredPool
IBM FlashSystem 7200:ITSOFS7K:superuser>
Overallocation limit
If the system contains self-compressing drives (FCM drives) in the top tier of storage in a pool
with multiple tiers and Easy Tier is in use, consider setting an overallocation limit within
these pools. The overallocation limit has no effect in pools with a different configuration.
Arrays that are created from self-compressing drives have a written capacity limit (virtual
capacity before compression) that is higher than the array’s usable capacity (physical
capacity). Writing highly compressible data to the array means that the written capacity limit
can be reached without running out of usable capacity. However, if data is not compressible or
the compression ratio is low, it is possible to run out of usable capacity before reaching the
written capacity limit of the array, which means the amount of data that is written to a
self-compressing array must be controlled to prevent the array from running out of space.
Without a maximum overallocation limit, Easy Tier scales the usable capacity of the array
based on the actual compression ratio of the data that is stored on the array at a point in time
(PiT). Easy Tier migrates data to the array and might use a large percentage of the usable
capacity in doing so, but it stops migrating to the array when the array comes close to running
out of usable capacity. Then, it might start migrating data away from the array again to free
space.
However, Easy Tier migrates storage only at a slow rate, which might not keep up with
changes to the compression ratio within the tier. When Easy Tier swaps extents or data is
overwritten by hosts, compressible data might be replaced with data that is less compressible,
which increases the amount of usable capacity that is consumed by extents and might result
in self-compressing arrays running out of space, which can cause a loss of access to data
until the condition is resolved.
So, the user might specify the maximum overallocation ratio for pools that contain
self-compressing arrays to prevent out-of-space scenarios. The value acts as a multiplier of
the physically available space in self-compressing arrays. The allowed values are a
percentage in the range of 100% (default) to 400% or off. The default setting allows no
overallocation on new pools. Setting the value to off disables this feature.
When enabled, Easy Tier scales the available usable capacity of self-compressing arrays by
using the specified overallocation limit and adjusts the migration plan to make sure the
fullness of these arrays stays below the maximum overallocation. Specify the maximum
overallocation limit based on the estimated lowest compression ratio of the data that is written
to the pool.
For example, for an estimated compression ratio of 1.2:1, specify an overallocation limit of
120% to put a limit on the overallocation. Easy Tier stops migrating data to self-compressing
arrays in the pool after the written capacity reaches 120% of the physical (usable) capacity of
the array, which is the case even if the written capacity limit of the array is not reached yet or
the current compression ratio of the data that is stored on the array is higher than 1.2:1 (and
thus more usable capacity would be available). This setting prevents changes to the
compression ratio within the specified limits from causing the array to run out of space.
On the CLI, run the chmdiskgrp command with the -etfcmoverallocationmax parameter to
set a percentage or use off to disable the limit.
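For example, the following minimal sketch sets a 120% overallocation limit on a pool and then disables the limit; the pool name Pool0 is illustrative, and the exact value format that -etfcmoverallocationmax accepts should be verified against the command reference for your code level:
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdiskgrp -etfcmoverallocationmax 120 Pool0
IBM FlashSystem 7200:ITSOFS7K:superuser>chmdiskgrp -etfcmoverallocationmax off Pool0
IBM FlashSystem 7200:ITSOFS7K:superuser>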
Volume settings
By default, each striped-type volume enables Easy Tier to manage its extents. If you need to
fix the volume extent location (for example, to prevent extent demotes and to keep the volume
in the higher-performing tier), you can turn off Easy Tier management for a particular volume
copy.
Note: Thin-provisioned and compressed volumes in a DRP cannot have Easy Tier turned
off. You can turn off Easy Tier only at a pool level.
You can do this task only by using the CLI. Run the lsvdisk command to check and the
chvdisk command to modify the Easy Tier function status on a volume copy, as shown in
Example 9-4.
Example 9-4 Checking and modifying the Easy Tier settings on a volume
IBM_Storwize:ITSO-V7k:superuser>lsvdisk vdisk0 |grep easy_tier
easy_tier on
easy_tier_status balanced
IBM_Storwize:ITSO-V7k:superuser>chvdisk -easytier off vdisk0
IBM FlashSystem 7200:ITSOFS7K:superuser>
System-wide settings
There is a system-wide setting that is called Easy Tier acceleration that is disabled by default.
Turning it on makes Easy Tier move extents up to four times faster than the default setting. In
acceleration mode, Easy Tier can move up to 48 GiB per 5 minutes, but in normal mode it
moves up to 12 GiB. The following use cases are the most probable use cases for
acceleration:
When adding capacity to the pool either by adding to an existing tier or by adding a tier to
the pool, accelerating Easy Tier can quickly spread volumes onto the new MDisks.
Migrating the volumes between the storage pools when the target storage pool has more
tiers than the source storage pool, so Easy Tier can quickly promote or demote extents in
the target pool.
Note: Enabling Easy Tier acceleration is advised only during periods of low system activity,
after migrations or storage reconfiguration have occurred. It is a best practice to keep
Easy Tier acceleration mode off during normal system operation to avoid performance
impacts that are caused by accelerated data migrations.
This setting can be changed non-disruptively, but only by using the CLI. To turn on or off Easy
Tier acceleration mode, run the chsystem command. Run the lssystem command to check its
current state, as shown in Example 9-5.
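As a minimal sketch, assuming the setting is reported by lssystem in an easy_tier_acceleration field and changed with the chsystem -easytieracceleration parameter (the value shown is illustrative):
IBM FlashSystem 7200:ITSOFS7K:superuser>lssystem |grep easy_tier_acceleration
easy_tier_acceleration off
IBM FlashSystem 7200:ITSOFS7K:superuser>chsystem -easytieracceleration on
IBM FlashSystem 7200:ITSOFS7K:superuser>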
Three types of reports are available per storage pool: Data Movement, Tier Composition, and
Workload Skew Comparison. Select the corresponding tabs in the GUI to view the charts.
Alternatively, click Export or Export All to download the reports in comma-separated value
(CSV) format.
The X-axis shows a timeline for the selected period by using the selected increments. The
Y-axis indicates the amount of extent capacity that is moved. For each time increment, a
color-coded bar displays the amount of data that is moved by each Easy Tier data movement
action, such as promote or cold demote. For more information about the different movement
actions, see “Easy Tier automatic data placement” on page 513 or click Movement
Description next to the chart to see an explanation in the GUI.
Figure 9-10 Tier Composition chart
A color-coded bar for each tier shows which workload types are present in that tier and how
much of the extent capacity in that tier can be attributed to each type. Easy Tier distinguishes
between the following workload types. Click Composition Description to show a short
explanation for each workload type in the GUI.
Active: Data with an access density of more than 0.1 IOPS per extent for small I/O (< 64 KB block size)
Active Large: All data that is not classified above (> 64 KB block size)
Low Activity: Data with an access density of less than 0.1 IOPS per extent
Inactive: Data with zero IOPS per extent (no recent activity)
The X-axis shows the percentage of capacity and the Y-axis shows the corresponding
percentage of workload on that capacity. Workload is classified in small I/O (sum of small
reads and writes) and megabytes per second (MBps) (sum of small and large bandwidth).
The portion of capacity and workload that is attributed to a tier is color-coded in the chart with
a legend above the chart.
Figure 9-12 on page 526 shows that the top tier (Tier1 Flash) contributes only a tiny
percentage of capacity to the pool, but handles around 85% of the IOPS and more than 40%
of the bandwidth in that pool. The middle tier (enterprise disk) handles almost all the
remaining IOPS and an extra 20% of the bandwidth. The bottom tier (NL disk) provides most
of the capacity to the pool but does almost no small I/O workload.
Use this chart to estimate how much storage capacity in the high tiers must be available to
handle most of the workload.
Monitoring Easy Tier by using the IBM Storage Tier Advisor Tool
The IBM STAT is a Windows console application that can analyze heat data files that are
generated by Easy Tier and produce a graphical display of the amount of “hot” data per
volume and predictions of the performance benefits of adding more capacity to a tier in a
storage pool.
This method of monitoring Easy Tier can provide more insights on top of the
information that is available in the GUI.
IBM STAT can be downloaded from this IBM Support web page.
You can download the IBM STAT and install it on your Windows-based computer. The tool is
packaged as an ISO file that must be extracted to a temporary location.
On the system, the heat data files are found in the /dumps/easytier directory on the
configuration node and are named dpa_heat.node_panel_name.time_stamp.data. Any heat
data file is erased when it exists for longer than 7 days.
Heat files must be offloaded and IBM STAT started from a Windows command prompt
console with the file specified as a parameter, as shown in Example 9-6.
Example 9-6 Running IBM STAT by using the Windows command prompt
C:\Program Files (x86)\IBM\STAT>stat dpa_heat.78DXRY0.191021.075420.data
The IBM STAT creates a set of .html and .csv files that can be used for Easy Tier analysis.
To download a heat data file, select Settings → Support → Support Package → Download
Support Package → Download Existing Package, as shown in Figure 9-13.
Figure 9-13 Downloading an Easy Tier heat file: Download Support Package
Figure 9-14 Downloading Easy Tier heat data file: dpa_heat files
You can also specify the output directory. IBM STAT creates a set of HTML files, and the user
can then open the index.html file in a browser to view the results. Also, the following CSV
files are created and placed in the Data_files directory:
<panel_name>_data_movement.csv
<panel_name>_skew_curve.csv
<panel_name>_workload_ctg.csv
These files can be used as input data for other utilities, such as the IBM STAT Charting Utility.
For more information about how to interpret IBM STAT tool output and CSV files analysis, see
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521.
Traditional storage allocation methods often provision large amounts of storage to individual
hosts, but some of it remains unused (not written to), which might result in poor usage rates
(often as low as 10%) of the underlying physical storage resources. Thin provisioning avoids
this issue by presenting more storage capacity to the hosts than it uses from the storage pool.
Physical storage resources can be expanded over time to respond to growth.
9.2.1 Concepts
The system supports thin-provisioned volumes in standard pools and in DRPs.
Each volume has a provisioned capacity and a real capacity. Provisioned capacity is the
volume storage capacity that is available to a host. It is the capacity that is detected by host
operating systems (OSs) and applications and can be used when creating a file system. Real
capacity is the storage capacity that is reserved to a volume copy from a pool.
In a standard-provisioned volume, the provisioned capacity and real capacity are the same.
However, in a thin-provisioned volume, the provisioned capacity can be much larger than the
real capacity.
The provisioned capacity of a thin-provisioned volume is larger than its real capacity. As more
information is written by the host to the volume, more of the real capacity is used. The system
identifies read operations to unwritten parts of the provisioned capacity and returns zeros to
the server without using any real capacity.
The autoexpand feature prevents a thin-provisioned volume from using up its capacity and
going offline. As a thin-provisioned volume uses capacity, the autoexpand feature maintains a
fixed amount of unused real capacity that is called the contingency capacity. For
thin-provisioned volumes in standard pools, the autoexpand feature can be turned on and off.
For thin-provisioned volumes in DRPs, the autoexpand feature is always enabled.
The capacity of a thin-provisioned volume is split into chunks that are called grains. Write I/O
to grains that have not previously been written to causes real capacity to be used to store
data and metadata. The grain size of thin-provisioned volumes in standard pools can be
32 KB, 64 KB, 128 KB, or 256 KB. Generally, smaller grain sizes save space but require more
metadata access, which can adversely impact performance. When you use thin provisioning
with IBM FlashCopy, specify the same grain size for the thin-provisioned volume and
FlashCopy. The grain size of thin-provisioned volumes in DRPs cannot be changed from the
default of 8 KB.
9.2.2 Implementation
For more information about creating thin-provisioned volumes, see Chapter 6, “Volumes” on
page 299.
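As a minimal CLI sketch of creating a thin-provisioned volume in a standard pool with autoexpand and a 256 KB grain size (the pool name, volume name, sizes, and returned ID are illustrative assumptions):
IBM FlashSystem 7200:ITSOFS7K:superuser>mkvdisk -name thinvol0 -mdiskgrp Pool0 -iogrp 0 -size 1 -unit tb -rsize 2% -autoexpand -grainsize 256
Virtual Disk, id [35], successfully created
IBM FlashSystem 7200:ITSOFS7K:superuser>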
Metadata
In a standard pool, the system uses real capacity to store data that is written to the volume
and metadata that describes the thin-provisioned configuration of the volume. The metadata
that is required for a thin-provisioned volume is usually less than 0.1% of the provisioned
capacity.
In a DRP, metadata for a thin-provisioned volume is stored separately from user data and not
reflected in the volume’s real capacity. Capacity reporting is handled at the pool level.
Volume parameters
When creating a thin-provisioned volume in a standard pool, some of its parameters can be
modified in Custom mode, as shown in Figure 9-15.
Real capacity defines both initial volume real capacity and the amount of contingency
capacity. When autoexpand is enabled, the system always tries to maintain the contingency
capacity by allocating extra real capacity when hosts write to the volume.
The warning threshold can be used to send a notification when the volume is about to run out
of space.
In a DRP, fine-tuning of these parameters is not required. The real capacity and warning
threshold are handled at the pool level. The grain size is always 8 KB, and autoexpand is
always on.
9.3 UNMAP
IBM Spectrum Virtualize systems running Version 8.1.0 and later support the Small Computer
System Interface (SCSI) UNMAP command. This command enables hosts to notify the storage
controller of capacity that is no longer required, which can improve capacity savings and
performance of flash storage.
When a host writes to a volume, storage is allocated from the storage pool. To free allocated
space back to the pool, human intervention is needed on the storage system. The SCSI UNMAP
feature is used to allow host OSs to unprovision storage on the storage system, which means
that the resources can automatically be freed in the storage pools and used for other
purposes.
One of the most common use cases is a host application, such as VMware, freeing storage
within a file system. Then, the storage system can reorganize the space, such as optimizing
the data on the volume or the pool so that space can be reclaimed.
A SCSI unmappable volume is a volume on which storage unprovisioning and space
reclamation can be triggered by the host OS. The system can pass the SCSI UNMAP command
through to back-end flash storage and external storage controllers that support the function.
This process occurs when volumes are formatted or deleted, when extents are migrated, or when an UNMAP
command is received from the host. SCSI UNMAP commands are sent only to the following
back-end controllers:
Beginning with Version 8.1.1: IBM FlashSystem A9000, IBM Storwize, and IBM
FlashSystem family products (excluding IBM FlashSystem 840 and IBM FlashSystem 900
AE2) and IBM PureSystems® storage systems
Beginning with 8.3.0.1: HPE Nimble storage systems
Back-end SCSI UNMAP commands help prevent an overprovisioned storage controller from
running out of free capacity for write I/O requests, which means that when you use supported
overprovisioned back-end storage, back-end SCSI UNMAP should be enabled.
Flash storage typically requires empty blocks to serve write I/O requests, which means UNMAP
can improve flash performance by erasing blocks in advance.
This feature is turned on by default. It is a best practice to keep back-end UNMAP enabled,
especially if a system is virtualizing an overprovisioned storage controller or uses FCM drives.
To verify that sending UNMAP commands to a back end is enabled, run the lssystem command,
as shown in Example 9-7.
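As a minimal sketch, assuming the back-end UNMAP setting is reported by lssystem in a backend_unmap field (the value shown is illustrative):
IBM FlashSystem 7200:ITSOFS7K:superuser>lssystem |grep backend_unmap
backend_unmap on
IBM FlashSystem 7200:ITSOFS7K:superuser>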
The system also sends SCSI UNMAP commands to back-end controllers that support them if
host unmaps for the corresponding blocks are received (and back-end UNMAP is enabled).
With host SCSI UNMAP enabled, some host types (for example, Windows, Linux, or VMware)
change their behavior when creating a file system on a volume, issuing SCSI UNMAP
commands to the whole capacity of the volume. The format completes only after all of these
UNMAP commands complete. Some host types run a background process (for example, fstrim
on Linux), which periodically issues SCSI UNMAP commands for regions of a file system that
are no longer required. Hosts might also send UNMAP commands when files are deleted in a
file system.
Host SCSI UNMAP commands drive more I/O workload to back-end storage. In some
circumstances (for example, volumes on a heavily loaded NL-serial-attached SCSI (SAS)
array), this situation can cause an increase in response times on volumes that use the same
storage. Also, host formatting time is likely to increase compared to a system that does not
support the SCSI UNMAP command.
If you use DRPs, an overprovisioned back end that supports UNMAP, or FCM drives, it is a best
practice to turn on SCSI UNMAP support. Host UNMAP support is enabled by default.
If only standard pools are configured and the back end is traditional (fully provisioned),
consider keeping host UNMAP support turned off because it does not provide any benefit.
To check and modify the current settings for host SCSI UNMAP support, run the lssystem and
chsystem CLI commands, as shown in Example 9-8.
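As a minimal sketch, assuming the host UNMAP setting is reported by lssystem in an unmap field and changed with the chsystem -unmap parameter (the field and parameter names and the values shown are assumptions and should be verified against the command reference for your code level):
IBM FlashSystem 7200:ITSOFS7K:superuser>lssystem |grep unmap
unmap off
backend_unmap on
IBM FlashSystem 7200:ITSOFS7K:superuser>chsystem -unmap on
IBM FlashSystem 7200:ITSOFS7K:superuser>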
Note: You can switch host UNMAP support on and off nondisruptively on the system side.
However, hosts might need to rediscover storage, or (in the worst case) be restarted for
them to stop sending UNMAP commands.
Offload commands, such as UNMAP and XCOPY, free hosts and speed the copy process by
offloading the operations of certain types of hosts to a storage system. These commands are
used by hosts to format new file systems, or copy volumes without the host needing to read
and then write data.
Offload commands can sometimes create I/O-intensive workloads, potentially taking
bandwidth from production volumes and affecting performance, especially if the underlying
storage cannot handle the amount of I/O that is generated.
Throttles can be used to delay processing for offloads to free bandwidth for other more critical
operations, which can improve performance but limits the rate at which host features, such as
VMware VMotion, can copy data. It can also increase the time that it takes to format file
systems on a host.
Note: For systems that are managing any NL storage, it might be a best practice to set the
offload throttle to 100 MBps.
To implement an offload throttle, run the mkthrottle command with the -type offload
parameter. In the GUI, select Monitoring → Systems Hardware, and then click System
Actions → Edit System Offload Throttle, as shown in Figure 9-16.
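As a minimal CLI sketch of creating a 100 MBps offload throttle and listing the configured throttles (command output is omitted here):
IBM FlashSystem 7200:ITSOFS7K:superuser>mkthrottle -type offload -bandwidth 100
IBM FlashSystem 7200:ITSOFS7K:superuser>lsthrottle
IBM FlashSystem 7200:ITSOFS7K:superuser>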
DRPs automatically reclaim used capacity that is no longer needed by host systems and
return it back to the pool as available capacity for future reuse.
The data reduction in DRPs is embedded in this pool type and no separate license is
necessary. This situation does not apply to real-time compression (RtC), where a specific
capacity-based license is needed.
Note: This book provides only an overview of DRP. For more information, see Introduction
and Implementation of Data Reduction Pools and Deduplication, SG24-8430.
Volumes in a DRP track when capacity is freed from hosts so that unused capacity
can be collected and reused within the storage pool. When a host no longer needs the data
that is stored on a volume, the host system uses SCSI UNMAP commands to release that
capacity from the volume. When these volumes are in DRPs, that capacity becomes
reclaimable capacity, and is monitored, collected, and eventually redistributed back to the
pool for use by the system.
Note: If the usable capacity usage of a DRP exceeds more than 85%, I/O performance can
be affected. The system needs 15% of usable capacity available in DRPs to ensure that
capacity reclamation can be performed efficiently.
At its core, a DRP uses a Log Structured Array (LSA) to allocate capacity. An LSA enables a
tree-like directory to define the physical placement of data blocks independent of size and
logical location.
Each volume has a range of logical block addresses (LBAs), starting from 0 and ending with
the block address that fills the capacity. The LSA enables the system to allocate data
sequentially when written to volumes (in any order) and provides a directory that provides a
lookup to match volume LBA with physical addresses within the array. A volume in a DRP
contains directory metadata to store the mapping from logical address on the volume to
physical location on the back-end storage.
This directory is too large to store in memory, so it must be read from storage as required.
The lookup and maintenance of this metadata results in I/O amplification. I/O amplification
occurs when a single host-generated read or write I/O results in more than one back-end
storage I/O request. For example, a read I/O request might need to read some directory
metadata in addition to the actual data. A write I/O request might need to read directory
metadata, write updated directory metadata, write journal metadata, and write the actual data.
Conversely, data reduction reduces the size of data that uses compression and deduplication,
so less data is written to the back-end storage.
IBM Spectrum Virtualize V8.4 introduces child pools for DRPs. A child pool is a folder-like
object within a parent DRP that contains volumes. A DRP child pool is quota-less: its
capacity is the sum of the capacities of all volumes within the child pool. A child pool can be assigned to an
ownership group and used to segment administrative domains. The parent pool and its associated
child pools share MDisks, the deduplication hash table, and encryption keys. Therefore, this technology
is suitable for separating departments of a single client, but not for separating different
clients.
At the time of writing, VMware vSphere Virtual Volumes (VVOLs) are not supported by child
pools in DRPs.
Due to the nature of the newly introduced child pools, there is a new type of volume migration
that is available to move volumes within a single DRP and its affiliated child pools. With this
migration, you can move volumes between all pools within one DRP entity.
The cost reductions that are achieved through software can facilitate the transition to all flash
storage. Flash storage has lower operating costs, lower power consumption, higher density,
and is cheaper to cool than disk storage. However, the cost of flash storage is still higher.
Data reduction can reduce the total cost of ownership (TCO) of an all-flash system to be
competitive with HDDs.
One benefit of DRP is in the form of capacity savings that are achieved by deduplication and
compression. Real-time deduplication identifies duplicate data blocks during write I/O
operations and stores a reference to the first copy of the data instead of writing the data to the
storage pool a second time. It does this task by maintaining a fingerprint database containing
hashes of data blocks already written to the pool. If new data that is written by hosts matches
an entry in this database, then a reference is generated in the directory metadata instead of
writing the new data.
Compression reduces the size of the host data that is written to the storage pool. DRP uses
the Lempel-Ziv based RtC and decompression algorithm. It offers a new implementation of
data compression that is fully integrated into the IBM Spectrum Virtualize I/O stack. It makes
optimal use of node resources such as memory and CPU cores, and uses hardware
acceleration on supported platforms efficiently. DRP compression operates on small block
sizes, which results in consistent and predictable performance.
Deduplication and compression can be combined, in which case data is first deduplicated and
then compressed. Therefore, deduplication references are created on the compressed data
that is stored on the physical domain.
DRP supports end-to-end SCSI UNMAP functions. Hosts use the set of SCSI UNMAP commands
to indicate that the formerly used capacity is no longer required on a target volume.
Reclaimable capacity is unused capacity that is created when data is overwritten, volumes
are deleted, or when data is marked as unneeded by a host by using the SCSI UNMAP
command. That capacity can be collected and reused on the system.
DRP works well with Easy Tier. The directory metadata of DRPs does not fit in memory, so it
is stored on disk by using dedicated metadata volumes that are separate from the actual data.
The metadata volumes are small but frequently accessed by small block I/O requests.
Performance gains are expected because they are optimal candidates for promotion to the
fastest tier of storage through Easy Tier. In contrast, data volumes with large but frequently
rewritten data are grouped to consolidate “heat”, so Easy Tier can accurately identify active data.
RAID Reconstruct Read (3R) is a technology to increase the reliability and availability of data
that is stored in DRPs. 3R is introduced in IBM Spectrum Virtualize V8.4.
All reads are evaluated, and if there is a mismatch, the data is reconstructed by using the
parity information. To eliminate rereading of corrupted data, the associated cache block is
marked invalid. This process works for internal and external back-end devices.
For more information about how to estimate the capacity savings that are achieved by
compression and deduplication, see 9.5, “Saving estimations for compression and
deduplication” on page 545.
The following software and hardware requirements must be met for DRP compression and
deduplication:
The system must run Version 8.1.3.2 or higher.
IBM FlashSystem 5010 is not supported.
IBM FlashSystem 5030 needs the Cache Upgrade option (#ALGA).
All other supported platforms need at least 32 GB of cache.
In most cases, it is a best practice to enable compression for all thin-provisioned and
deduplicated volumes. Overhead in DRPs is caused by metadata handling, which is the same
for compressed volumes and thin-provisioned volumes without compression.
In the IBM FlashSystem 5030 system, the limitation in CPU power and the lack of a hardware
accelerator might lead to a performance impact.
If the system contains self-compressing drives, DRPs provide a major benefit only if
deduplication is used and the estimated deduplication savings are significant. If there is no
plan to use deduplication or the expected deduplication ratio is low, consider using fully
allocated volumes instead and use drive compression for capacity savings. For more
information about how to estimate deduplication savings, see 9.5.2, “Evaluating compression
and deduplication” on page 547.
In systems with self-compressing drives, certain system configurations make determining
accurate physical capacity on the system difficult. If the system contains self-compressing
drives and DRPs with thin-provisioned volumes without compression, the system cannot
determine the accurate amount of physical capacity that is used on the system. In this case,
overcommitting and losing access to write operations is possible. To prevent this situation
from happening, use compressed volumes (with or without deduplication) or fully allocated
volumes. Separate compressed volumes and fully allocated volumes by using separate pools.
Similar considerations apply to configurations with compressing back-end storage controllers,
as described in 9.6, “Overprovisioning and data reduction on external storage” on page 548.
There is a maximum of four DRPs in a system. When this limit is reached, any additional
pools must be created as standard pools.
A DRP uses a customer data volume per I/O group to store volume data. There is a limit on
the maximum size of a customer data volume of 128,000 extents per I/O group, which places
a limit on the maximum physical capacity in a pool after data reduction that depends on the
extent size, number of DRPs, and number of I/O groups, as shown in Table 9-2. DRPs have a
minimum extent size of 1024 MB.
Overwriting data, unmapping data, and deleting volumes cause reclaimable capacity in the
pool to increase. Garbage collection is performed in the background to convert reclaimable
capacity to available capacity. This operation requires free capacity in the pool to operate
efficiently without impacting I/O performance. A best practice is to ensure that the provisioned
capacity with the DRP does not exceed 85% of the total usable capacity of the DRP.
To ensure that garbage collection is working properly, there is minimum capacity limit in a
single DRP depending on extent size and number of I/O groups, as shown in Table 9-3. Even
when there are no volumes in the pool, some of the space is used to store metadata. The
required metadata capacity depends on the total capacity of the storage pool and on the
extent size, which should be considered when planning capacity.
For more information about the considerations of using data reduction on the system and the
back-end storage, see 9.6, “Overprovisioning and data reduction on external storage” on
page 548.
To create a volume within a DRP, select Volumes → Volumes, and click Create Volumes.
Figure 9-17 on page 539 shows the Create Volumes dialog. In the Capacity Savings menu,
the following selections are available: None, thin provisioned, and Compressed. If
Compressed or thin provisioned is selected, the Deduplicated option also becomes
available and can be selected.
Figure 9-17 Creating a compressed volume
Capacity monitoring
Capacity monitoring in DRPs is mainly done on the system and storage pool levels. Use the
Dashboard in the GUI to view a summary of the capacity usage and capacity savings of the
entire system.
To see more detailed capacity reporting including the warning threshold and capacity savings,
open the pool properties dialog by right-clicking a pool and selecting Properties. This dialog
shows the savings that are achieved by thin provisioning, compression, and deduplication,
and the total data reduction savings in the pool, as shown in Figure 9-19 on page 541. In
addition, the Reclaimable capacity is shown, which is unused capacity that is created when
data is overwritten, volumes are deleted, or when data is marked as unneeded by a host by
using the SCSI UNMAP command. This capacity is converted to available capacity by the
garbage collection background process.
Figure 9-19 Capacity reporting in a Data Reduction Pool
The capacity reporting shows 267 GiB of used capacity although there is only a single virtual disk
(VDisk) copy with 100 GiB of provisioned capacity. This difference is the result of a DRP
reservation for deduplication and compression: some extents are marked as used although only
some bytes are written. With increasing pool usage, the impact of this reservation
decreases.
The CLI can be used for limited capacity reporting on the volume level. The
used_capacity_before_reduction entry indicates the total amount of data that is written to a
thin-provisioned or compressed volume copy in a data reduction storage pool before data
reduction occurs. This field is empty for fully allocated volume copies and volume copies not
in a DRP.
To find this value, run the lsvdisk command with a volume name or ID as a parameter, as
shown in Example 9-9. It shows a thin-provisioned volume without compression and
deduplication with a virtual size of 1 TiB that is provisioned to the host. A 53 GB file was
written from the host.
Example 9-9 Data Reduction Pool volume capacity reporting on the CLI
IBM FlashSystem 7200:ITSOFS7K:superuser>lsvdisk thin_provisioned
id 34
name vdisk1
capacity 1.00TB
used_capacity
real_capacity
free_capacity
tier tier_scm
tier_capacity 0.00MB
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity
deduplicated_copy no
used_capacity_before_reduction 53.04GB
The used, real, and free capacity, and the capacity that is stored on each storage tier, is not
shown for volumes (except fully allocated volumes) in DRPs.
Capacity reporting on the pool level is available by running the lsmdiskgrp command with the
pool ID or name as a parameter, as shown in Example 9-10.
Example 9-10 Data Reduction Pool capacity reporting on the pool level (partially shown)
used_capacity 1.14TB
real_capacity 1.14TB
tier tier_scm
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier0_flash
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier1_flash
tier_capacity 5.00TB
tier_free_capacity 3.85TB
tier tier_enterprise
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier_nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
child_mdisk_grp_capacity 0.00MB
used_capacity_before_reduction 143.68GB
used_capacity_after_reduction 94.64GB
overhead_capacity 52.00GB
deduplication_capacity_saving 36.20GB
reclaimable_capacity 0.00MB
physical_capacity 5.00TB
physical_free_capacity 3.85TB
For more information about every reported value, see IBM FlashSystem 9200 documentation
and expand Command-line interface → Storage pool commands → lsmdiskgrp.
Also, real-time compressed volumes cannot coexist with data-reduced DRP volumes in a
single I/O group. Therefore, migrating such volumes involves extra considerations. One possible
solution might be to first inflate the real-time compressed volumes in standard pools and then,
in a second step, migrate these volumes to data-reduced volumes in a DRP.
Note: All volumes that cannot coexist with data-reduced DRP volumes must be migrated in
a single step.
Depending on the system configuration and the type of migration, a one-step migration or a
two-step migration is necessary. The reason is that compressed volumes in standard pools
cannot coexist with deduplicated volumes in DRPs. Therefore, a two-step migration is
required in the following scenarios.
2. After you click Add, synchronization starts. The time that synchronization takes to
complete depends on the size of the volume, system performance, and the configured
migration rate. You can increase the synchronization rate by right-clicking the volume and
selecting Modify Mirror Sync Rate.
When both copies are synchronized, Yes is displayed for both copies in the Synchronized
column in the Volumes window. You can track the synchronization process by using the
Running tasks window, as shown in Figure 9-22. After it reaches 100% and the copies are
in-sync, you can complete migration by deleting the source copy.
When a DRP is created, the system monitors the pool for reclaimable capacity from host
UNMAP operations. Freeing space from a host OS is a process called unmapping: hosts
indicate that the allocated capacity is no longer required on a target volume. The freed
space is collected and reused by the system automatically without having to reallocate the
capacity manually.
After this process completes, the volume copies are deleted and disappear from the system
configuration. In a second step, garbage collection can give the reclaimable capacity that is
generated in the first step back to the pool as available capacity, which means that the used
capacity of a removed volume is not available for reuse immediately after the removal.
The time that it takes to delete a thin-provisioned or compressed volume copy depends on the
size of the volume, the system configuration, and the workload. For deduplicated copies, the
duration also depends on the amount and size of other deduplicated copies in the pool, which
means that it might take a long time to delete a small deduplicated copy if there are many
other deduplicated volumes in the same pool. The deletion process is a background process
that might impact system overall performance.
The deleting state of a volume or volume copy can be seen by running the lsvdisk command.
The GUI hides volumes in this state, but it shows deleting volume copies if the volume
contains another copy.
When one copy of a mirrored volume is in the deleting state, it is not possible to add a copy to
the volume before the deletion finishes. If a new copy must be added without waiting for the
deletion to complete, first split the copy that must be deleted into a new volume, and then
delete the new volume and add a new second copy to the original volume. To split a copy into
a new volume, right-click the copy and select Split into New Volume in the GUI or run the
splitvdiskcopy command on the CLI.
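As a minimal CLI sketch of this workaround, assuming a volume named vdisk0 whose copy 1 must be deleted and replaced with a new copy in a pool named DRPool0 (all names and the copy ID are illustrative):
IBM FlashSystem 7200:ITSOFS7K:superuser>splitvdiskcopy -copy 1 -name vdisk0_oldcopy vdisk0
IBM FlashSystem 7200:ITSOFS7K:superuser>rmvdisk vdisk0_oldcopy
IBM FlashSystem 7200:ITSOFS7K:superuser>addvdiskcopy -mdiskgrp DRPool0 vdisk0
IBM FlashSystem 7200:ITSOFS7K:superuser>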
If the compression savings prove to be beneficial in your environment, volume mirroring can
be used to convert volumes to compressed volumes.
To see the results and the date of the latest estimation cycle, as shown in Figure 9-23, go to
the Volumes window, right-click any volume, and select Space Savings → Estimate
Compression Savings. If no analysis was done, the system suggests running it. A new
estimation of all volumes can be started from this dialog. To run or rerun analysis on a single
volume, select Analyze in the Space Savings submenu.
To analyze all the volumes on the system from the CLI, run the analyzevdiskbysystem
command.
The command analyzes all the current volumes that are created on the system. Volumes that
are created during or after the analysis are not included and can be analyzed individually. The
time that it takes to analyze all the volumes on system depends on the number of volumes
that are being analyzed, and results can be expected at about a minute per volume. For
example, if a system has 50 volumes, compression savings analysis takes approximately 50
minutes.
You can run an analysis on a single volume by specifying its name or ID as a parameter for
the analyzevdisk CLI command.
To check the progress of the analysis, run the lsvdiskanalysisprogress command. This
command displays the total number of volumes on the system, total number of volumes that
are remaining to be analyzed, and estimated time of completion.
To display information for the thin provisioning and compression estimation analysis report for
all volumes, run the lsvdiskanalysis command.
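As a minimal sketch of these commands (the volume name vdisk0 is illustrative and the command output is omitted here):
IBM FlashSystem 7200:ITSOFS7K:superuser>analyzevdiskbysystem
IBM FlashSystem 7200:ITSOFS7K:superuser>lsvdiskanalysisprogress
IBM FlashSystem 7200:ITSOFS7K:superuser>analyzevdisk vdisk0
IBM FlashSystem 7200:ITSOFS7K:superuser>lsvdiskanalysis vdisk0
IBM FlashSystem 7200:ITSOFS7K:superuser>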
If you are using a version of IBM Spectrum Virtualize that is older than Version 7.6 or if you
want to estimate the compression savings of another IBM or non-IBM storage system, the
separate IBM Comprestimator Utility can be installed on a host that is connected to the device
that needs to be analyzed. For more information and the latest version of this utility, see
IBM Comprestimator Utility Version 1.5.3.1.
Note: IBM Comprestimator can run for a long period (a few hours) when it is scanning a
relatively empty device. The utility randomly selects and reads 256 KB samples from the
device. If the sample is empty (that is, full of null values), it is skipped. A minimum number
of samples with data is required to provide an accurate estimation. When a device is
mostly empty, many random samples are empty. As a result, the utility runs for a longer
time as it tries to gather enough non-empty samples that are required for an accurate
estimate. The scan is stopped if the number of empty samples is over 95%.
The DRET utility uses advanced mathematical and statistical algorithms to perform an
analysis with a low memory footprint. The utility runs on a host that has access to the devices
to be analyzed. It performs only read operations, so it has no effect on the data that is stored
on the device.
The following sections provide information about installing DRET on a host and using it to
analyze devices on it. Depending on the environment configuration, in many cases DRET is
used on more than one host to analyze more data types.
When DRET is used to analyze a block device that is used by a file system, all underlying
data in the device is analyzed regardless of whether this data belongs to files that were
already deleted from the file system. For example, you can fill a 100 GB file system and make
it 100% used, and then delete all the files in the file system to make it 0% used. When
scanning the block device that is used for storing the file system in this example, DRET
accesses the data that belongs to the files that are deleted.
Important: The preferred method of using DRET is to analyze volumes that contain as
much active data as possible rather than volumes that are mostly empty of data, which
increases the accuracy level and reduces the risk of analyzing old data that is deleted but
might still have traces on the device.
Overprovisioned MDisks from controllers that are supported by this feature can be used as
managed mode MDisks in the system and can be added to storage pools (including DRPs).
Implementation steps for overprovisioned MDisks are the same as for fully allocated storage
controllers. The system detects whether the MDisk is overprovisioned, its total physical
capacity, and used and remaining physical capacity. It detects whether SCSI UNMAP
commands are supported by the back end. By sending SCSI UNMAP commands to
overprovisioned MDisks, the system marks data that is no longer in use. Then, the garbage
collection processes on the back end can free unused capacity and convert it to free space.
At the time of writing, the following back-end controllers are supported by overprovisioned
MDisks:
IBM FlashSystem A9000 V12.1.0 and later
IBM FlashSystem 900 V1.4 and later
IBM FlashSystem V9000 AE2 and AE3 expansions
IBM Storwize or IBM FlashSystem family systems Version 8.1.0 and later
PureSystems storage
HPE Nimble
Extra caution is required when planning and monitoring capacity for such configurations.
Table 9-4 shows an overview of configuration guidelines when using overprovisioned external
storage controllers.
Table 9-4 Configuration guidelines for overprovisioned external storage controllers (System / Back end / Comments; partially shown)
System: DRP. Back end: Fully allocated. Recommended. Use DRP on the system to plan for compression and deduplication. DRP at the top level provides the best application capacity reporting.
System: Fully allocated. Back end: Overprovisioned, multiple tiers of storage. Use with great care. Easy Tier is unaware of physical capacity in tiers of a hybrid pool, so it tends to fill the top tier with the hottest data. Changes in the compressibility of data in the top tier can overcommit the storage, which leads to out-of-space conditions.
System: DRP with thin-provisioned or fully allocated volumes. Back end: Overprovisioned. Avoid. It is difficult to understand the physical capacity usage of the uncompressed volumes, and there is a high risk of overcommitting the back end. If a mix of DRP and fully allocated volumes is required, use separate pools.
System: DRP. Back end: DRP. Avoid. This configuration creates two levels of I/O amplification and capacity impact. DRP at the bottom layer provides no benefit.
When using DRPs with a compressing back-end controller, use compression in DRP and
avoid overcommitting the back end by assuming a 1:1 compression ratio in back-end storage.
Small extra savings are realized from compressing metadata.
If the back-end controller uses FCM drives, which always compress with hardware
acceleration, the same methodology should be used. Any eventual capacity savings can
be used by creating more MDisks and adding them to the DRP.
Note: Fully allocated volumes that are above overprovisioned MDisks configurations must
be avoided or used with extreme caution because it can lead to overcommitting back-end
storage.
The concept of provisioning groups is used for capacity reporting and monitoring of
overprovisioned external storage controllers. A provisioning group is an object that represents
a set of MDisks that share physical resources. Each overprovisioned MDisk is part of a
provisioning group that defines the physical storage resources that are available to a set of
MDisks.
Storage controllers report the usable capacity of an overprovisioned MDisk based on its
provisioning group. If multiple MDisks are part of the same provisioning group, then all these
MDisks share the physical storage resources and report the same usable capacity. However,
this usable capacity is not available to each MDisk individually because it is shared between
all these MDisks.
Provisioning groups are used differently depending on the back-end storage, as shown in the
following examples:
IBM FlashSystem A9000 and IBM FlashSystem 900: The entire subsystem forms one
provisioning group.
Storwize and IBM FlashSystem family systems: The storage pool forms a provisioning
group, which enables more than one independent provisioning group in a system.
RAID with compressing drives: An array is a provisioning group that presents the physical
storage that is in use much like an external array.
The overprovisioning details of an MDisk are available in the MDisk properties window,
which opens by selecting Pools → MDisks by Pools, right-clicking an MDisk, and then
selecting the Properties option, as shown in Figure 9-26.
Running lsmdisk with an MDisk name or ID as a parameter displays the full properties of the
MDisk, as shown in Example 9-11.
The overprovisioning status and SCSI UNMAP support for the selected MDisk are displayed.
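As a minimal sketch, assuming an MDisk named mdisk1 and that the detailed lsmdisk view reports overprovisioning in over_provisioned and supports_unmap fields (field names and values are assumptions; output abbreviated):
IBM FlashSystem 7200:ITSOFS7K:superuser>lsmdisk mdisk1
id 1
name mdisk1
<...>
over_provisioned yes
supports_unmap yes
IBM FlashSystem 7200:ITSOFS7K:superuser>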
Note: It is not a best practice to create multiple storage pools from MDisks in a single
provisioning group.
You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system (OS) and its cache. Therefore, the copy is not
apparent to the host unless it is mapped.
While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap after which I/O can resume. Although several FlashCopy options
require the data to be copied from the source to the target in the background (which can take
time to complete), the resulting data on the target volume is presented so that the copy
appears to complete immediately. This feature means that the copy can immediately be
mapped to a host and is directly accessible for read and write operations.
The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples of rapidly creating:
Consistent backups of dynamically changing data
Consistent copies of production data to facilitate data movement or migration between
hosts
Copies of production data sets for application development and testing, auditing purposes
and data mining, and for quality assurance
Regardless of your business needs, FlashCopy within IBM Spectrum Virtualize is flexible and offers a broad feature set, which makes it applicable to many scenarios.
After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is completed, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these methods puts less load on your server infrastructure.
When FlashCopy is used for backup purposes, the target data often is managed as read-only
at the OS level. This approach provides extra security by ensuring that your target data was
not modified and remains true to the source.
Restore with FlashCopy
FlashCopy can perform a restore from any FlashCopy mapping. Therefore, you can restore (or copy) from the target to the source of your regular FlashCopy relationships. Restoring data from a FlashCopy in this way can be described as reversing the direction of the FlashCopy mapping.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
Best practices: Although restoring from a FlashCopy is quicker than a traditional tape
media restore, you must not use restoring from a FlashCopy as a substitute for good
backup and archiving practices. Instead, keep one to several iterations of your FlashCopies
so that you can near-instantly recover your data from the most recent history, and keep
your long-term backup and archive as suitable for your business.
In addition to the restore option that copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files. To do that, you make the target available on a host. It is suggested that you do not make the target available to the source host, because seeing duplicate disks causes problems for most host OSs. Copy the files to the source by using normal host data copy methods for
your environment.
For more information about how to use reverse FlashCopy, see 10.1.12, “Reverse FlashCopy”
on page 575.
This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.
You can create a FlashCopy of your source and use that for your testing. This copy is a
duplicate of your production data down to the block level so that even physical disk identifiers
are copied. Therefore, it is impossible for your applications to tell the difference.
When a FlashCopy operation starts, a checkpoint creates a bitmap table that indicates that
no part of the source volume was copied. Each bit in the bitmap table represents one region
of the source volume and its corresponding region on the target volume. Each region is called
a grain.
The relationship between the two volumes defines the way data is copied and is called a FlashCopy mapping.
FlashCopy mappings between multiple volumes can be grouped in a Consistency group to
ensure their PiT (or T0) is identical for all of them. A simple one-to-one FlashCopy mapping
does not need to belong to a consistency group.
Figure 10-1 shows the basic terms that are used with FlashCopy. All elements are explained
later in this chapter.
Backup: Sometimes referred to as incremental, a backup FlashCopy mapping consists of
a PiT full copy of a source volume, plus periodic increments or “deltas” of data that
changed between two points in time.
The FlashCopy mapping has four property attributes (clean rate, copy rate, autodelete,
incremental) and seven different states that are described later in this chapter. Users can
perform the following actions on a FlashCopy mapping:
Create: Define a source and target, and set the properties of the mapping.
Prepare: The system must be prepared before a FlashCopy copy starts. It flushes the
cache and makes it “transparent” for a short time, so no data is lost.
Start: The FlashCopy mapping is started and the copy begins immediately. The target
volume is immediately accessible.
Stop: The FlashCopy mapping is stopped (by the system or by the user). Depending on
the state of the mapping, the target volume is usable or not usable.
Modify: Some properties of the FlashCopy mapping can be modified after creation.
Delete: Delete the FlashCopy mapping. This action does not delete volumes (source or
target) from the mapping.
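These actions map directly to CLI commands. The following is a minimal sketch with hypothetical volume and mapping names; verify the exact syntax and options for your code level:
# Create a mapping between two existing, equal-sized volumes
mkfcmap -source VOL_SRC -target VOL_TGT -name fcmap0 -copyrate 50
# Prepare (flush the cache), then start the copy
prestartfcmap fcmap0
startfcmap fcmap0
# Modify a property, for example the background copy rate
chfcmap -copyrate 80 fcmap0
# Stop the mapping, then delete it (the source and target volumes are kept)
stopfcmap fcmap0
rmfcmap fcmap0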
The source and target volumes must be the same size. The minimum granularity that IBM
Spectrum Virtualize supports for FlashCopy is an entire volume. It is not possible to use
FlashCopy to copy only part of a volume.
Important: As with any PiT copy technology, you are bound by OS and application
requirements for interdependent data and the restriction to an entire volume.
The source and target volumes must belong to the same IBM Spectrum Virtualize based
system. They do not have to be in the same I/O group or storage pool, although it is
recommended that they have the same preferred node for the best performance.
Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.
All FlashCopy operations occur on FlashCopy mappings. FlashCopy does not alter the
volumes. However, multiple operations can occur at the same time on multiple FlashCopy
mappings because of the use of consistency groups.
Consistency groups address the requirement to preserve PiT data consistency across
multiple volumes for applications that include related data that spans multiple volumes. For
these volumes, consistency groups maintain the integrity of the FlashCopy by ensuring that
“dependent writes” are run in the application’s intended sequence. Also, consistency groups
provide an easy way to manage several mappings.
FlashCopy mappings can be part of a consistency group, even if only one mapping exists in
the consistency group. If a FlashCopy mapping is not part of any consistency group, it is
referred as stand-alone.
The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.
In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.
Most of the actions that the user can perform on a FlashCopy mapping are the same for
consistency groups.
Applications and host OSs each have various levels and methods of caching data to provide better speed. Because IBM Spectrum Virtualize and FlashCopy sit below these layers, they are unaware of the cache at the application or OS layers.
To ensure the integrity of the copy that is made, it is necessary to flush the host OS and
application cache for any outstanding reads or writes before the FlashCopy operation is
performed. Failing to flush the host OS and application cache produces what is referred to as
a crash consistent copy.
The resulting copy requires the same type of recovery procedure, such as log replay and file
system checks, that is required following a host crash. FlashCopies that are crash consistent
often can be used after file system and application recovery procedures.
Various OSs and applications provide facilities to stop I/O operations and ensure that all data
is flushed from host cache. If these facilities are available, they can be used to prepare for a
FlashCopy operation. When this type of facility is unavailable, the host cache must be flushed
manually by quiescing the application and unmounting the file system or drives.
The target volumes are overwritten with a complete image of the source volumes. Before the
FlashCopy mappings are started, any data that is held on the host OS (or application) caches
for the target volumes must be discarded. The easiest way to ensure that no data is held in
these caches is to unmount the target volumes before the FlashCopy operation starts.
Best practice: From a practical standpoint, when you have an application that is backed
by a database and you want to make a FlashCopy of that application’s data, it is sufficient
in most cases to use the write-suspend method that is available in most modern
databases. This approach is possible because the database maintains strict control over
I/O.
This method is an alternative to flushing data from the application and backing database, which is always the preferred method because it is safer. However, the write-suspend method can be used when such facilities do not exist or when your environment is time sensitive.
IBM Spectrum Protect Snapshot protects data with integrated, application-aware snapshot
backup and restore capabilities that use FlashCopy technologies in the IBM Spectrum
Virtualize.
You can protect data that is stored by IBM Db2®, SAP, Oracle, Microsoft Exchange, and Microsoft SQL Server applications. You can create and manage volume-level snapshots for file systems and custom applications.
In addition, you can use IBM Spectrum Protect Snapshot to manage frequent, near-instant,
nondisruptive, application-aware backups and restores that use integrated application and
VMware snapshot technologies. IBM Spectrum Protect Snapshot can be widely used in IBM
and non-IBM storage systems.
Other IBM products are also available for application-aware backup and restore capabilities,
such as IBM Spectrum Protect Plus and IBM Copy Data Management. For more information
about these offerings, speak to your IBM representative.
Note: To see how IBM Spectrum Protect Snapshot, IBM Spectrum Protect Plus, and
IBM Copy Data Management can help your business, see IBM Documentation.
You can think of the bitmap as a simple table of ones and zeros. The table tracks the differences between the grains of a source volume and the grains of a target volume. At the creation of the FlashCopy mapping, the table is filled with zeros, which indicates that no grain is copied yet.
The grain size can be 64 KB or 256 KB (the default is 256 KB). The grain size cannot be
selected by the user when a FlashCopy mapping is created from the GUI. The FlashCopy
bitmap contains 1 bit for each grain. The bit records whether the associated grain is split by
copying the grain from the source to the target.
After a FlashCopy mapping is created, the grain size for that FlashCopy mapping cannot be changed. If the grain size parameter is not specified when a new mapping is created and one of the volumes is already part of another FlashCopy mapping, the grain size of that existing mapping is used. If neither volume in the new mapping is part of another FlashCopy mapping and at least one of the volumes in the mapping is a compressed volume, the default grain size is 64 KB for performance considerations. Otherwise, the default grain size is 256 KB.
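Because the GUI does not expose the grain size, selecting it explicitly is only possible at creation time from the CLI. A minimal sketch with hypothetical names (verify the syntax for your code level):
mkfcmap -source VOL_SRC -target VOL_TGT -name fcmap_64k -grainsize 64 -copyrate 50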
With CoW, as shown in Figure 10-3, when data is written on a source volume, the grain where
the to-be-changed blocks are stored is first copied to the target volume and then modified on
the source volume. The bitmap is updated to track the copy.
With RoW, when the source volume is modified, the updated grain is written directly to a new
block in the DRP customer data volume. The source volume metadata and FlashCopy bitmap
are then updated to reflect this update. RoW was introduced with IBM Spectrum Virtualize
V8.4 for DRPs only. Compared to CoW, RoW reduces the back-end activity by removing the
copy operation, which improves the overall performance of FlashCopy operations.
Note: At the time of writing, RoW is used only for volumes with supported deduplication,
without a mirroring relationship, and when both the source and target volumes are within
the same pool and I/O group. The selection between CoW versus RoW is automatically
done by the base code under these conditions.
With IBM FlashCopy, the target volume is immediately accessible for read and write
operations. Therefore, a target volume can be modified even if it is part of a FlashCopy
mapping. In standard pools, as shown in Figure 10-4, when a write operation is performed on
the target volume, the grain that contains the blocks to be changed is first copied from the
source (CoD). Then, the grain is modified with the new value. The bitmap is modified so that
the grain from the source is not copied again, even if it is changed or a background copy is
enabled.
Note: If all the blocks of the grain to be modified are changed, there is no need to first read or copy the source grain. No CoD occurs, and the grain is modified directly on the target volume.
The indirection layer intercepts any I/O coming from a host (read or write operation) that is addressed to a FlashCopy volume (source or target). It determines whether the addressed volume is a source or a target, the direction of the I/O (read or write), and the state of the bitmap table for the FlashCopy mapping that the addressed volume is in. It then decides what operation to perform. The different I/O indirections are described next.
If the bitmap indicates that the grain is not yet copied, the grain is first copied to the target
(CoW), the bitmap table is updated, and the grain is modified on the source, as shown in
Figure 10-6. This process is true for standard pools in IBM Spectrum Virtualize V8.4 or
later or any pool type if the version is lower than Version 8.4.
If this pool is a DRP in a system running IBM Spectrum Virtualize V8.4 or later, the system
does a RoW operation, as described in “Copy-on-write, redirect-on-write, and Copy on
Demand” on page 560.
If the bitmap indicates the grain to be modified on the target was copied, it is directly
changed. The bitmap is not updated, and the grain is modified on the target with the new
value, as shown in Figure 10-8.
Note: The bitmap is not updated in this case. Otherwise, the grain might be copied from the source later, if a background copy is ongoing or if write operations are made on the source, which would overwrite the changed grain on the target.
Figure 10-9 Reading a grain on a target
If the source has multiple targets, the indirection layer algorithm behaves differently for target I/Os. For more information about multi-target operations, see 10.1.11, “Multiple target FlashCopy” on page 570.
Figure 10-10 shows the IBM Spectrum Virtualize software stack, which includes the cache
architecture.
Also, the two-level cache provides more performance improvements to the FlashCopy
mechanism. Because the FlashCopy layer is above the lower cache in the IBM Spectrum
Virtualize software stack, it can benefit from read prefetching and coalescing writes to
back-end storage. FlashCopy benefits from the two-level cache because the upper cache
write data does not have to go directly to back-end storage, but to the lower cache layer
instead.
The background copy rate property determines the speed at which grains are copied as a
background operation, immediately after the FlashCopy mapping is started. That speed is
defined by the user when the FlashCopy mapping is created, and can be changed
dynamically for each individual mapping, whatever its state. Mapping copy rate values can be
0 - 150, with the corresponding speeds that are listed in Table 10-1.
User-specified copy rate value    Data copied/sec    256 KiB grains/sec    64 KiB grains/sec
11 - 20                           256 KiB            1                     4
21 - 30                           512 KiB            2                     8
31 - 40                           1 MiB              4                     16
41 - 50                           2 MiB              8                     32
51 - 60                           4 MiB              16                    64
61 - 70                           8 MiB              32                    128
71 - 80                           16 MiB             64                    256
When the background copy function is not performed (copy rate = 0), the target volume
remains a valid copy of the source data only while the FlashCopy mapping remains in place.
The grains per second numbers represent the maximum number of grains that IBM Spectrum
Virtualize copies per second. This amount assumes that the bandwidth to the managed disks
(MDisks) can accommodate this rate.
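As a worked example that is based on Table 10-1, a copy rate value in the 41 - 50 band attempts to copy 2 MiB per second; with the default 256 KiB grain size, that is 2 MiB / 256 KiB = 8 grains per second, and with a 64 KiB grain size it is 32 grains per second. The rate can also be changed dynamically on an existing mapping, for example by running a command similar to chfcmap -copyrate 100 fcmap0 (the mapping name is hypothetical).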
If IBM Spectrum Virtualize cannot achieve these copy rates because of insufficient bandwidth
from the nodes to the MDisks, the background copy I/O contends for resources on an equal
basis with the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving
from the hosts tend to see an increase in latency and a consequential reduction in
throughput.
Background copy and foreground I/O continue to progress, and do not stop, hang, or cause
the node to fail.
The background copy is performed by one of the nodes that belong to the I/O group in which the source volume is stored. This responsibility is moved to the other node in the I/O group if the node that performs the background copy or stopping (cleaning) copy fails.
Specifying the -incremental option when creating the FlashCopy mapping allows the system to keep the bitmap as it is when the mapping is stopped. Therefore, when the mapping is started again (at another PiT), the bitmap is reused and only the changes between the two copies are applied to the target.
A system that provides Incremental FlashCopy capability allows the system administrator to
refresh a target volume without having to wait for a full copy of the source volume to be
complete. At the point of refreshing the target volume, if the data was changed on the source
or target volumes for a particular grain, the grain from the source volume is copied to the
target.
The advantages of Incremental FlashCopy are realized only if a previous full copy of the source volume was obtained. Incremental FlashCopy helps only with subsequent recovery time objectives (RTOs, which are the time that is needed to recover data from a previous state); it does not help with the initial RTO.
For example, as shown in Figure 10-11 on page 568, a FlashCopy mapping was defined
between a source volume and a target volume by using the -incremental option.
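A minimal CLI sketch of an incremental mapping follows (the volume and mapping names are hypothetical; verify the syntax for your code level):
# Create the mapping with the incremental attribute
mkfcmap -source VOL_PROD -target VOL_BKUP -name fcmap_inc -copyrate 80 -incremental
# The first start performs a full copy
startfcmap -prep fcmap_inc
# Later starts copy only the grains that changed since the previous copy
startfcmap -prep fcmap_inc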
When the command-line interface (CLI) is used to perform FlashCopy on volumes, run the prestartfcmap or prestartfcconsistgrp command before you start a FlashCopy (regardless of the type and options specified). These commands put the cache into write-through mode and flush the I/O that is bound for your volume. After FlashCopy is started, an effective copy of a source volume to a target volume is created.
The content of the source volume is presented immediately on the target volume and the
original content of the target volume is lost.
Then, FlashCopy commands can be run to the FlashCopy consistency group and
simultaneously for all the FlashCopy mappings that are defined in the consistency group. For
example, when a FlashCopy start command is run to the consistency group, all of the
FlashCopy mappings in the consistency group are started at the same time. This
simultaneous start results in a PiT copy that is consistent across all of the FlashCopy
mappings that are contained in the consistency group.
Rather than running prestartfcmap or prestartfcconsistgrp, you can also use the -prep
parameter in the startfcmap or startfcconsistgrp command to prepare and start FlashCopy
in one step.
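For example, the following two approaches have the same effect (a sketch only; the consistency group name is hypothetical):
# Two-step: prepare first, then start
prestartfcconsistgrp FCCG1
startfcconsistgrp FCCG1
# One-step: prepare and start together
startfcconsistgrp -prep FCCG1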
Figure 10-12 shows these different possibilities.
Every source-target relationship is a FlashCopy mapping and is maintained with its own
bitmap table. No consistency group bitmap table exists.
If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate consistency groups to separate each mapping of the same source volume.
Regardless of whether the source volume with multiple target volumes is in the same
consistency group or in separate consistency groups, the resulting FlashCopy produces
multiple identical copies of the source data.
Dependencies
When a source volume has multiple target volumes, a mapping is created for each
source-target relationship. When data is changed on the source volume, it is first copied to
the target volume because of the CoW mechanism that is used by FlashCopy. If running
IBM Spectrum Virtualize V8.4 or later, for DRP pools the software uses a RoW mechanism
instead, as described in “Copy-on-write, redirect-on-write, and Copy on Demand” on
page 560.
To avoid any significant effect on performance because of multiple targets, FlashCopy creates
dependencies between the targets. Dependencies can be considered as “hidden” FlashCopy
mappings that are not visible to and cannot be managed by the user. A dependency is
created between the most recent target and the previous one (in order of start time).
Figure 10-13 shows an example of a source volume with three targets.
When the three targets are started, Target T0 was started first and considered the “oldest.”
Target T1 was started next and is considered “next oldest,” and finally, Target T2 was started
last and considered the “most recent” or “newest.” The “next oldest” target for T2 is T1. The “next oldest” target for T1 is T0. T2 is newer than T1, and T1 is newer than T0.
Source write with multiple target FlashCopy (CoW)
A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the most recent target volume only. For example, consider the sequence of
events that are listed in Table 10-3 for a source volume and three targets that are started at
different times. In this example, no background copy exists. The “most recent” target is
indicated with an asterisk.
Table 10-3 Sequence example of write I/Os on a source with multiple targets
Time Source volume Target T0 Target T1 Target T2
An intermediate target disk (not the oldest or the newest) treats the set of newer target
volumes and the true source volume as a type of composite source. It treats all older volumes
as a kind of target (and behaves like a source to them).
How a read or write to a target volume is handled depends on whether the grain was already copied to that target:
– Grain not yet copied, read: If any newer targets exist for this source in which this grain was copied, read from the oldest of these targets. Otherwise, read from the source.
– Grain not yet copied, write: Hold the write. Check the dependency target volumes to see whether the grain was copied. If the grain is not copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.
– Grain already copied, read: Read from the target volume.
– Grain already copied, write: Write to the target volume.
For example, if the mapping Source-T2 was stopped, the mapping enters the stopping state
while the cleaning process copies the data of T2 to T1 (next oldest). After all of the data is
copied, Source-T2 mapping enters the stopped state, and T1 is no longer dependent upon
T2. However, T0 remains dependent upon T1.
For example, as shown in Table 10-3 on page 573, if you stop the Source-T2 mapping on
“Time 7,” then the grains that are not yet copied on T1 are copied from T2 to T1. Reading T1
is then like reading the source at the time T1 was started (“Time 2”).
As another example, with Table 10-3 on page 573, if you stop the Source-T1 mapping on
“Time 8,” the grains that are not yet copied on T0 are copied from T1 to T0. Reading T0 is
then similar to reading the source at the time T0 was started (“Time 0”).
If you stop the Source-T1 mapping while Source-T0 mapping and Source-T2 are still in
copying mode, the grains that are not yet copied on T0 are copied from T1 to T0 (next oldest).
T0 now depends upon T2.
Your target volume is still accessible while the cleaning process is running. When the system
is operating in this mode, it is possible that host I/O operations can prevent the cleaning
process from reaching 100% if the I/O operations continue to copy new data to the target
volumes.
Cleaning rate
The rate at which data is copied from the target of the mapping that is being stopped to the next oldest target is determined by the cleaning rate. This property of a FlashCopy mapping can be changed dynamically. It is specified in the same way as the copyrate property, but the two properties are independent of each other. Table 10-5 lists the relationship of the cleaning rate values to the attempted number of grains to be split per second.
Cleaning rate value    Data copied/sec    256 KiB grains/sec    64 KiB grains/sec
11 - 20                256 KiB            1                     4
21 - 30                512 KiB            2                     8
31 - 40                1 MiB              4                     16
41 - 50                2 MiB              8                     32
51 - 60                4 MiB              16                    64
61 - 70                8 MiB              32                    128
71 - 80                16 MiB             64                    256
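As with the copy rate, the cleaning rate can be set when a mapping is created or changed dynamically afterward. A minimal sketch with a hypothetical mapping name (verify the syntax for your code level):
chfcmap -cleanrate 60 fcmap0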
IBM Spectrum Virtualize also allows an optional copy of the source volume to be made before the reverse copy operation starts. This ability to restore back to the original source data can be useful for diagnostic purposes.
The production disk is instantly available with the backup data. Figure 10-14 shows an
example of Reverse FlashCopy with a simple FlashCopy mapping (single target).
This example assumes that a simple FlashCopy mapping was created between the “source”
volume and “target” volume, and no background copy is set.
When the FlashCopy mapping starts (Date of Copy1), if source volume is changed (write
operations on grain “A”), the modified grains are first copied to target, the bitmap table is
updated, and the source grain is modified (from “A” to “G”).
At a specific time (“Corruption Date”), data is modified on another grain (grain “D” below), so it
is first written on the target volume and the bitmap table is updated. Unfortunately, the new
data is corrupted on source volume.
The storage administrator can then use the Reverse FlashCopy feature by completing the
following steps:
1. Create a mapping from target to source (if not already created). Because FlashCopy
recognizes that the target volume of this new mapping is a source in another mapping, it
does not create another bitmap table. It uses the existing bitmap table instead, with its
updated bits.
2. Start the new mapping. Because of the existing bitmap table, only the modified grains are
copied.
After the restoration is complete, at the “Restored State” time, the source volume data is the same as it was before the Corruption Date. The copy can resume with the restored data (Date of Copy2) and, for example, data on the source volume can be modified (grain “D” is changed to grain “H” in the example below). In this last case, because grain “D” was already copied, it is not copied again to the target volume.
Consistency groups are reversed by creating a set of reverse FlashCopy mappings and
adding them to a new reverse consistency group. Consistency groups cannot contain more
than one FlashCopy mapping with the same target volume.
This method provides an exact number of bytes because image mode volumes might not line
up one-to-one on other measurement unit boundaries. Example 10-1 shows the size of the
ITSO-RS-TST volume. The ITSO-TST01 volume is then created, which specifies the same size.
Example 10-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_2145:ITSO-SV1:superuser>lsvdisk -bytes ITSO-RS-TST
id 42
name ITSO-RS-TST
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 21474836480
type striped
formatted no
formatting yes
mdisk_id
mdisk_name
FC_id
......
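The matching target volume can then be created with the same size in bytes. The following is a sketch only (the pool, I/O group, and size are taken from the output above; verify the syntax for your code level):
IBM_2145:ITSO-SV1:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 21474836480 -unit b -name ITSO-TST01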
Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.
Start: When all of the FlashCopy mappings in a consistency group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly concerning I/Os that are directed at the volumes by running the startfcmap or startfcconsistgrp command.
Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.
Copy complete: After all of the source data is copied to the target and there are no dependent mappings, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.
With this configuration, use a copyrate equal to 0 only. In this state, the virtual capacity of the
target volume is identical to the capacity of the source volume, but the real capacity (the one
used on the storage system) is lower, as shown on Figure 10-15. The real size of the target
volume increases with writes that are performed on the source volume, on grains that were not already copied. Eventually, if the entire source volume is written (which is unlikely), the real capacity of the target volume becomes identical to that of the source volume.
Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.
10.1.16 Serialization of I/O by FlashCopy
In general, the FlashCopy function in the IBM Spectrum Virtualize introduces no explicit
serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and
target volumes.
However, a lock exists for each grain, and this lock can be in shared or exclusive mode. For multiple targets, a common lock is shared by the mappings that are derived from a particular source volume. The lock is used in the following modes under the following conditions:
The lock is held in shared mode during a read from the target volume, which touches a
grain that was not copied from the source.
The lock is held in exclusive mode while a grain is being copied from the source to the
target.
If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in shared or
exclusive mode must wait for it to be freed.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O group becomes inaccessible.
FlashCopy continues with a single copy of the FlashCopy bitmap that is stored as non-volatile
in the remaining node in the source I/O group. The system metadata is updated to indicate
that the missing node no longer holds a current bitmap. When the failing node recovers or a
replacement node is added to the I/O group, the bitmap redundancy is restored.
Because the storage area network (SAN) that links IBM Spectrum Virtualize nodes to each
other and to the MDisks is made up of many independent links, it is possible for a subset of
the nodes to be temporarily isolated from several of the MDisks. When this situation occurs,
the MDisks are said to be Path Offline on certain nodes.
Other nodes: Other nodes might see the MDisks as Online because their connection to
the MDisks still exists.
Other configuration events complete synchronously, and no informational events are logged for them. The following state transitions are logged as informational events:
PREPARE_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the prepared state as a result of a user request to prepare. The user can now start (or
stop) the mapping or consistency group.
COPY_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the idle_or_copied state when it was in the copying or stopping state. This state
transition indicates that the target disk now contains a complete copy and no longer
depends on the source.
STOP_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the stopped state as a result of a user request to stop. It is logged after the automatic copy
process completes. This state transition includes mappings where no copying needed to
be performed. This state transition differs from the event that is logged when a mapping or
group enters the stopped state as a result of an I/O error.
For example, you can perform an MM copy to duplicate data from Site_A to Site_B, and then
perform a daily FlashCopy to back up the data to another location.
Table 10-7 on page 583 lists the supported combinations of FlashCopy and Remote Copy
(RC). In the table, RC refers to MM and GM.
Table 10-7 FlashCopy and remote copy interaction
Component RC primary site RC secondary site
The IBM FlashCopy limitations for IBM Spectrum Virtualize V8.4 are listed in Table 10-8.
Although these presets meet most FlashCopy requirements, they do not support all possible
FlashCopy options. If more specialized options are required that are not supported by the
presets, the options must be performed by using CLI commands.
This section describes the preset options and their use cases.
Snapshot
This preset creates a PiT copy that tracks only the changes that are made, either at the
source or target volumes. The snapshot is not intended to be an independent copy. Instead,
the copy is used to maintain a view of the production data at the time that the snapshot is
created. Therefore, the snapshot holds only the data from regions of the production volume
that changed since the snapshot was created. Because the snapshot preset uses thin
provisioning, only the capacity that is required for the changes is used.
The snapshot preset also sets the following properties: Cleaning rate: No; Primary copy source pool: Target pool.
Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced. Therefore, many Snapshot copies can be used
in the environment.
Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.
For example, in Figure 10-16, the source volume user can still work on the original data
volume (as with a production volume) and the target volumes can be accessed instantly.
Users of target volumes can modify the content and perform “what-if” tests; for example,
versioning. Storage administrators do not need to perform full copies of a volume for
temporary tests. However, the target volumes must remain linked to the source. When the link
is broken (FlashCopy mapping stopped or deleted), the target volumes become unusable.
Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.
Backup
The backup preset creates an incremental PiT replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.
Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, such as because of loss of the underlying physical controller. The user
plans to periodically update the secondary copy, and does not want to suffer from the
resource demands of creating a copy each time.
Incremental FlashCopy times are faster than a full copy, which helps to reduce the window in which the new backup is not yet fully effective. If the source is thin-provisioned, the target is also thin-provisioned when the option to automatically create the target is used.
Another use case, which the preset name does not suggest, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without affecting the source volume’s performance.
Note: IBM Spectrum Virtualize in general and FlashCopy in particular are not backup
solutions on their own. For example, a FlashCopy backup preset does not schedule a
regular copy of your volumes. Instead, it overwrites the mapping target and does not make
a copy of it before starting a new “backup” operation. It is the user’s responsibility to handle
the target volumes (for example, saving them to tapes) and the scheduling of the
FlashCopy operations.
When the IBM Spectrum Virtualize GUI is used, FlashCopy components can be seen in
different windows. Three windows are related to FlashCopy and are available by using the
Copy Services menu, as shown in Figure 10-17 on page 587.
Figure 10-17 Copy Services menu
The FlashCopy window is accessible by clicking Copy Services → FlashCopy. It displays all
of the volumes that are defined in the system. Volumes that are part of a FlashCopy mapping
appear, as shown in Figure 10-18. By clicking a source volume, you can display the list of its
target volumes.
Figure 10-18 Source and target volumes displayed in the FlashCopy window
All volumes are listed in this window, and target volumes appear twice (as a regular volume
and as a target volume in a FlashCopy mapping).
10.2.3 Creating a FlashCopy mapping
This section describes creating FlashCopy mappings for volumes and their targets.
Open the FlashCopy window from the Copy Services menu, as shown in Figure 10-21.
Select the volume for which you want to create the FlashCopy mapping. Right-click the
volume or click the Actions menu.
Multiple FlashCopy mappings: To create multiple FlashCopy mappings at the same time,
select multiple volumes by pressing and holding Ctrl and clicking the entries that you want.
Depending on whether you created the target volumes for your FlashCopy mappings or you
want the system to create the target volumes for you, the following options are available:
If you created the target volumes, see “Creating a FlashCopy mapping with existing target
Volumes” on page 590.
If you want the system to create the target volumes for you, see “Creating a FlashCopy
mapping and target volumes” on page 595.
Attention: When starting a FlashCopy mapping from a source volume to a target volume, data that is on the target is overwritten. The system does not prevent you from selecting a target volume that is mapped to a host and contains data.
1. Right-click the volume that you want to create a FlashCopy mapping for, and select
Advanced FlashCopy → Use Existing Target Volumes, as shown in Figure 10-22.
The Create FlashCopy Mapping window opens, as shown in Figure 10-23 on page 591.
Figure 10-23 Selecting source and target for a FlashCopy mapping
In this window, you create the mapping between the selected source volume and the target volume that you want to map it to. Then, click Add.
Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.
2. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-25 on page 593. For
more information about the presets, see 10.2.1, “FlashCopy presets” on page 584:
– Snapshot: Creates a PiT snapshot copy of the source volume.
– Clone: Creates a PiT replica of the source volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target volumes.
Note: If you want to create a simple Snapshot of a volume, you likely want the target
volume to be defined as thin-provisioned to save space on your system. If you use an
existing target, ensure it is thin-provisioned first. The use of the Snapshot preset does
not make the system check whether the target volume is thin-provisioned.
Figure 10-25 FlashCopy mapping preset selection
When selecting a preset, some options, such as Background Copy Rate, Incremental, and
Delete mapping after completion, are automatically changed or selected. You can still
change the automatic settings, but this action is not recommended for the following
reasons:
– If you select the Backup preset but then clear Incremental or select Delete mapping
after completion, you lose the benefits of the incremental FlashCopy and must copy
the entire source volume each time you start the mapping.
– If you select the Snapshot preset but then change the Background Copy Rate, you end up with a full copy of your source volume.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 10-1 on page 566 or Table 10-5 on page 575.
When your FlashCopy mapping setup is ready, click Next.
The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and Consistency Groups.
Note: Creating a FlashCopy mapping does not automatically start any copy. You must
manually start the mapping.
Creating a FlashCopy mapping and target volumes
Complete the following steps to create target volumes for FlashCopy mapping:
1. Right-click the volume that you want to create a FlashCopy mapping for and select
Advanced FlashCopy → Create New Target Volumes, as shown in Figure 10-27.
2. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-28.
For more information about the presets, see 10.2.1, “FlashCopy presets” on page 584.
– Snapshot: Creates a PiT snapshot copy of the source volume.
– Clone: Creates a PiT replica of the source volume.
Note: If you want to create a simple Snapshot of a volume, you likely want the target
volume to be defined as thin-provisioned to save space on your system. If you use an
existing target, ensure it is thin-provisioned first. The use of the Snapshot preset does
not make the system check whether the target volume is thin-provisioned.
When selecting a preset, some options, such as Background Copy Rate, Incremental, and Delete mapping after completion, are automatically changed or selected. You can still
change the automatic settings, but this action is not recommended for the following
reasons:
– If you select the Backup preset but then clear Incremental or select Delete mapping
after completion, you lose the benefits of the incremental FlashCopy. You must copy
the entire source volume each time you start the mapping.
– If you select the Snapshot preset but then change the Background Copy Rate, you
have a full copy of your source volume.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 10-1 on page 566 or Table 10-5 on page 575.
When your FlashCopy mapping setup is ready, click Next.
3. You can choose whether to add the mappings to a consistency group, as shown in
Figure 10-29.
If you want to include this FlashCopy mapping in a consistency group, select Yes, add the
mappings to a consistency group, and select the consistency group from the drop-down
menu.
6. The system prompts the user how to define the new volumes that are created, as shown in
Figure 10-31 on page 599. It can be None, Thin-provisioned, or Inherit from source
volume. If Inherit from source volume is selected, the system checks the type of the
source volume and then creates a target of the same type. Click Finish.
Figure 10-31 Selecting the type of volumes for the created targets
Note: If you selected multiple source volumes to create FlashCopy mappings, selecting
Inherit properties from source Volume applies to each newly created target volume. For
example, if you selected a compressed volume and a generic volume as sources for the
new FlashCopy mappings, the system creates a compressed target and a generic target.
The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and consistency groups.
3. You can select multiple volumes at a time, which creates as many snapshots
automatically. The system then automatically groups the FlashCopy mappings in a new
consistency group, as shown in Figure 10-33 on page 601.
Figure 10-33 Selecting single-click snapshot creation and start
3. You can select multiple volumes at a time, which creates as many snapshots
automatically. The system then automatically groups the FlashCopy mappings in a new
consistency group, as shown in Figure 10-35 on page 603.
Figure 10-35 Selecting single-click clone creation and start
3. You can select multiple volumes at a time, which creates as many snapshots
automatically. The system then automatically groups the FlashCopy mappings in a new
consistency group, as shown Figure 10-37.
Figure 10-39 Entering the name and ownership group of a new consistency group
Consistency group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The consistency group name can be 1 - 63 characters.
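The equivalent CLI action is a single command. A minimal sketch with a hypothetical group name (verify the syntax for your code level):
mkfcconsistgrp -name FCCG1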
3. Select a volume in the source volume column by using the drop-down menu. Then, select
a volume in the target volume column by using the drop-down menu. Click Add, as shown
in Figure 10-41.
Figure 10-41 Selecting source and target volumes for the FlashCopy mapping
Repeat this step to create other mappings. To remove a mapping that was created, click the remove icon next to it. Click Next.
Important: The source and target volumes must be of equal size. Therefore, only the
targets with the suitable size are shown for a source volume.
Volumes that are target volumes in another FlashCopy mapping cannot be target of a
new FlashCopy mapping. Therefore, they do not appear in the list.
When selecting a preset, some options, such as Background Copy Rate, Incremental, and
Delete mapping after completion, are automatically changed or selected. You can still
change the automatic settings, but this is not recommended for the following reasons:
– If you select the Backup preset but then clear Incremental or select Delete mapping
after completion, you lose the benefits of the incremental FlashCopy. You must copy
the entire source volume each time you start the mapping.
– If you select the Snapshot preset but then change the Background Copy Rate, you
have a full copy of your source volume.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 10-1 on page 566 or Table 10-5 on page 575.
5. When your FlashCopy mapping setup is ready, click Finish.
10.2.9 Showing related volumes
To show related volumes for a specific FlashCopy mapping, complete the following steps:
1. Open the Copy Services FlashCopy Mappings window.
2. Right-click a FlashCopy mapping and select Show Related Volumes, as shown in
Figure 10-43. Alternatively, depending on which Copy Services window you are in, you can right-click the mappings and select Show Related Volumes.
Figure 10-43 Showing related volumes for a mapping, a consistency group, or another volume
3. In the Related Volumes window, you can see the related mapping for a volume, as shown
in Figure 10-44. If you click one of these volumes, you can see its properties.
3. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for the FlashCopy mappings selection by using the drop-down menu, as shown in
Figure 10-46.
Figure 10-46 Selecting the consistency group to which to move the FlashCopy mapping
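The same move can be made from the CLI by assigning the mapping to the group. A minimal sketch with hypothetical names (verify the syntax for your code level):
chfcmap -consistgrp FCCG1 fcmap0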
10.2.11 Removing FlashCopy mappings from consistency groups
To remove one or multiple FlashCopy mappings from a consistency group, complete the
following steps:
1. Open the FlashCopy Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to remove and select Remove from
Consistency Group, as shown in Figure 10-47.
Note: Only FlashCopy mappings that belong to a consistency group can be removed.
3. In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 10-48.
Note: It is not possible to select multiple FlashCopy mappings to edit their properties
concurrently.
3. In the Edit FlashCopy Mapping window, you can modify the background copy rate and the
cleaning rate for a selected FlashCopy mapping, as shown in Figure 10-50.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 10-1 on page 566 or Table 10-5 on page 575.
4. Click Save to confirm your changes.
3. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to each FlashCopy mapping and click Rename, as shown in Figure 10-52.
FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
3. Enter the new name that you want to assign to the consistency group and click Rename,
as shown in Figure 10-54.
Note: It is not possible to select multiple consistency groups to edit their names all at
the same time.
Consistency group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.
10.2.14 Deleting FlashCopy mappings
To delete one or multiple FlashCopy mappings, complete the following steps:
1. Open the FlashCopy Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to delete and select Delete Mapping,
as shown in Figure 10-55.
3. The Delete FlashCopy Mapping window opens, as shown in Figure 10-56. In the Verify
the number of FlashCopy mappings that you are deleting field, enter the number of mappings that you want to delete. This verification was added to help avoid deleting the wrong mappings.
Important: Deleting a consistency group does not delete the FlashCopy mappings that it
contains.
10.2.16 Starting FlashCopy mappings
Important: Only FlashCopy mappings that do not belong to a consistency group can be
started individually. If FlashCopy mappings are part of a consistency group, they can be
started only all together by using the consistency group start command.
It is the start command that defines the “PiT”. It is the moment that is used as a reference
(T0) for all subsequent operations on the source and the target volumes. To start one or
multiple FlashCopy mappings that do not belong to a consistency group, complete the
following steps:
1. Open the FlashCopy Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to start and select Start, as shown in
Figure 10-59.
You can check the FlashCopy state and the progress of the mappings in the Status and
Progress columns of the table, as shown in Figure 10-60.
FlashCopy Snapshots depend on the source volume and should be in a “copying” state if the
mapping is started.
For more information about FlashCopy starting operations and states, see 10.1.10, “Starting
FlashCopy mappings and consistency groups” on page 568.
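The same information is available from the CLI; for example, lsfcmap reports the status and progress of each mapping. The following is a sketch only (the mapping name is hypothetical and the output is abbreviated):
IBM_2145:ITSO-SV1:superuser>lsfcmap fcmap0
...
status copying
progress 47
...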
Important: Only FlashCopy mappings that do not belong to a consistency group can be
stopped individually. If FlashCopy mappings are part of a consistency group, they can be
stopped all together only by using the consistency group stop command.
The only reason to stop a FlashCopy mapping is for incremental FlashCopy. When the first
occurrence of an incremental FlashCopy is started, a full copy of the source volume is made.
When 100% of the source volume is copied, the FlashCopy mapping does not stop
automatically and a manual stop can be performed. The target volume is available for read
and write operations, during the copy, and after the mapping is stopped.
In any other case, stopping a FlashCopy mapping interrupts the copy and resets the bitmap
table. Because only part of the data from the source volume was copied, the copied grains
might be meaningless without the remaining grains. Therefore, the target volumes are placed
offline and are unusable, as shown in Figure 10-61.
Figure 10-61 Showing the target volumes state and FlashCopy mappings status
To stop one or multiple FlashCopy mappings that do not belong to a consistency group,
complete the following steps:
1. Open the FlashCopy Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to stop and select Stop, as shown in
Figure 10-62 on page 619.
Figure 10-62 Stopping FlashCopy mappings
Note: FlashCopy mappings can be in a stopping state for some time if you created
dependencies between several targets. It is in a cleaning mode. For more information
about dependencies and stopping process, see “Stopping process in a multiple target
FlashCopy: Cleaning Mode” on page 574.
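For completeness, stopping can also be performed from the CLI. This is a minimal sketch
with the same hypothetical object names that were used previously:
stopfcmap fcmap0
stopfcconsistgrp fccstgrp0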
For every FlashCopy mapping that is created on an IBM Spectrum Virtualize system, a
bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of
memory for a minimum of 10 TiB of FlashCopy source volume capacity and 5 TiB of
incremental FlashCopy source volume capacity.
Depending on the grain size of the FlashCopy mapping, the memory capacity usage differs.
1 MiB of bitmap memory provides the following volume capacity for the specified I/O group:
– For clones and snapshots with a 256 KiB grain size, 2 TiB of total FlashCopy source
volume capacity
– For clones and snapshots with a 64 KiB grain size, 512 GiB of total FlashCopy source
volume capacity
– For incremental FlashCopy with a 256 KiB grain size, 1 TiB of total incremental FlashCopy
source volume capacity
– For incremental FlashCopy with a 64 KiB grain size, 256 GiB of total incremental
FlashCopy source volume capacity
Note: The amount of capacity that the allocated bitmap memory covers might increase based
on settings, such as grain size and strip size. In this context, FlashCopy includes the
FlashCopy function, Global Mirror with Change Volumes (GMCV), and active-active
(HyperSwap) relationships.
For multiple FlashCopy targets, you must consider the number of mappings. For example, for
a mapping with a grain size of 256 KiB, 8 KiB of memory allows one mapping between a
16 GiB source volume and a 16 GiB target volume. Alternatively, for a mapping with a 256 KiB
grain size, 8 KiB of memory allows two mappings between one 8 GiB source volume and two
8 GiB target volumes.
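As a consistency check of these numbers, the covered capacity scales linearly with the
bitmap memory at the stated ratio of 1 MiB of memory per 2 TiB of source capacity (256 KiB
grain size):
8 KiB / 1 MiB = 1/128
2 TiB / 128 = 16 GiB of source volume capacity for 8 KiB of bitmap memory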
When creating a FlashCopy mapping, if you specify an I/O group other than the I/O group of
the source volume, the memory accounting goes toward the specified I/O group, not toward
the I/O group of the source volume.
When creating FlashCopy relationships or mirrored volumes, more bitmap space is allocated
automatically by the system, if required.
For FlashCopy mappings, only one I/O group uses bitmap space. By default, the I/O group of
the source volume is used.
When you create a reverse mapping, such as when you run a restore operation from a
snapshot to its source volume, a bitmap is created.
When you configure change volumes for use with GM, two internal FlashCopy mappings are
created for each change volume.
You can modify the resource allocation for each I/O group of an IBM Spectrum Virtualize
system by selecting Settings → System and clicking the Resources menu, as shown in
Figure 10-63 on page 621. At the time of writing, this GUI option is not available for other IBM
Spectrum Virtualize based systems, so resource allocation can be adjusted by running the
chiogrp command. For more information about the command’s syntax, see IBM
Documentation.
Figure 10-63 Modifying resources allocation per I/O group
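As a sketch, the FlashCopy bitmap memory for an I/O group can be changed and verified
from the CLI as shown in the following lines. The feature keyword flash and the size value of
40 (MiB) are illustrative; verify the exact syntax of chiogrp in IBM Documentation for your
code level:
chiogrp -feature flash -size 40 io_grp0
lsiogrp io_grp0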
TCT can help to solve business needs that require duplication of data of your source volume.
Volumes can remain online and active while you create snapshot copies of the data sets. TCT
operates below the host OS and its cache. Therefore, the copy is not apparent to the host.
IBM Spectrum Virtualize features built-in software algorithms that allow the TCT function to
securely interact; for example, with Information Dispersal Algorithms (IDAs), which is
essentially the interface to IBM Cloud Object Storage.
Object Storage is a general term that refers to the entity in which IBM Cloud Object Storage
organizes, manages, and stores units of data. To transform these snapshots of traditional
data into Object Storage, the storage nodes and the IDA import the data and transform it into
several metadata and slices. The object can be read by using a subset of those slices. When
an Object Storage entity is stored as IBM Cloud Object Storage, the objects must be
manipulated or managed as a whole unit. Therefore, objects cannot be accessed or updated
partially.
For more information about the IBM Cloud Object Storage portfolio, see this web page.
The use of TCT can help businesses to manipulate data as shown in the following examples:
Creating a consistent snapshot of dynamically changing data
Creating a consistent snapshot of production data to facilitate data movement or migration
between systems that are running at different locations
Creating a snapshot of production data sets for application development and testing
Creating a snapshot of production data sets for quality assurance
Using secure data tiering to off-premises cloud providers
From a technical standpoint, ensure that you evaluate the network capacity and bandwidth
requirements to support your data migration to off-premises infrastructure. To maximize
throughput, match the amount of data that must be transmitted to the cloud with your
available network capacity.
From a security standpoint, ensure that your on-premises or off-premises cloud infrastructure
supports your requirements in terms of methods and level of encryption.
Regardless of your business needs, TCT within the IBM Spectrum Virtualize can provide
opportunities to manage the exponential data growth and to manipulate data at low cost.
Today, many CSPs offer several storage-as-a-service solutions, such as content repository,
backup, and archive. Combining all of these services, your IBM Spectrum Virtualize system
can help you solve many challenges that are related to rapid data growth, scalability, and
manageability at attractive costs.
When TCT is applied as your backup strategy, IBM Spectrum Virtualize uses the same
FlashCopy functions to produce a PiT snapshot of an entire volume or set of volumes.
To ensure the integrity of the snapshot, it might be necessary to flush the host OS and
application cache of any outstanding reads or writes before the snapshot is performed. Failing
to flush the host OS and application cache can produce inconsistent and useless data.
Many OSs and applications provide mechanisms to stop I/O operations and ensure that all
data is flushed from the host cache. If these mechanisms are available, they can be used in
combination with snapshot operations. When these mechanisms are not available, it might be
necessary to flush the cache manually by quiescing the application and unmounting the file
system or logical drives.
When choosing cloud object storage as a backup solution, be aware that the object storage
must be managed as a whole. Backup and restore of individual files, folders, and partitions
are not possible.
To interact with external CSPs or a private cloud, IBM Spectrum Virtualize must be configured
with the correct architecture and properties of the specific cloud provider. Conversely, CSPs
offer attractive prices for Object Storage in the cloud and deliver an easy-to-use interface.
Normally, cloud providers offer low-cost prices for Object Storage space, and charges are
applied only to the outbound traffic from the cloud.
TCT running on IBM Spectrum Virtualize queries for Object Storage stored in a cloud
infrastructure. It enables users to restore the objects into a new volume or set of volumes.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
Note: Always consider the bandwidth characteristics and network capabilities when
choosing to use TCT.
Restoring individual files by using TCT is not possible. Object Storage is unlike a file or a
block; therefore, Object Storage must be managed as a whole unit, not partially. Cloud Object
Storage is accessible by using an HTTP-based REST API.
Using your IBM Spectrum Virtualize management GUI, click Settings → Network → DNS
and insert your DNS server IPv4 or IPv6 address. The DNS name can be anything that you
want, and it is used only as a reference. Click Save after you complete the choices, as shown
in Figure 10-64.
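The DNS server can also be defined from the CLI. The following line is a sketch with a
hypothetical name and address; the mkdnsserver command accepts an IPv4 or IPv6
address:
mkdnsserver -name dns1 -ip 10.0.0.53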
Note: It is important to implement encryption before enabling cloud connectivity.
Encryption protects your data from attacks during the transfer to the external cloud
service. Because the HTTP protocol is used to connect to the cloud infrastructure,
transactions are likely to travel over the internet. For the purposes of this writing, our
system does not have encryption enabled.
5. The cloud credentials can be viewed and updated at any time by using the function icons
on the left side of the GUI and clicking Settings → System → Transparent Cloud Tiering.
From this window, you can also verify the status, the data usage statistics, and the upload
and download bandwidth limits that are set to support this function.
In the account information window, you can view your cloud account information. This
window also enables you to remove the account.
An example of visualizing your cloud account information is shown in Figure 10-69.
Any volume can be added to the cloud volumes. However, snapshots work only for volumes
that are not related to any other copy service.
2. A new window opens, and you can use the GUI to select one or more volumes that you
need to enable a cloud snapshot or you can add volumes to the list, as shown in
Figure 10-71.
Figure 10-72 Add Volumes to Cloud
4. The IBM Spectrum Virtualize GUI provides two options for you to select. If the first option
is selected, the system decides which type of snapshot to create based on the objects that
already exist for each selected volume. If a full copy (full snapshot) of a volume was
already created, the system makes an incremental copy of the volume.
The second option creates a full snapshot of one or more selected volumes. You can
select the second option for a first occurrence of a snapshot and click Finish, as shown in
Figure 10-73. You can also select the second option, even if another full copy of the
volume exists.
Figure 10-73 Selecting whether a full copy is made or whether the system decides
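Assuming that TCT is enabled for the volume, the same cloud snapshot can be triggered
from the CLI with the backupvolume command. The volume name is a placeholder, and the
-full flag (which forces a full rather than incremental copy) should be verified against IBM
Documentation for your code level:
backupvolume -full myvolume0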
5. Click the Actions menu in the Cloud Volumes window to create and manage snapshots.
Also, you can use the menu to cancel, disable, and restore snapshots to volumes, as
shown in Figure 10-75.
“Managing” a snapshot means deleting one or multiple versions. The list of PiT copies
appears and provides details about their status, type, and snapshot date, as shown in
Figure 10-76 on page 631.
Figure 10-76 Deleting versions of a volume’s snapshots
From this window, an administrator can delete old snapshots (old PiT copies) if they are no
longer needed. The most recent copy cannot be deleted. If you want to delete the most recent
copy, you must first disable Cloud Tiering for the specified volume.
If the cloud account is shared among systems, IBM Spectrum Virtualize queries the
snapshots that are stored in the cloud, and enables you to restore to a new volume. To restore
a volume’s snapshot, complete the following steps:
1. Open the Cloud Volumes window.
2. Right-click a volume and select Restore, as shown in Figure 10-77.
If the snapshot version that you selected has later generations (more recent Snapshot
dates), the newer copies are removed from the cloud.
4. The IBM Spectrum Virtualize GUI provides two options to restore the snapshot from cloud.
You can restore the snapshot from cloud directly to the selected volume, or create a
volume to restore the data on, as shown in Figure 10-79. Make a selection and click Next.
Note: Restoring a snapshot on the volume overwrites the data on the volume. The
volume is taken offline (no read or write access) and the data from the PiT copy of the
volume is written. The volume returns online when all data is restored from the
cloud.
5. If you selected the Restore to a new Volume option, you must enter the following
information for the volume to be created with the snapshot data, as shown in Figure 10-80:
– Name
– Storage Pool
– Capacity Savings (None, Compressed or Thin-provisioned)
– I/O group
You are not asked to enter the volume size because the new volume’s size is identical to
the snapshot copy size.
Enter the settings for the new volume and click Next.
If you chose to restore the data from the cloud to a new volume, the new volume appears
immediately in the volumes window. However, it is taken offline until all the data from the
snapshot is written. The new volume is independent. It is not defined as a target in a
FlashCopy mapping with the selected volume, for example. It also is not mapped to a host.
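From the CLI, a cloud snapshot restore can be sketched with the restorevolume command.
The generation number and volume name that are shown are hypothetical, and the exact
parameters should be verified in IBM Documentation before use:
restorevolume -generation 1 myvolume0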
Volume mirroring is provided by a specific volume mirroring function in the I/O stack. It cannot
be manipulated like a FlashCopy mapping or other types of copy volumes. However, this
feature provides migration functions, which can be obtained by splitting the mirrored copy
from the source or by using the migrate to function. Volume mirroring cannot control back-end
storage mirroring or replication.
With volume mirroring, host I/O completes when both copies are written. This feature is
enhanced with a tunable latency tolerance, which provides an option to give preference to
host latency at the cost of temporarily losing redundancy between the two copies. This
tunable value is set to either Latency or Redundancy.
The Latency tuning option, which is set by running the chvdisk -mirrorwritepriority
latency command, is the default. It prioritizes host I/O latency, which yields a preference to
host I/O over availability. However, you might need to give preference to redundancy in your
environment when availability is more important than I/O response time. Run the chvdisk
-mirrorwritepriority redundancy command to set the redundancy option.
Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.
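As a quick reference, applying the two settings from the preceding paragraphs to a
hypothetical volume named myvolume0 looks as follows:
chvdisk -mirrorwritepriority latency myvolume0
chvdisk -mirrorwritepriority redundancy myvolume0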
Migration: Although these migration methods do not disrupt access, a brief outage does
occur to install the host drivers for your IBM Spectrum Virtualize system if they are not yet
installed.
With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. The use of volume mirroring over volume
migration is beneficial because with volume mirroring, storage pools do not need to have the
same extent size as is the case with volume migration.
Starting with Version 7.3 and the introduction of the dual-layer cache architecture, mirrored
volume performance was improved. The lower cache is beneath the volume mirroring layer,
which means that both copies have their own cache. This approach helps when you have
copies of different types, for example, generic and compressed, because both copies use
their independent cache and perform their own read prefetch. Destaging of the cache can be
done independently for each copy, so one copy does not affect the performance of a second
copy.
Also, because the IBM Spectrum Virtualize destage algorithm is MDisk aware, it can tune or
adapt the destaging process, depending on MDisk type and usage, for each copy
independently.
For more information about Volume Mirroring, see Chapter 6, “Volumes” on page 299.
IBM Spectrum Virtualize provides a single point of control when RC is enabled in your cluster
(regardless of the disk subsystems that are used as underlying storage, if those disk
subsystems are supported).
Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems. Use intercluster MM/GM when possible. For mirroring volumes in the same
system, it is better to use volume mirroring or the FlashCopy feature.
A typical application of this function is to set up a dual-site solution that uses two
IBM Spectrum Virtualize systems. The first site is considered the primary site or production
site, and the second site is considered the backup site or failover site. The failover site is
activated when a failure at the first site is detected.
Table 10-10 on page 637 lists the amount of heartbeat traffic (in megabits per second (Mbps))
that is generated by various sizes of clustered systems.
Table 10-10 Intersystem heartbeat traffic in Mbps
IBM Spectrum Virtualize    IBM Spectrum Virtualize system 2
system 1                   2 nodes     4 nodes     6 nodes     8 nodes
2 nodes                    5           6           6           6
4 nodes                    6           10          11          12
6 nodes                    6           11          16          17
8 nodes                    6           12          17          21
10.6.1 IBM SAN Volume Controller and IBM FlashSystem system layers
An IBM Spectrum Virtualize based system can be in one of two layers: the replication layer or
the storage layer. The layer that the system is in affects how the system interacts with other
IBM Spectrum Virtualize based systems. IBM SAN Volume Controller (SVC) is always set to
the replication layer. This parameter cannot be changed.
In the storage layer, an IBM FlashSystem system has the following characteristics and
requirements:
The system can perform MM and GM replication with other storage layer systems.
The system can provide external storage for replication layer systems or SVC.
The system cannot use a storage layer system as external storage.
In the replication layer, an SVC or an IBM FlashSystem system has the following
characteristics and requirements:
Can perform MM and GM replication with other replication layer systems.
Cannot provide external storage for a replication layer system.
Can use a storage layer system as external storage.
An IBM FlashSystem family system is in the storage layer by default, but the layer can be
changed. For example, you might want to change an IBM FlashSystem 7200 to the replication
layer if you want to virtualize other IBM FlashSystem systems or replicate to an SVC system.
Note: Before you change the system layer, the following conditions must be met on the
system at the time of the layer change:
No other IBM Spectrum Virtualize based system can exist as a back-end or host entity.
No system partnerships can exist.
No other IBM Spectrum Virtualize based system can be visible on the SAN fabric.
In your IBM FlashSystem system, run the lssystem command to check the current system
layer, as shown in Example 10-2.
Example 10-2 Output from the lssystem command showing the system layer
IBM_IBM FlashSystem:GLTLoaner:superuser>lssystem
id 000002042160049E
name GLTLoaner
...
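If the conditions that are described in the following note are met, the layer can be changed
with the chsystem command and verified again by checking the layer field in the lssystem
output. This is a minimal sketch:
chsystem -layer replication
lssystem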
Note: Consider the following rules for creating remote partnerships between the SVC and
IBM FlashSystem systems:
An SVC is always in the replication layer.
By default, the IBM FlashSystem systems are in the storage layer, but can be changed
to the replication layer.
A system can form partnerships only with systems in the same layer.
Starting in Version 6.4, any IBM Spectrum Virtualize based system in the replication
layer can virtualize an IBM FlashSystem system in the storage layer.
Note: For more information about restrictions and limitations of native IP replication, see
10.8.2, “IP partnership limitations” on page 674.
Figure 10-82 Multiple-system mirroring configuration example
Figure 10-83 shows four systems in a star topology, with System A at the center. System A
can be a central DR site for the three other locations.
By using a star topology, you can migrate applications by using a process, such as the one
described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
Figure 10-85 shows an example of an IBM Spectrum Virtualize fully connected mesh
topology in which every system has a partnership to each of the three other systems
(A → B, A → C, A → D, B → C, B → D, and C → D). This topology enables volumes to be
replicated between any pair of systems; for example, A → B, A → C, and B → C.
Although systems can have up to three partnerships, volumes can be part of only one RC
relationship, for example, A → B.
System partnership intermix: All these topologies are valid for the intermix of SVC with
IBM FlashSystem if the IBM FlashSystem system is set to the replication layer.
IBM Spectrum Virtualize V8.3.1 introduced a three-site replication solution option that was
expanded in Version 8.4. The solution enables active-active implementations while replicating
to a third site. For more information, see 1.16.2, “Business continuity with three-site
replication” on page 65, or for a detailed overview and configuration steps, see IBM Spectrum
Virtualize HyperSwap SAN Implementation and Design Best Practices, REDP-5597.
An application that performs many database updates is designed with the concept of
dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing the order of writes, or performing them in
a different order than the application intended, can undermine the application’s algorithms
and can lead to problems, such as detected or undetected data corruption.
The IBM Spectrum Virtualize MM and GM implementations keep a consistent image at the
secondary site. The GM implementation uses complex algorithms that identify sets of data
and number those sets of data in sequence. Then, the data is applied at the secondary site in
this same defined sequence.
For more information about dependent writes, see 10.1.13, “FlashCopy and image mode
volumes” on page 577.
Therefore, these commands can be issued simultaneously for all MM/GM relationships that
are defined within that consistency group, or to a single MM/GM relationship that is not part of
an RC consistency group. For example, when a startrcconsistgrp command is issued to the
consistency group, all of the MM/GM relationships in the consistency group are started at the
same time.
Certain uses of MM/GM require the manipulation of more than one relationship. RC
consistency groups can group relationships so that they are manipulated in unison.
Although consistency groups can be used to manipulate sets of relationships that do not need
to satisfy these strict rules, this manipulation can lead to unwanted side effects. The rules
behind a consistency group mean that certain configuration commands are prohibited. These
configuration commands are not prohibited if the relationship is not part of a consistency
group.
For example, consider the case of two applications that are independent, yet they are placed
into a single consistency group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is progressing,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.
If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes, even though it is safe in this
case. The MM/GM policy is to refuse access to the entire consistency group if any part of it is
inconsistent. Stand-alone relationships and consistency groups share a common
configuration and state models. All of the relationships in a non-empty consistency group
feature the same state as the consistency group.
For more information about intercluster communication between systems in an IP
partnership, see 10.8.6, “States of IP partnership” on page 678.
Zoning
At least two FC ports of every node of each system must communicate with each other to
create the partnership. Switch zoning is critical to facilitate intercluster communication.
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the systems is interrupted or lost, an event is logged (and the MM/GM relationships stop).
Alerts: You can configure the system to raise SNMP traps to the enterprise monitoring
system to alert on events that indicate an interruption in internode communication
occurred.
Intercluster links
All IBM Spectrum Virtualize nodes maintain a database of other devices that are visible on
the fabric. This database is updated as devices appear and disappear.
Devices that advertise themselves as SVC or IBM FlashSystem nodes are categorized
according to the system to which they belong. Nodes that belong to the same system
establish communication channels between themselves and exchange messages to
implement clustering and the functional protocols of IBM Spectrum Virtualize.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform an RC relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
Note: Run the chsystem command with -partnerfcportmask to dedicate several FC ports
only to system-to-system traffic to ensure that RC is not affected by other traffic, such as
host-to-node traffic or node-to-node traffic within the same system.
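A sketch of dedicating ports in this way is shown in the following lines. The mask is a binary
string that is read from right to left, where a 1 enables the corresponding FC port ID for
system-to-system traffic. The value shown is illustrative only and might need to be padded to
the full mask length that your code level expects:
chsystem -partnerfcportmask 1100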
With synchronous copies, host applications write to the master volume, but they do not
receive confirmation that the write operation completed until the data is written to the auxiliary
volume. This action ensures that both the volumes have identical data when the copy
completes. After the initial copy completes, the MM function always maintains a fully
synchronized copy of the source data at the target site.
Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your MM auxiliary
location.
Consistency groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy consistency groups.
IBM Spectrum Virtualize provides intracluster and intercluster MM, which are described next.
Two IBM Spectrum Virtualize systems must be defined in a partnership, which must be
performed on both systems to establish a fully functional MM partnership.
By using standard single-mode connections, the supported distance between two systems in
an MM partnership is 10 km (6.2 miles), although greater distances can be achieved by using
extenders. For extended distance solutions, contact your IBM representative.
Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.
10.6.6 Synchronous Remote Copy
MM is a fully synchronous RC technique that ensures that writes are committed at the master
and auxiliary volumes before write completion is acknowledged to the host, but only if writes
to the auxiliary volumes are possible.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, MM suspends writes to the
auxiliary volume and enables I/O to the master volume to continue to avoid affecting the
operation of the master volumes.
Figure 10-88 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional FC MM has distance limitations that
are based on your performance requirements. IBM Spectrum Virtualize does not support
more than 300 km (186.4 miles).
IBM Spectrum Virtualize supports the resynchronization of changed data so that write failures
that occur on the master or auxiliary volumes do not require a complete resynchronization of
the relationship.
Switching copy direction: The copy direction for an MM relationship can be switched so
that the auxiliary volume becomes the master, and the master volume becomes the
auxiliary, which is similar to the FlashCopy restore option. However, although the
FlashCopy target volume can operate in read/write mode, the target volume of the started
RC is always in read-only mode.
While the MM relationship is active, the auxiliary volume is not accessible for host application
write I/O. The IBM Spectrum Virtualize based systems enable read-only access to the
auxiliary volume when it contains a consistent image. They also allow boot time OS discovery
to complete without an error so that any hosts at the secondary site can be ready to start the
applications with a minimal delay if required.
For example, many OSs must read logical block address (LBA) zero to configure a logical unit
(LU). Although read access is allowed at the auxiliary in practice, the data on the auxiliary
volumes cannot be read by a host because most OSs write a “dirty bit” to the file system
when it is mounted. Because this write operation is not allowed on the auxiliary volume, the
volume cannot be mounted.
This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the MM
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
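For example, enabling access on a hypothetical stand-alone relationship named rcrel0 looks
like this from the CLI:
stoprcrelationship -access rcrel0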
For example, the MM requirement to enable the auxiliary copy for access differentiates it from
third-party mirroring software on the host, which aims to emulate a single, reliable disk
regardless of what system is accessing it. MM retains the property that there are two volumes
in existence, but it suppresses one volume while the copy is being maintained.
The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks that must be performed on the host to establish operation
on the auxiliary copy are substantial. The goal is to make this copy available rapidly (much
faster than recovering from a backup copy), but the process is not seamless.
The failover process can be automated by using failover management software. The
IBM Spectrum Virtualize software provides SNMP traps and programming (or scripting)
commands for the CLI to enable this automation.
The GM function establishes a GM relationship between two volumes of equal size. The volumes
in a GM relationship are referred to as the master (source) volume and the auxiliary (target)
volume, which is the same as MM. Consistency groups can be used to maintain data integrity
for dependent writes, which is similar to FlashCopy consistency groups.
GM writes data to the auxiliary volume asynchronously, which means that host writes to the
master volume provide the host with confirmation that the write is complete before the I/O
completes on the auxiliary volume.
Limit: When a local fabric and a remote fabric are connected for GM purposes, the ISL
hop count between a local node and a remote node must not exceed seven hops.
The GM function provides the same function as MM RC, but over long-distance links with
higher latency without requiring the hosts to wait for the full round-trip delay of the
long-distance link.
Figure 10-89 shows that a write operation to the master volume is acknowledged back to the
host that is issuing the write before the write operation is mirrored to the cache for the
auxiliary volume.
The GM algorithms maintain a consistent image on the auxiliary. They achieve this consistent
image by identifying sets of I/Os that are active concurrently at the master, assigning an order
to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a
result, GM maintains the features of Write Ordering and Read Stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.
GM write I/O from the production system to a secondary system requires serialization and
sequence-tagging before it is sent across the network to the remote site (to maintain a
write-order-consistent copy of data).
To avoid affecting the production site, IBM Spectrum Virtualize supports more parallelism in
processing and managing GM writes on the secondary system by using the following
methods:
Secondary system nodes store replication writes in new redundant non-volatile cache
Cache content details are shared between nodes
Cache content details are batched together to make node-to-node latency less of an issue
Nodes intelligently apply these batches in parallel as soon as possible
Nodes internally manage and optimize GM secondary write I/O processing
In a failover scenario where the secondary site must become the master source of data,
specific updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, such as a transaction log replay.
GM is supported over FC, Fibre Channel over IP (FCIP), Fibre Channel over Ethernet
(FCoE), and native IP connections. The maximum supported round-trip latency is 80 ms,
which corresponds to approximately 4000 km (2485.48 miles) between mirrored systems.
However, starting with IBM Spectrum Virtualize V7.4, this limit was increased to 250 ms for
certain configurations. Figure 10-90 shows the supported round-trip distances for GM RC.
Colliding writes
The GM algorithm requires that only a single write is active on a volume. I/Os that overlap an
active I/O are serialized, which is called colliding writes. If another write is received from a
host while the auxiliary write is still active, the new host write is delayed until the auxiliary
write is complete. This rule is needed because a series of writes to the auxiliary might need to
be tried again, which is called reconstruction. Conceptually, the data for reconstruction comes
from the master volume.
If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such
write activity do not achieve the performance that GM is intended to support. A volume
statistic is maintained about the frequency of these collisions.
Figure 10-91 Colliding writes example
The following numbers correspond to the numbers that are shown in Figure 10-91:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write completed, even though the
mirrored write to the auxiliary volume is not yet complete.
(1’) and (2’) occur asynchronously with the first write.
(3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.
Delay simulation
GM provides a feature that enables a delay simulation to be applied on writes that are sent to
the auxiliary volumes. With this feature, tests can be done to detect colliding writes. It also
provides the capability to test an application before the full deployment. The feature can be
enabled separately for each of the intracluster or intercluster GMs.
By running the chsystem command, the delay setting can be set up and the delay can be
checked by running the lssystem command. The gm_intra_cluster_delay_simulation field
expresses the amount of time that intracluster auxiliary I/Os are delayed. The
gm_inter_cluster_delay_simulation field expresses the amount of time that intercluster
auxiliary I/Os are delayed. A value of zero disables the feature.
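A sketch of enabling and then disabling the intercluster delay simulation follows. The
parameter name is an assumption that is based on the corresponding lssystem field; confirm
the exact chsystem option for your code level in IBM Documentation. The value is in
milliseconds, and 0 disables the feature:
chsystem -gminterdelaysimulation 20
chsystem -gminterdelaysimulation 0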
Tip: If you are experiencing repeated problems with the delay on your link, ensure that the
delay simulator was correctly disabled.
GM has functions that are designed to address the following conditions, which might
negatively affect certain GM implementations:
The estimation of the bandwidth requirements tends to be complex.
Ensuring that the latency and bandwidth requirements can be met is often difficult.
Congested hosts on the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
To address these issues, change volumes were added as an option for GM relationships.
Change volumes use the FlashCopy function, but they cannot be manipulated as FlashCopy
volumes because they are for a special purpose only. Change volumes replicate PiT images
on a cycling period. The default is 300 seconds.
The change rate that must be replicated includes only the state of the data at the PiT when
the image was taken, rather than all the updates that occurred during the period. The use of
this function can provide significant reductions in replication volume.
With change volumes, this environment looks as it is shown in Figure 10-93 on page 653.
Figure 10-93 Global Mirror with Change Volumes
With change volumes, a FlashCopy mapping exists between the primary volume and the
primary change volume. The mapping is updated in the cycling period (60 seconds - 1 day).
The primary change volume is then replicated to the secondary GM volume at the target site,
which is then captured in another change volume on the target site. This approach provides
an always consistent image at the target site and protects your data from being inconsistent
during resynchronization.
For more information about IBM FlashCopy, see 10.1, “IBM FlashCopy” on page 554.
You can adjust the cycling period by running the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. The default value is 300 seconds. If a copy does not
complete in the cycle period, the next cycle does not start until the prior cycle completes. For
this reason, the use of change volumes gives you the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, RPO is twice the completion
time. The next cycling period starts immediately after the prior cycling period is finished.
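For example, attaching change volumes to a hypothetical relationship named rcrel0 and
setting a 10-minute cycle period might look like the following sketch (the volume names are
placeholders, and the -masterchange and -auxchange parameters should be verified in IBM
Documentation):
chrcrelationship -masterchange master_chg_vol0 rcrel0
chrcrelationship -auxchange aux_chg_vol0 rcrel0
chrcrelationship -cycleperiodseconds 600 rcrel0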
Carefully consider your business requirements versus the performance of GMCV. GMCV
increases the intercluster traffic for more frequent cycling periods. Therefore, selecting the
shortest possible cycle period is not always the answer. In most cases, the default meets
requirements and performs well.
Important: When you create your GM volumes with change volumes, ensure that you
remember to select the change volume on the auxiliary (target) site. Failure to do so leaves
you exposed during a resynchronization operation.
If this best practice is not maintained, such as if source volumes are assigned to only one
node in the I/O group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. You can also change the preferred node for volumes that
are in an RC relationship without affecting the host I/O to a particular volume.
Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect
on system behavior. An entire grain of tracks on one volume is processed at around the same
time, but not as a single I/O.
Double buffering is used to try to use sequential performance within a grain. However, the
next grain within the volume might not be scheduled for some time. Multiple grains might be
copied simultaneously, and might be enough to satisfy the requested rate, unless the
available resources cannot sustain the requested rate.
GM paces the rate at which background copy is performed by the appropriate relationships.
Background copy occurs on relationships that are in the InconsistentCopying state with a
status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships that are
performing a background copy.
The default value of the background copy is 25 megabytes per second (MBps) per volume.
Important: The background copy value is a system-wide parameter that can be changed
dynamically, but only on a per-system basis and not on a per-relationship basis. Therefore,
the copy rate of all relationships changes when this value is increased or decreased. In
systems with many RC relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed to 1 - 1000 MBps.
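As a sketch, the system-wide rate can be raised to 50 MBps per relationship with the
chsystem command; relationshipbandwidthlimit is the parameter to check in IBM
Documentation for your code level:
chsystem -relationshipbandwidthlimit 50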
If the auxiliary volume is thin-provisioned and the region is deallocated, the special buffer
prevents a write and therefore, an allocation. If the auxiliary volume is not thin-provisioned or
the region in question is an allocated region of a thin-provisioned volume, a buffer of “real”
zeros is synthesized on the auxiliary and written as normal.
Full synchronization after creation
The full synchronization after creation method is the default method. It is the simplest method
in that it requires no administrative activity apart from running the necessary commands.
However, in certain environments, the available bandwidth can make this method unsuitable.
With this technique, do not allow I/O on the master or auxiliary before the relationship is
established. Then, the administrator must run the following commands:
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.
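A minimal CLI sketch of this method follows. The volume names, remote system name, and
relationship name are hypothetical placeholders:
mkrcrelationship -master master_vol0 -aux aux_vol0 -cluster remote_system -sync -name rcrel0
startrcrelationship rcrel0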
Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not. This situation can cause loss of data or a data
integrity exposure for hosts that are accessing data on the auxiliary volume.
You can create a HyperSwap topology system configuration where each I/O group in the
system is physically on a different site. These configurations can be used to maintain access
to data on the system when power failures or site-wide outages occur.
Since V7.8, it is possible to create a FlashCopy mapping (change volume) for an RC target
volume to maintain a consistent image of the secondary volume. The system recognizes this
mapping as Consistency Protection, and a link failure or an offline secondary volume event is
handled differently.
When Consistency Protection is configured, the relationship between the primary and
secondary volumes does not stop if the link goes down or the secondary volume is offline.
The relationship does not go into the consistent stopped status. Instead, the system uses the
secondary change volume to automatically copy the previous consistent state of the
secondary volume. The relationship automatically moves to the consistent copying status as
the system resynchronizes and protects the consistency of the data. The relationship status
changes to consistent synchronized when the resynchronization process completes. The
relationship automatically resumes replication after the temporary loss of connectivity.
Change volumes that are used for Consistency Protection are not visible and manageable
from the GUI because they are used for Consistency Protection internal behavior only.
The option to add consistency protection is selected by default when MM/GM relationships
are created. The option must be cleared to create MM/GM relationships without consistency
protection.
10.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
Table 10-11 lists the combinations of FlashCopy and MM/GM functions that are valid for a
single volume.
Total volume size per I/O group: A per I/O group limit of 1024 TB exists on the quantity of
master and auxiliary volume address spaces that can participate in Metro Mirror and GM
relationships. This maximum configuration uses all 512 MiB of bitmap space for the I/O group
and allows 10 MiB of space for all remaining copy services features.
When the MM/GM relationship is created, you can specify whether the auxiliary volume is in
sync with the master volume, and the background copy process is then skipped. This
capability is useful when MM/GM relationships are established for volumes that were created
with the format option.
Step 3
When the background copy completes, the MM/GM relationship changes from the
InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When an MM/GM relationship is stopped in the ConsistentSynchronized state, the
MM/GM relationship enters the Idling state when you specify the -access option,
which enables write I/O on the auxiliary volume.
b. When an MM/GM relationship is stopped in the ConsistentSynchronized state without
an -access parameter, the auxiliary volumes remain read-only and the state of the
relationship changes to ConsistentStopped.
c. To enable write I/O on the auxiliary volume, when the MM/GM relationship is in the
ConsistentStopped state, run the svctask stoprcrelationship command, which
specifies the -access option, and the MM/GM relationship enters the Idling state.
Step 5:
a. When an MM/GM relationship is started from the Idling state, you must specify the
-primary argument to set the copy direction. If no write I/O was performed (to the
master or auxiliary volume) while in the Idling state, the MM/GM relationship enters
the ConsistentSynchronized state.
b. If write I/O was performed to the master or auxiliary volume, the -force option must be
specified and the MM/GM relationship then enters the InconsistentCopying state
while the background copy is started. The background process copies only the data
that changed on the primary volume while the relationship was stopped.
Stop on Error
When an MM/GM relationship is stopped (intentionally or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.
If the connection is broken between the two systems that are in a partnership, all (intercluster)
MM/GM relationships enter a Disconnected state. For more information, see “Connected
versus disconnected” on page 659.
State overview
The following sections provide an overview of the various MM/GM states.
When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships that span them are described as disconnected.
When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments and considers any configuration or other
event that occurred while the relationship was disconnected. As a result, the relationship can
return to the state that it was in when it became disconnected, or it can enter a new state.
Relationships that are configured between volumes in the same IBM Spectrum Virtualize
based system (intracluster) are never described as being in a disconnected state.
An auxiliary volume is described as consistent if it contains data that could have been read by
a host system from the master if power had failed at an imaginary point while I/O was in
progress, and power was later restored. This imaginary point is defined as the recovery point.
The requirements for consistency are expressed regarding activity at the master up to the
recovery point. The auxiliary volume contains the data from all of the writes to the master for
which the host received successful completion and that data was not overwritten by a
subsequent write (before the recovery point).
Consider writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all). If the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from
the master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred). If an application is designed to cope with an unexpected power
failure, this assurance of consistency means that the application can use the auxiliary and
begin operation as though it was restarted after the hypothetical power failure. Again,
maintaining the application write ordering is the key property of consistency.
For more information about dependent writes, see 10.1.13, “FlashCopy and image mode
volumes” on page 577.
Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.
Consistency as a concept can be applied to a single relationship or a set of relationships in a
consistency group. Write ordering is a concept that an application can maintain across
several disks that are accessed through multiple systems. Therefore, consistency must
operate across all of those disks.
When you are deciding how to use consistency groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.
If two programs or systems communicate and store details as a result of the information that
is exchanged, either of the following actions might occur:
All of the data that is accessed by the group of systems must be placed into a single
consistency group.
The systems must be recovered independently (each system within its own consistency
group). Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up to date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.
When communication is lost for an extended period and Consistency Protection was not
enabled, MM/GM tracks the changes that occurred on the master, but not the order or the
details of such changes (write data). When communication is restored, it is impossible to
synchronize the auxiliary without sending write data to the auxiliary out of order. Therefore,
consistency is lost.
Note: MM/GM relationships with Consistency Protection enabled use a PiT copy
mechanism (FlashCopy) to keep a consistent copy of the auxiliary. The relationships stay
in a consistent state, although not synchronized, even if communication is lost. For more
information about Consistency Protection, see 10.6.20, “Consistency Protection for Global
Mirror and Metro Mirror” on page 656.
Detailed states
The following sections describe the states that are portrayed to the user for consistency
groups or relationships. Also described is the information that is available in each state. The
major states are designed to provide guidance about the available configuration commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent. This state is entered when the relationship or
consistency group was InconsistentCopying and suffered a persistent error or received a
stop command that caused the copy process to stop.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. This state is entered after a
start command is issued to an InconsistentStopped relationship or a consistency group.
A persistent error or stop command places the relationship or consistency group into an
InconsistentStopped state. A start command is accepted but has no effect.
If the relationship or consistency group becomes disconnected, the auxiliary side changes to
InconsistentDisconnected. The master side changes to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master. This state can arise when a
relationship was in a ConsistentSynchronized state and experienced an error that forces a
Consistency Freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.
Normally, write activity that follows an I/O error causes updates to the master, and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must run a start command with the -force option to
acknowledge this condition, and the relationship or consistency group changes to
InconsistentCopying. Enter this command only after all outstanding events are repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can run a switch command that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are
sent to the master volume are also sent to the auxiliary volume. Successful completion must
be received for both writes, the write must be failed to the host, or a state must change out of
the ConsistentSynchronized state before a write is completed to the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or consistency group becomes disconnected, the same changes are made
as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.
In this state, the relationship or consistency group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine what areas must be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.
Also, the relationship or consistency group accepts a -clean option on the start command
while in this state. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship changes to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or consistency group, which depends on the following factors:
The state when it became disconnected.
The write activity since it was disconnected.
The configuration activity since it was disconnected.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the auxiliary role, and do not accept read or write
I/O. Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.
When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions is true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the auxiliary role, and accept read I/O but not write
I/O.
In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point when consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the
other system.
A stop command with the -access flag set to true transitions the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Empty
This state applies only to consistency groups. It is the state of a consistency group that has
no relationships and no other state information to show.
It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point the state of the relationship becomes the state
of the consistency group.
10.7 Remote Copy commands
This section presents commands that must be issued to create and operate RC services.
Following these steps, the remote host server is mapped to the auxiliary volume and the disk
is available for I/O.
The command set for MM/GM contains the following broad groups:
Commands to create, delete, and manipulate relationships and consistency groups
Commands to cause state changes
If a configuration command affects more than one system, MM/GM coordinates configuration
activity between the systems. Specific configuration commands can be run only when the
systems are connected, and fail with no effect when they are disconnected.
Other configuration commands are permitted, even if the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.
For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a CreateRelationship
mkrcrelationship or CreateConsistencyGroup mkrcconsistgrp command. In this case, the
system that is receiving the command is called the local system.
The exception is a command that sets systems into a MM/GM partnership. The
mkfcpartnership and mkippartnership commands must be issued on both the local and
remote systems.
The commands in this section are described as an abstract command set, and are
implemented by using one of the following methods:
CLI can be used for scripting and automation.
GUI can be used for one-off tasks.
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.
-gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values of 60 - 86,400 seconds in increments of 10
seconds. The default value is 300. Do not change this value except under the direction of
IBM Support.
-gmmaxhostdelay max_host_delay
This parameter specifies the maximum time delay, in milliseconds, at which the GM link
tolerance timer starts counting down. This threshold value determines the extra effect that
GM operations can add to the response times of the GM source volumes. You can use this
parameter to increase the threshold from the default value of 5 milliseconds.
-maxreplicationdelay max_replication_delay
This parameter sets a maximum replication delay in seconds. The value must be a
number 0 - 360 (0 being the default value, no delay). This feature sets the maximum
number of seconds to be tolerated to complete a single I/O. If I/O cannot complete within
the max_replication_delay, the 1920 event is reported. This setting is system-wide and
applies to MM/GM relationships.
Run the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300
You can view all of these parameter values by running the lssystem <system_name>
command.
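The following example is an illustrative sketch only; the parameter values are not recommendations and must be chosen for your environment (and, for gmlinktolerance, changed only under the direction of IBM Support). It adjusts all three settings in a single command and then displays the current values:
chsystem -gmlinktolerance 300 -gmmaxhostdelay 5 -maxreplicationdelay 0
lssystem
Inspect the GM link tolerance, maximum host delay, and maximum replication delay fields in the lssystem output to confirm the change.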
Focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.
However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations queue at the primary system. This queue results in an
extended response time to application hosts. In this situation, the gmlinktolerance feature
stops GM relationships, and the application host’s response time returns to normal.
After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state. Fix the cause of the event and restart your GM relationships.
For this reason, ensure that you monitor the system to track when these 1920 events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
During SAN maintenance windows in which degraded performance is expected from SAN
components, and application hosts can stand extended response times from GM volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If 1920 events are occurring, you might need to use a performance monitoring and analysis
tool, such as IBM Spectrum Control, to help identify and resolve the problem.
To establish a fully functional MM/GM partnership, you must run either of these commands on
both systems that will be part of the partnership. This step is a prerequisite for creating
MM/GM relationships between volumes on IBM Spectrum Virtualize systems.
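As an illustrative sketch (the system names and bandwidth values are assumptions, not recommendations), an FC partnership between systems siteA and siteB can be established by running the following command on each system, each time naming the other system as the partner:
On siteA: mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 siteB
On siteB: mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 siteA
You can then run lspartnership on either system to confirm that the partnership is fully configured.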
The background copy bandwidth determines the rate at which the background copy is
attempted for MM/GM. The background copy bandwidth can affect foreground I/O latency in
one of the following ways:
The following results can occur if the background copy bandwidth is set too high compared
to the MM/GM intercluster link capacity:
– The background copy I/Os can back up on the MM/GM intercluster link.
– There is a delay in the synchronous auxiliary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary site overload the auxiliary storage, and again
delay the synchronous secondary writes of foreground I/Os.
To set the background copy bandwidth optimally, ensure that you consider all three resources:
Primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak
foreground I/O workload.
The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems owning this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.
The new consistency group does not contain any relationships and is in the Empty state. You
can add MM/GM relationships to the group (upon creation or afterward) by running the chrcrelationship command.
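As an illustrative sketch (the consistency group, relationship, and system names are assumptions), a consistency group can be created and an existing stand-alone relationship added to it as follows:
mkrcconsistgrp -cluster siteB -name CG_SQL
chrcrelationship -consistgrp CG_SQL rel_db01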
10.7.6 Creating a Metro Mirror/Global Mirror relationship
Run the mkrcrelationship command to create a MM/GM relationship. This relationship
persists until it is deleted.
Optional parameter: If you do not use the -global optional parameter, an MM relationship
is created rather than a GM relationship.
The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O group. The master and
auxiliary volume cannot be in a relationship, and they cannot be the target of a FlashCopy
mapping. This command returns the new relationship (relationship_id) when successful.
When the MM/GM relationship is created, you can add it to a consistency group, or it can be a
stand-alone MM/GM relationship.
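The following sketch is illustrative only (volume, system, consistency group, and relationship names are assumptions). It creates one MM relationship and one GM relationship, and places the GM relationship directly into an existing consistency group:
mkrcrelationship -master vol_db01 -aux vol_db01_dr -cluster siteB -name rel_db01
mkrcrelationship -master vol_app01 -aux vol_app01_dr -cluster siteB -global -consistgrp CG_SQL -name rel_app01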
When the command is issued, you can specify the master volume name and auxiliary system
to list the candidates that comply with the prerequisites to create a MM/GM relationship. If the
command is issued with no parameters, all of the volumes that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
When the command is run, you can set the copy direction if it is undefined. Optionally, you
can mark the auxiliary volume of the relationship as clean. The command fails if it is used as
an attempt to start a relationship that is a part of a consistency group.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship.
The use of the -force parameter here is a reminder that the data on the auxiliary becomes
inconsistent while resynchronization (background copying) occurs. Therefore, this data is
unusable for DR purposes before the background copy completes.
In the Idling state, you must specify the master volume to indicate the copy direction. In
other connected states, you can provide the -primary argument, but it must match the
existing setting.
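As an illustrative sketch (the relationship name is an assumption), the first command starts an Idling relationship with the master as the copy source, and the second restarts a relationship when the resumption is known to cause a temporary loss of consistency:
startrcrelationship -primary master rel_db01
startrcrelationship -primary master -force rel_db01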
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you run a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.
For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you run the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.
If the relationship is disconnected at the time that the command is issued, the relationship is
deleted on only the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can run the rmrcrelationship command independently on both of the
systems.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
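A minimal sketch of this sequence (the relationship name is an assumption) first removes the relationship from its consistency group and then deletes it:
chrcrelationship -noconsistgrp rel_db01
rmrcrelationship rel_db01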
If you delete an inconsistent relationship, the auxiliary volume becomes accessible, even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is deleted on only the system on which the command is being run. When
the systems reconnect, the consistency group is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can run the rmrcconsistgrp command separately on both of the
systems.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
Important: By reversing the roles, your current source volumes become targets, and target
volumes become source. Therefore, you lose write access to your current primary
volumes.
Demonstration: The IBM Client Demonstration Center shows how data is replicated by
using GMCV (cycling mode set to multiple). This configuration perfectly fits the new IP
replication function because it is well suited to links with high latency, low bandwidth, or both.
Bridgeworks SANSlide technology, which is integrated into the IBM Spectrum Virtualize
Software, uses artificial intelligence (AI) to help optimize network bandwidth use and adapt to
changing workload and network conditions.
This technology can improve remote mirroring network bandwidth usage up to three times.
Improved bandwidth usage can enable clients to deploy a less costly network infrastructure,
or speed up remote replication cycles to enhance DR effectiveness.
With an Ethernet network data flow, the data transfer can slow down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that is sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 10-95 on page 673.
Figure 10-95 Typical Ethernet network data flow
However, by using the embedded IP replication, this behavior can be eliminated with the
enhanced parallelism of the data flow by using multiple virtual connections (VCs) that share
IP links and addresses. The AI engine can dynamically adjust the number of VCs, receive
window size, and packet size to maintain optimum performance. While the engine is waiting for one VC's acknowledgment (ACK), it sends more packets across other VCs. If packets are lost from any VC,
data is automatically retransmitted, as shown in Figure 10-96.
Figure 10-96 Optimized network data flow by using Bridgeworks SANSlide technology
For more information about this technology, see IBM Storwize V7000 and SANSlide
Implementation, REDP-5023.
With native IP partnership, the following Copy Services features are supported:
MM
Referred to as synchronous replication, MM provides a consistent copy of a source
volume on a target volume. Data is written to the target volume synchronously after it is
written to the source volume so that the copy is continuously updated.
GM and GMCV
Referred to as asynchronous replication, GM provides a consistent copy of a source
volume on a target volume. Data is written to the target volume asynchronously so that the
copy is continuously updated. However, the copy might not contain the last few updates if
a DR operation is performed. An added extension to GM is GMCV. GMCV is the preferred
method for use with native IP replication.
Note: A physical link is the physical IP link between the two sites: A (local) and B
(remote). Multiple IP addresses on local system A might be connected (by Ethernet
switches) to this physical link. Similarly, multiple IP addresses on remote system B
might be connected (by Ethernet switches) to the same physical link. At any time, only a
single IP address on cluster A can form an RC data session with an IP address on
cluster B.
The maximum throughput is restricted based on the use of 1 Gbps, 10 Gbps, or 25 Gbps
Ethernet ports. It varies based on distance (for example, round-trip latency) and quality of
communication link (for example, packet loss):
– One 1 Gbps port can transfer up to 110 MBps unidirectional, 190 MBps bidirectional.
– Two 1 Gbps ports can transfer up to 220 MBps unidirectional, 325 MBps bidirectional.
– One 10 Gbps port can transfer up to 240 MBps unidirectional, 350 MBps bidirectional.
– Two 10 Gbps ports can transfer up to 440 MBps unidirectional, 600 MBps bidirectional.
The minimum supported link bandwidth is 10 Mbps. However, this requirement scales up
with the amount of host I/O that you choose to do. Figure 10-97 shows scaling host I/O.
The following equation describes the approximate minimum bandwidth that is required
between two systems with < 5 ms RTT and errorless link:
Minimum intersite link bandwidth in Mbps > Required Background Copy in Mbps +
Maximum Host I/O in Mbps + 1 Mbps heartbeat traffic
Increasing latency and errors results in a higher requirement for minimum bandwidth.
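As a worked illustration of this equation (the figures are examples only), if the required background copy rate is 50 Mbps and the peak host write workload to the replicated volumes is 40 Mbps, the intersite link must provide more than 50 + 40 + 1 = 91 Mbps of bandwidth.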
The Link Bandwidth setting is now configured by using megabits (Mb) not MB. You set
the Link Bandwidth setting to a value that the communication link can sustain, or to
what is allocated for replication. The Background Copy Rate setting is now a
percentage of the Link Bandwidth. The Background Copy Rate setting determines the
available bandwidth for the initial sync and resyncs or for GMCV.
Data compression is supported for IPv4 or IPv6 partnerships. To enable data compression,
both systems in an IP partnership must be running a software level that supports IP
partnership compression (Version 7.7 or later) and both must have the compression feature
enabled.
Volumes that are replicated by using IP partnership compression can be either compressed
or uncompressed on the system. Volume compression and IP replication compression are not
linked features. As an example, the following steps replicate a compressed volume over an IP
partnership with the compression feature enabled:
1. Read operations in the local system decompress the data when reading from the source
volume.
2. Decompressed data is transferred to the RC code.
3. Data is compressed before being sent over the IP partnership link.
4. The remote system RC code decompresses the received data.
5. Write operations in the remote system compress the data when writing to the target
volume.
When the VLAN ID is configured for IP addresses that are used for iSCSI host attach or
IP replication, the VLAN settings on the Ethernet network and servers must be configured
correctly to avoid connectivity issues. After the VLANs are configured, changes to the VLAN
settings disrupt iSCSI and IP replication traffic to and from the partnerships.
During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O group can differ. To avoid any service disruption,
switches must be configured so that the failover VLANs are configured on the local switch
ports and the failover of IP addresses from a failing node to a surviving node succeeds. If
failover VLANs are not configured on the local switch ports, no paths are available to the IBM
Spectrum Virtualize system nodes during a node failure and the replication fails.
Consider the following requirements and procedures when implementing VLAN tagging:
VLAN tagging is supported for IP partnership traffic between two systems.
VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
VLAN tagging by default is disabled for any IP address of a node port (N_Port). You can
use the CLI or GUI to optionally set the VLAN ID for port IP addresses on both systems in
the IP partnership.
When a VLAN ID is configured for the port IP addresses that are used in RC port groups,
appropriate VLAN settings on the Ethernet network must also be configured to prevent
connectivity issues.
Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop the
partnership first before you configure VLAN tags. Restart the partnership after the
configuration is complete.
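A minimal sketch of this sequence, assuming a partner system named siteB, an illustrative VLAN ID of 100, and example addresses (verify the exact cfgportip options against your code level), follows:
chpartnership -stop siteB
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -vlan 100 -remotecopy 1 1
chpartnership -start siteB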
RC group or RC port group: The following numbers group a set of IP addresses that are connected to the same physical link. Therefore, only IP addresses that are part of the same RC group can form RC connections with the partner system:
0: Ports that are not configured for RC
1: Ports that belong to RC port group 1
2: Ports that belong to RC port group 2
Each IP address can be shared for iSCSI host attach and RC functions. Therefore, appropriate settings must be applied to each IP address.
Failover: Failure of a node within an I/O group causes the volume access to go through the surviving node. The IP addresses fail over to the surviving node in the I/O group. When the configuration node of the system fails, management IP addresses also fail over to an alternative node.
Failback: When the failed node rejoins the system, all failed-over IP addresses are failed back from the surviving node to the rejoined node, and volume access is restored through this node.
IP partnership or partnership over native IP links: These terms are used to describe the IP partnership feature.
Discovery: The first Discovery takes place when the user runs the mkippartnership CLI command. Subsequent Discoveries can take place as a result of user activities (configuration changes) or as a result of hardware failures (for example, node failure or port failure).
The process to establish two systems in the IP partnerships includes the following steps:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory, and users can choose to not configure the CHAP secret.
2. The administrator configures the system IP addresses on both local and remote systems
so that they can discover each other over the network.
3. If you want to use VLANs, configure your local area network (LAN) switches and Ethernet
ports to use VLAN tagging.
4. The administrator configures the systems ports on each node in both of the systems by
using the GUI (or the cfgportip CLI command), and completes the following steps:
a. Configure the IP addresses for RC data.
b. Add the IP addresses in the respective RC port group.
c. Define whether the host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system where the partnership state then changes to Partially_Configured_Local.
6. The administrator establishes the partnership from the remote system with the local
system. If this process is successful, the partnership state then changes to the
Fully_Configured, which implies that the partnerships over the IP network were
successfully established. The partnership state momentarily remains Not_Present before
moving to the Fully_Configured state.
7. The administrator creates MM, GM, and GMCV relationships.
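A minimal CLI sketch of steps 5 and 6, assuming an IPv4 partnership, an illustrative partner address of 10.20.20.20, and example bandwidth values, follows. The same mkippartnership command is then repeated on the remote system, pointing back at the local system address:
mkippartnership -type ipv4 -clusterip 10.20.20.20 -linkbandwidthmbits 100 -backgroundcopyrate 50
lspartnership
The lspartnership output shows the partnership state, which should reach Fully_Configured after both sides are configured.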
RC port group ID is a numerical tag that is associated with an IP port of an IBM Spectrum
Virtualize system to indicate to which physical IP link it is connected. Multiple nodes might be
connected to the same physical long-distance link, and must therefore share RC port group
ID.
In scenarios with two physical links between the local and remote clusters, two RC port group
IDs must be used to designate which IP addresses are connected to which physical link. This
configuration must be done by the system administrator by using the GUI or running the
cfgportip CLI command.
Remember: IP ports on both partners must be configured with identical RC port group IDs
for the partnership to be established correctly.
The IBM Spectrum Virtualize system IP addresses that are connected to the same physical
link are designated with identical RC port groups. The system supports three RC groups: 0, 1,
and 2.
The systems’ IP addresses are, by default, in RC port group 0. Ports in port group 0 are not
considered for creating RC data paths between two systems. For partnerships to be
established over IP links directly, IP ports must be configured in RC group 1 if a single
inter-site link exists, or in RC groups 1 and 2 if two inter-site links exist.
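As an illustrative sketch for a dual-link configuration (node names, port IDs, and addresses are assumptions; verify the cfgportip options for your code level), one port on each node is assigned to a different RC port group so that each port group maps to one physical link:
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 1
cfgportip -node node2 -ip 10.10.20.12 -mask 255.255.255.0 -gw 10.10.20.1 -remotecopy 2 1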
The administrator might want to use IPv6 addresses for RC operations and use IPv4
addresses on that same port for iSCSI host attach. This configuration also implies that for two
systems to establish an IP partnership, both systems must have IPv6 addresses that are
configured.
Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be disabled for that IP address and any other IP address that is configured
on that Ethernet port.
Note: To establish an IP partnership, each IBM Spectrum Virtualize controller node must
have only a single RC port group that is configured, either 1 or 2. The remaining IP
addresses must be in RC port group 0.
Note: For explanation purposes, this section shows a node with two ports available: 1 and 2. The number of available ports is generally higher on the latest models of IBM Spectrum Virtualize systems.
The following supported configurations for IP partnership that were in the first release are
described in this section:
Two 2-node systems in IP partnership over a single inter-site link, as shown in
Figure 10-98 (configuration 1).
Figure 10-98 Single link with only one Remote Copy port group configured in each system
As shown in Figure 10-98 on page 680, two systems are available:
– System A.
– System B.
A single RC port group 1 is created on Node A1 on System A and on Node B2 on System
B because only a single inter-site link is used to facilitate the IP partnership traffic. An
administrator might choose to configure the RC port group on Node B1 on System B
rather than Node B2.
At any time, only the IP addresses that are configured in RC port group 1 on the nodes in
System A and System B participate in establishing data paths between the two systems
after the IP partnerships are created. In this configuration, no failover ports are configured
on the partner node in the same I/O group.
This configuration has the following characteristics:
– Only one node in each system has an RC port group that is configured, and no failover
ports are configured.
– If Node A1 in System A or Node B2 in System B encounters a failure, the IP partnership stops and enters the Not_Present state until the failed node recovers.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state goes to the Fully_Configured state.
– If the inter-site system link fails, the IP partnerships change to the Not_Present state.
– This configuration is not recommended because it is not resilient to node failures.
Two 2-node systems in IP partnership over a single inter-site link (with failover ports
configured), as shown in Figure 10-99 on page 682 (configuration 2).
Figure 10-100 Multinode systems single inter-site link with only one RC port group
Figure 10-101 Multinode systems single inter-site link with only one Remote Copy port group
As shown in Figure 10-101 on page 684, an eight-node system (System A in Site A) and a
four-node system (System B in Site B) are used. A single RC port group 1 is configured on
nodes A1, A2, A5, and A6 on System A at Site A. Similarly, a single RC port group 1 is
configured on nodes B1, B2, B3, and B4 on System B.
Although System A has four I/O groups (eight nodes), a maximum of two I/O groups can be configured for IP partnerships. If Node A1 in System A fails, the IP partnership continues by using one of the ports that is configured in the RC port group on any of the nodes in either of the two configured I/O groups in System A.
However, it might take some time for discovery and path configuration logic to reestablish
paths post-failover. This delay might cause partnerships to change to the Not_Present state.
This process can lead to RC relationships stopping, and the administrator must manually start
them if the relationships do not auto-recover. The details of which IP port is actively participating in the IP partnership process are provided in the lsportip output (reported as used).
Figure 10-102 Dual links with two Remote Copy groups on each system configured
As shown in Figure 10-102, RC port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O group. Instead, the ports
are maintained in different RC port groups on both of the nodes. They remain active and
participate in IP partnership by using both of the links.
However, if either of the nodes in the I/O group fail (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in RC port
group 2. Therefore, the effective bandwidth of the two links is reduced to 50% because
only the bandwidth of a single link is available until the failure is resolved.
This configuration has the following characteristics:
– Two inter-site links and two RC port groups are configured.
– Each node has only one IP port in RC port group 1 or 2.
– Both the IP ports in the two RC port groups participate simultaneously in IP
partnerships. Therefore, both of the links are used.
– During node failure or link failure, the IP partnership traffic continues from the other
available link and the port group. Therefore, if two links of 10 Mbps each are available
and you have 20 Mbps of effective link bandwidth, bandwidth is reduced to 10 Mbps
only during a failure.
– After the node failure or link failure is resolved and failback occurs, the entire bandwidth
of both of the links is available as before.
Two 4-node systems in IP partnership with dual inter-site links, as shown in Figure 10-103
(configuration 6).
Figure 10-103 Multinode systems with dual inter-site links between the two systems
Figure 10-104 Multinode systems (two I/O groups on each system) with dual inter-site links
between the two systems
Figure 10-105 Two node systems with single inter-site link and Remote Copy port groups
configured
An example of an unsupported configuration for a dual inter-site link is shown in
Figure 10-106 (configuration 9).
Figure 10-106 Dual links with two Remote Copy Port Groups with failover Port Groups configured
In this configuration, IP ports are shared by both iSCSI hosts and IP partnership traffic.
The following configuration steps are used:
a. Configure System IP addresses properly so that they can be reached over the inter-site
link.
b. Determine whether the partnerships must be created over IPv4 or IPv6, and then assign IP addresses and open firewall ports 3260 and 3265.
c. Configure IP ports for RC on System A1 by using the following settings:
• Node 1:
- Port 1, RC port group 1
- Host: Yes
- Assign IP address
• Node 2:
- Port 4, RC port group 2
- Host: Yes
- Assign IP address
d. Configure IP ports for RC on System B1 by using the following settings:
• Node 1:
- Port 1, RC port group 1
- Host: Yes
- Assign IP address
• Node 2:
- Port 4, RC port group 2
- Host: Yes
- Assign IP address
e. Check the MTU levels across the network (the default MTU is 1500 on SVC and
IBM Spectrum Virtualize systems).
f. Establish IP partnerships from both systems.
g. After the partnerships are in the Fully_Configured state, you can create the RC
relationships.
10.9.1 Creating a Fibre Channel partnership
Intra-cluster MM: If you are creating intra-cluster MM, do not perform this next step to
create the MM partnership. Instead, see 10.9.2, “Creating Remote Copy relationships” on
page 697.
To create an FC partnership between IBM Spectrum Virtualize systems by using the GUI, complete the following steps:
1. Open the Remote Copy window that is shown in Figure 10-110 on page 694 and click Create Partnership to create a partnership.
2. Select the partnership type (Fibre Channel or IP). If you choose an IP partnership, you
must provide the IP address of the partner system and the partner system’s CHAP key.
3. If your partnership is based on Fibre Channel Protocol (FCP), select an available partner
system from the menu. To be able to select a partner system, the two clusters must be
properly zoned between each other. If no other candidate cluster is available, the This
system does not have any candidates error message is displayed.
4. Enter a link bandwidth in Mbps that is used by the background copy process between the
systems in the partnership.
To fully configure the partnership between both systems, perform the same steps on the other
system in the partnership. If not configured on the partner system, the partnership is
displayed as Partial Local.
When both sides of the system partnership are defined, the partnership shows a Configured
green status, as shown in Figure 10-113.
10.9.2 Creating Remote Copy relationships
This section shows how to create RC relationships for volumes with their respective remote
targets. Before creating a relationship between a volume on the local system and a volume on
a remote system, both volumes must exist and have the same virtual size.
4. If you want to add a stand-alone relationship, select the Independent Relationships tab
and click Create Relationship, as shown in Figure 10-115.
6. In the next window, select the target system for this RC relationship and click Next, as
shown in Figure 10-117, “Selecting the target system for the RC relationship” on
page 699.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the correct size are shown in the list for a specific source volume.
8. In the next window, you can add change volumes if needed, as shown in Figure 10-119.
Click Finish.
10. In the next window, select whether the volumes are already synchronized, as shown in Figure 10-121. Click Next.
11. Select whether you want to start synchronizing the Master and Auxiliary volumes when the relationship is created or start the copy later, as shown in Figure 10-122. Click Finish.
2. Enter a name for the consistency group, select the target system, and click Add, as shown
in Figure 10-124. The consistency group is added to the configuration with no
relationships.
3. Then, you can either add existing stand-alone relationships to the recently added consistency group by selecting the Independent Relationships tab, right-clicking the relationship, and clicking Add to Consistency Group; or you can create new relationships directly in this consistency group by selecting it in the Consistency Group tab, as shown in Figure 10-125, and then clicking Create Relationship.
To create an RC relationship, see 10.9.2, “Creating Remote Copy relationships” on page 697.
RC relationship name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The RC name can be 1 - 15 characters. Blanks cannot be used.
3. Enter the new name that you want to assign to the consistency group and click Rename,
as shown in Figure 10-129.
RC consistency group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The RC name can be 1 - 15 characters. Blanks cannot
be used.
4. Confirm your selection and click Remove, as shown in Figure 10-133.
To start a consistency group, select Copy Services → Remote Copy, select the target RC
system, and go to the Consistency Groups tab. Click the three dots for the consistency
group to be started, and select Start Group, as shown in Figure 10-135.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is disallowed
to that volume when it becomes the secondary. Therefore, careful planning is required
before you switch the copy direction for a relationship.
To switch the direction of a stand-alone RC relationship, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Select the target RC system for the relationship to be switched, and go to the
Independent Relationships tab. Right-click the relationship to be switched and select
Switch, as shown in Figure 10-136.
Figure 10-137 Switching the master-auxiliary direction of a relationship changes the write access
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all the I/O is disallowed to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a relationship.
2. Select the target RC system and go to the Consistency Groups tab. Next, click the three
dots for the consistency group to be switched and select Switch Direction, as shown in
Figure 10-139.
Figure 10-140 Switching the direction of a consistency group changes the write access
Figure 10-144 Granting read/write access to the auxiliary volumes
3. A confirmation message opens, as shown in Figure 10-148. Click Yes.
The total memory that can be dedicated to these functions is not defined by the physical
memory in the system. The memory is constrained by the software functions that use the
memory.
For every RC relationship that is created on an IBM Spectrum Virtualize system, a bitmap
table is created to track the copied grains. By default, the system allocates 20 MiB of memory
for a minimum of 2 TiB of remote copied source volume capacity. Every 1 MiB of memory
provides the following volume capacity for the specified I/O group: with a 256 KiB grain size, 2 TiB of total MM, GM, or active-active volume capacity.
To help calculate the memory requirements and confirm that your system can accommodate
the total installation size, see the values in Table 10-15.
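As a worked illustration of this rule (the capacities are examples only), replicating 80 TiB of MM/GM source volume capacity in one I/O group with a 256 KiB grain size requires approximately 80 / 2 = 40 MiB of bitmap memory for that I/O group, on both the master and auxiliary systems.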
When you configure GMCV, two internal FlashCopy mappings are created for each change
volume.
MM/GM relationships do not automatically increase the available bitmap space. You might
need to run the chiogrp command to manually increase the space in one or both of the
master and auxiliary systems.
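A minimal sketch of this adjustment (the size value and I/O group name are assumptions) increases the remote copy bitmap space for one I/O group; repeat it on the auxiliary system if required:
chiogrp -feature remote -size 40 io_grp0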
You can modify the resource allocation for each I/O group of an SVC system by selecting
Settings → System and clicking the Resources menu, as shown in Figure 10-149. At the
time of writing, this GUI option is not available for other IBM Spectrum Virtualize based
systems, so the resource allocation can be adjusted by running the chiogrp command. For
more information about this command, see IBM Documentation.
A 1920 event can have several triggers, including the following probable causes:
Primary system or SAN fabric problem (10%)
Primary system or SAN fabric configuration (10%)
Secondary system or SAN fabric problem (15%)
Secondary system or SAN fabric configuration (25%)
Intercluster link problem (15%)
Intercluster link configuration (25%)
In practice, the most often overlooked cause is latency. GM has an RTT tolerance limit of 80
or 250 milliseconds, depending on the firmware version and the hardware model. A message
that is sent from the source IBM Spectrum Virtualize system to the target system and the
accompanying acknowledgment must have a total time of 80- or 250-millisecond round trip.
That is, it must have up to 40- or 125-millisecond latency each way.
The primary component of your RTT is the physical distance between sites. For every 1000
kilometers (621.4 miles), you observe a 5-millisecond delay each way. This delay does not
include the time that is added by equipment in the path. Every device adds a varying amount
of time, depending on the device, but a good rule is 25 microseconds for pure hardware
devices.
Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A uses a SAN FC router at each site to provide
FCIP to encapsulate the FC traffic between sites.
Now, there are seven devices and 1900 kilometers (1180.6 miles) of distance delay. All the
devices are adding 200 microseconds of delay each way. The distance adds 9.5 milliseconds
each way, for a total of 19 milliseconds. Combined with the device latency, the delay is
19.4 milliseconds of physical latency minimum, which is under the 80-millisecond limit of GM
until you realize that this number is the best case number.
The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link. Therefore, be sure to stay as far beneath the GM RTT
limit as possible. You can easily double or triple the expected physical latency with a lower
quality or lower bandwidth network link. Then, you are within the range of exceeding the limit
if high I/O occurs that exceeds the bandwidth capacity.
When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not properly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to enable you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT by using standard 64-byte ping
packets.
Effective transit time must be measured only by using packets that are large enough to hold
an FC frame, or 2148 bytes (2112 bytes of payload and 36 bytes of header). Allow some margin in your estimates because various switch vendors have optional features that might increase this size. After you verify your latency by using the proper packet
size, proceed with normal hardware troubleshooting.
Compare the amount of time, in microseconds, that is required to transmit a packet across network links of varying bandwidth capacity. The following packet sizes are used:
64 bytes: The size of the common ping packet
1500 bytes: The size of the standard TCP/IP packet
2148 bytes: The size of an FC frame
Finally, your path (MTU) affects the delay that is incurred to get a packet from one location to
another location. An MTU might cause fragmentation or be too large and cause too many
retransmits when a packet is lost.
Note: Unlike 1720 errors, 1920 errors are deliberately generated by the system because it
evaluated that a relationship can affect the host’s response time. The system has no indication about whether or when the relationship can be restarted. Therefore, the relationship cannot be restarted automatically; it must be restarted manually.
The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, check your fabric configuration for zoning
of more than one host bus adapter (HBA) port for each node per I/O group if your fabric has
more than 64 HBA ports zoned. The suggested zoning configuration for fabrics is one port for
each node per I/O group per fabric that is associated with the host.
For those fabrics with 64 or more host ports, this suggestion becomes a rule. Therefore, you
see four paths to each volume discovered on the host because each host must have at least
two FC ports from separate HBA cards, each in a separate fabric. On each fabric, each host
FC port is zoned to two IBM Spectrum Virtualize N_Ports, where each N_Port comes from a
different IBM Spectrum Virtualize node. This configuration provides four paths per volume.
More than four paths per volume are supported but not recommended.
Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer and port send delay percentage by using
IBM Spectrum Control and comparing them against your sample interval reveals potential
SAN congestion. If a zero buffer credit or port send delay percentage is more than 2% of the
total time of the sample interval, it might cause problems.
Always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences might indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
system partnership information to verify its status and settings. Then, perform diagnostics for
every piece of equipment in the path between your two IBM Spectrum Virtualize systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.
If your investigations fail to resolve your RC problems, contact your IBM Support
representative for a more complete analysis.
Ownership groups restrict access for users in the ownership group to only those objects that
are defined within that ownership group. An owned object can belong to one ownership group.
Users in an ownership group are restricted to viewing and managing objects within their
ownership group. Users that are not in an ownership group can continue to view or manage
all the objects on the system based on their defined user role, including objects within
ownership groups.
Only users with Security Administrator roles (for example, superuser) can configure and
manage ownership groups.
The system supports several resources that you assign to ownership groups:
Child pools
Volumes
Volume groups
Hosts
Host clusters
Host mappings
IBM FlashCopy mappings
FlashCopy consistency groups
An owned object can belong to only one ownership group. An owner is a user with an
ownership group that can view and manipulate objects within that group.
Before you create ownership groups and assign resources and users, review the following
guidelines:
Users can be in only one ownership group at a time (applies to both local and remotely
authenticated users).
Objects can be within at most one ownership group.
Global resources, such as drives, enclosures, and arrays, cannot be assigned to
ownership groups.
Global users that do not belong to an ownership group can view and manage (depending
on their user role) all resources on the system, including the ones that belong to an
ownership group, and users within an ownership group.
Users within an ownership group cannot have the Security Administrator role. All Security
Administrator role users are global users.
Users within an ownership group can view or change resources within the ownership
group in which they belong.
Users within an ownership group cannot change any objects outside of their ownership
group. This restriction includes global resources that are related to resources within the
ownership group. For example, a user can change a volume in the ownership group, but
not the drive that provides the storage for that volume.
Users within an ownership group cannot view or change resources if those resources are
assigned to another ownership group or are not assigned to any ownership group.
However, users within ownership groups can view and display global resources. For
example, users can display information on drives on the system because drives are a
global resource that cannot be assigned to any ownership group.
When a user group is assigned to an ownership group, the users in that user group retain
their role but are restricted to only those resources that belong to the same ownership group.
The role that is associated with a user group can define the permitted operations on the
system, and the ownership group can further limit access to individual resources. For
example, you can configure a user group with the Copy Operator role, which limits user
access to FlashCopy operations. Access to individual resources, such as a specific
FlashCopy consistency group, can be further restricted by assigning it to an ownership group.
A child pool is a key requirement for the ownership groups feature. By defining a child pool
and assigning it to an ownership group, the system administrator provides capacity for
volumes that ownership group users can create or manage.
Depending on the type of resource, the owning group for the resource can be defined
explicitly or inherited from explicitly defined objects. For example, a child pool needs an
ownership group parameter to be set by a system administrator, but volumes that are created
in that child pool automatically inherit the ownership group from a child pool. For more
information about ownership inheritance, see IBM FlashSystem 9200 documentation and
expand Product overview → Technical overview → Ownership groups.
When the user logs on to the management GUI or command-line interface (CLI), only
resources that they have access to through the ownership group are available. Additionally,
only events and commands that are related to the ownership group in which a user belongs
are viewable by those users.
After the first group is created, the window changes to ownership group mode, as shown in
Figure 11-2. The new ownership group has no user groups and no resources that are
assigned to it.
For a description of user roles, see IBM FlashSystem 9200 documentation and expand
Product overview → Technical overview → User roles.
To create volume, host, and other objects in an ownership group, users must have an
Administrator or Restricted Administrator role. Users with the Security Administrator role
cannot be assigned to an ownership group.
You may also set up a user group to use remote authentication, if it is enabled. To do so,
select the Lightweight Directory Access Protocol (LDAP) checkbox.
Note: Users that use LDAP can belong to multiple user groups, but belong to only one
ownership group that is associated with one of the user groups.
If remote authentication is not configured, you must create a user (or users) and assign it to a
created user group, as shown in Figure 11-4.
Multiple user groups with different user roles may be assigned to one ownership group. For
example, you may create and assign a user group with the Monitor role in addition to a group
with the Administrator role to have two sets of users with different privilege levels accessing
an ownership group’s resources.
When creating a child pool, specify an ownership group for it and assign a part of the parent’s
pool capacity, as shown in Figure 11-7. Ownership group objects can use only capacity that is
provisioned for them with the child pool.
Multiple child pools that are created from the same or different parent pools can be assigned
to a single ownership group.
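A minimal CLI sketch of this setup, assuming hypothetical names (Tenant1, Tenant1Admins, Pool0, and Tenant1_child) and that your code level supports the -ownershipgroup parameters that are shown, follows:
mkownershipgroup -name Tenant1
chusergrp -ownershipgroup Tenant1 Tenant1Admins
mkmdiskgrp -parentmdiskgrp Pool0 -size 500 -unit gb -ownershipgroup Tenant1 -name Tenant1_child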
After a child pool is created and assigned, the ownership group management window, which
you open by selecting Access → Ownership Groups, changes to show the assigned and
available resources, as shown in Figure 11-8.
Any volumes that are created in a child pool that is assigned to an ownership group inherit ownership from the child pool.
After a child pool and user group are assigned to an ownership group, ownership group
administrators can log in with their credentials and start creating volumes, host and host
clusters, or FlashCopy mappings. For more information about creating those objects, see
Chapter 6, “Volumes” on page 299, Chapter 7, “Hosts” on page 405, and Chapter 10,
“Advanced Copy Services” on page 553.
Although an ownership group administrator can create objects only within the resources that
are assigned to them, the system administrator can create, monitor, and assign objects for
any ownership group.
The global system administrator can see and manage the resources of all ownership groups
and resources that are not assigned to any groups.
When the ownership group user logs in, they can see and manage only resources that are
assigned to their group. Figure 11-11 shows the initial login window for an ownership group
user with the Administrator role.
This user does not see a dashboard with global system performance and capacity
parameters, but instead can see only tiles for their existing ownership group resources. Out of
eight volumes that are configured on a system and shown in Figure 11-10, they can see and
manage only three volumes that belong to the group.
The ownership group user can use the GUI to browse, create, and delete (depending on their
user role) resources that are assigned to their group. To see information about the global
resources (for example, list managed disks (MDisks) or arrays on the pool), they must use the
CLI. Ownership group users cannot manage global resources, but can only view them.
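For example, an ownership group user with the Administrator role can log in to the CLI with their own credentials and run the standard informational commands that follow; the output that the user sees is limited by what the system allows for their ownership group:
lsmdiskgrp
lsmdisk
lsvdisk
Attempts by that user to change global resources (for example, arrays or MDisks) are rejected by the system.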
When an ownership group is removed by using the GUI, all ownership assignment
information for all the objects of the ownership group is removed, but the objects remain
configured. Only the system administrator can manage those resources afterward.
If child pools exist on the system, you can assign an ownership group to the child pool or child
pools. Before you assign an ownership group to existing child pools, determine which other
related objects you want to migrate. Any volumes that are currently in the child pool inherit the
ownership group that is assigned to the child pool.
If no child pools are on the system, you must create child pools and move any volumes to
those child pools before you can assign them to ownership groups. If volumes currently are in
a parent pool, volume mirroring can be used to create copies of the volume within the child
pool. Alternatively, volume migration can be used to relocate a volume from a parent pool to a
child pool within that parent pool without requiring copying.
3. Repeat step 2 on page 731 for all volumes that must belong to an ownership group, and
then remove the source copies.
4. Create an ownership group as described in 11.2.1, “Creating an ownership group” on
page 726. Assign a user group to it, as described in 11.2.2, “Assigning users to an
ownership group” on page 726.
5. As shown in Figure 11-15, in Access → Ownership Groups, select the wanted
ownership group and click Assign Child Pool.
After you click Next, the system notifies you that more resources will inherit ownership from
the volume: because the volume is mapped to a host, the host also becomes an ownership
group object, as shown in Figure 11-17 on page 733.
Figure 11-17 Additional Resources to add
6. As shown in Figure 11-18, the volume and the host both belong to the ownership group.
Because the host and the volume are in the group, the host mapping inherits ownership and
becomes a part of the ownership group too.
Now, a child pool is assigned to an ownership group. If you must migrate more volumes to the
child pool later, the same approach can be used. However, during migration one volume copy
is in an owned child pool, and the original copy remains in an unowned parent pool. Such a
condition causes inconsistent ownership, as shown in Figure 11-19.
Until the inconsistent volume ownership is resolved, the volume does not belong to an
ownership group and cannot be seen or managed by an ownership group administrator. To
resolve it, delete one of the copies after both are synchronized.
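The same migration can be driven from the CLI by using volume mirroring. The volume and pool names here are hypothetical examples:
addvdiskcopy -mdiskgrp Tenant1Pool vol0
lsvdisksyncprogress vol0
rmvdiskcopy -copy 0 vol0
The addvdiskcopy command creates a second copy of vol0 in the owned child pool, lsvdisksyncprogress reports when the copies are synchronized, and rmvdiskcopy deletes the original copy (copy ID 0 in this example) to resolve the inconsistent ownership.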
A key per pool (with different keys allowed for child pools) supports part of the multi-tenant
use case: if you delete a pool, you delete its key and cryptographically erase the data.
However, all the keys are wrapped and protected by a single master key that is obtained from
either a USB flash drive or an external key server.
As a special case, you can turn off encryption for individual MDisks within the storage pool,
which means that if an external storage controller supports encryption, you can allow it to
encrypt the data instead.
You can migrate volumes from a non-encrypted storage pool to an encrypted storage pool, or
you can add an encrypted array to a storage pool and then delete the unencrypted array
(which migrates all the data automatically) as a way of encrypting data.
A storage pool can include a mixture of two or all three types of storage. In this case, the SAS
and NVMe internal storage use a key per RAID array for encryption, and the externally
virtualized storage uses the pool level key. Because it is almost impossible to control exactly
what storage is used for each volume, from a security viewpoint you effectively have a single
key for the whole pool, and a cryptographic erase is possible only by deleting the entire
storage pool and arrays.
12.2 Planning for encryption
Data-at-rest encryption is a powerful tool that can help organizations protect the
confidentiality of sensitive information. However, encryption, like any other tool, must be used
correctly to fulfill its purpose.
Multiple drivers exist for an organization to implement data-at-rest encryption. These drivers
can be internal, such as protection of confidential company data and ease of storage
sanitization, or external, such as compliance with legal requirements or contractual
obligations.
Therefore, before configuring encryption on the storage, the organization must define its needs
and, if it decides that data-at-rest encryption is required, include it in its security policy.
Without defining the purpose of the particular implementation of data-at-rest encryption, it is
difficult or impossible to choose the best approach to implement encryption and to verify
whether the implementation meets the set goals.
The following items are worth considering during the design of a solution that includes
data-at-rest encryption:
Legal requirements
Contractual obligations
Organization's security policy
Attack vectors
Expected resources of an attacker
Encryption key management
Physical security
Another document that should be consulted when planning data-at-rest encryption is the
organization’s security policy.
The outcome of a data-at-rest encryption planning session answers the following questions:
1. What are the goals that the organization wants to realize by using data-at-rest encryption?
2. How will data-at-rest encryption be implemented?
3. How can it be demonstrated that the proposed solution realizes the set of goals?
The encryption of system data and metadata is not required, so they are not encrypted.
The method that is used for encryption is chosen automatically by the system based on the
placement of the data:
Hardware encryption: Data is encrypted by using SAS hardware or self-encrypting drives.
For example, if IBM FlashCore Module (FCM) drives are present in the system,
hardware-based data compression and self-encryption are used. Hardware encryption is
used only for internal storage (drives).
Software encryption: Data is encrypted by using the node’s CPU (the encryption code
uses the AES-NI CPU instruction set). Software encryption is used only for external storage.
Note: Software encryption is available in IBM Spectrum Virtualize V7.6 and later.
Both methods of encryption use the same encryption algorithm, key management
infrastructure, and license.
Note: The design for encryption is based on the concept that a system is encrypted or not
encrypted. Encryption implementation is intended to encourage solutions that contain only
encrypted volumes or only unencrypted volumes. For example, after encryption is enabled
on the system, all new objects (for example, pools) are by default created as encrypted.
Data is encrypted or decrypted when it is written to or read from internal drives (hardware
encryption) or external storage systems (software encryption).
So, data is encrypted when transferred across the storage area network (SAN) only between
IBM Spectrum Virtualize systems and external storage. Data in transit is not encrypted when
transferred on SAN interfaces under the following circumstances:
Server-to-storage data transfer
Remote Copy (RC) (for example, Global Mirror or Metro Mirror (MM))
Intracluster (node-to-node) communication
Note: Only data-at-rest is encrypted. Host to storage communication and data that is sent
over links that are used for Remote Mirroring are not encrypted.
Figure 12-1 shows an encryption example. Encrypted disks and encrypted data paths are
marked in blue. Unencrypted disks and data paths are marked in red. The server sends
unencrypted data to an SVC 2145-DH8 system, which stores hardware-encrypted data on
internal disks. The data is mirrored to a remote Storwize V7000 Gen1 system by using RC.
The data flowing through the RC link is not encrypted. Because the Storwize V7000 Gen1
(2076-324) system cannot perform any encryption activities, data on the Storwize V7000
Gen1 is not encrypted.
Figure 12-1 Encryption example (diagram): server, 2145-DH8 with SAS hardware encryption and
2145-24F enclosures, Remote Copy to an unencrypted 2076-324 with 2076-224 enclosures
To enable encryption of both data copies, the Storwize V7000 Gen1 system must be replaced
by an encryption capable (with optional encryption enabled) IBM Spectrum Virtualize system,
as shown in Figure 12-2. After the replacement, both copies of data are encrypted, but the
RC communication between both sites remains unencrypted.
Figure 12-2 Encryption on both sites (diagram): server, 2145-DH8 with SAS hardware encryption and
2145-24F enclosures, Remote Copy to a 2076-524 with 2076-24F enclosures
Figure 12-3 Software encryption example (diagram): server, 2145-SV1 with software encryption of
FC-attached 2076-324 external storage and SAS hardware encryption of 2145-24F enclosures
The placement of hardware encryption and software encryption in the IBM Spectrum
Virtualize code stack is shown in Figure 12-4. Because compression is performed before
encryption, the benefits of compression are preserved for the encrypted data.
Figure 12-4 Encryption placement in the IBM Spectrum Virtualize Software stack (with IBM Real-time
Compression Appliance)
Each volume copy can use a different encryption method (hardware or software). A volume
can also have copies with different encryption statuses (encrypted versus unencrypted). The
encryption method depends only on the pool that is used for the specific copy. You can
migrate data between different encryption methods by using volume migration or volume
mirroring.
If you add a control enclosure to a system that has encryption that is enabled, the control
enclosure must also be licensed.
No trial license for encryption exists because when the trial runs out, the access to the data is
lost. Therefore, you must purchase an encryption license before you activate encryption.
Licenses are generated by IBM Data Storage Feature Activation (DSFA) based on the serial
number (S/N) and the machine type and model (MTM) of the control enclosure.
You can activate an encryption license during the initial system setup (on the Encryption
window of the initial setup wizard) or later on in the running environment.
Both methods are available during the initial system setup and when the system is in use.
12.4.1 Obtaining an encryption license
You must purchase an encryption license before you activate encryption. If you did not
purchase a license, contact an IBM marketing representative or IBM Business Partner to
purchase an encryption license.
When you purchase a license, you receive a function authorization document with an
authorization code that is printed on it. With this code, you can proceed with the automatic
activation process.
If the automatic activation process fails or if you prefer to use the manual activation process,
see IBM Data Storage Feature Activation to retrieve your license keys.
For more information about how to retrieve the machine signature of a control enclosure, see
12.4.5, “Activating the license manually” on page 750.
12.4.2 Starting the activation process during the initial system setup
One of the steps in the initial setup enables the encryption license activation. The system
asks “Was the encryption feature purchased for this system?”. To activate encryption at
this stage, complete the following steps:
1. Select Yes, as shown in Figure 12-5.
Figure 12-6 Information about the storage system during the initial system setup
2. Right-click the control enclosure to open a menu with two license activation options
(Activate License Automatically and Activate License Manually), as shown in
Figure 12-7. Use either option to activate encryption. For more information about how to
complete the automatic activation process, see 12.4.4, “Activating the license
automatically” on page 747. For more information about how to complete a manual
activation process, see 12.4.5, “Activating the license manually” on page 750.
3. After either activation process is complete, you can see a green check mark in the column
that is labeled Licensed next to a control enclosure for which the license was enabled.
You can proceed with the initial system setup by clicking Next, as shown in Figure 12-8.
Note: Every enclosure needs an active encryption license before you can enable
encryption on the system. Attempting to add a non-licensed enclosure to an
encryption-enabled system fails.
Figure 12-8 Successful encryption license activation during the initial system setup
Figure 12-9 Expanding the Encryption Licenses section on the Licensed Functions window
3. The Encryption Licenses window displays information about your control enclosures.
Right-click the enclosure on which you want to install an encryption license. This action
opens a menu with two license activation options (Activate License Automatically and
Activate License Manually), as shown in Figure 12-10. Use either option to activate
encryption. For more information about how to complete an automatic activation process,
see 12.4.4, “Activating the license automatically” on page 747. For more information about
how to complete a manual activation process, see 12.4.5, “Activating the license
manually” on page 750.
Figure 12-10 Selecting the Control Enclosure on which you want to enable the encryption
After either activation process is complete, you can see a green check mark in the column
that is labeled Licensed for the control enclosure, as shown in Figure 12-11.
Important: To perform this operation, the PC that was used to connect to the GUI and
activate the license must connect to the internet.
To activate the encryption license for a control enclosure automatically, complete the following
steps:
1. Click Activate License Automatically to open the Activate License Automatically
window, as shown in Figure 12-12.
The system connects to IBM to verify the authorization code and retrieve the license key.
Figure 12-14 shows a window that is displayed during this connection. If everything works
correctly, the procedure takes less than a minute.
After the license key is retrieved, it is automatically applied, as shown in Figure 12-15 on
page 749.
Figure 12-15 Successful encryption license activation
Check whether the PC that is used to connect to the IBM FlashSystem GUI and activate the
license can access the internet. If you cannot complete the automatic activation procedure,
use the manual activation procedure that is described in 12.4.5, “Activating the license
manually” on page 750.
Although authorization codes and encryption license keys use the same format (four groups
of four hexadecimal digits), you can use each of them only in the appropriate activation
process. If you use a license key when the system expects an authorization code, the system
displays an error message.
2. If you have not done so, obtain the encryption license for the control enclosure. The
information that is required to obtain the encryption license is displayed in the Manual
Activation window. Use this data to follow the instructions in 12.4.1, “Obtaining an
encryption license” on page 743.
3. You can enter the license key by typing it, pasting it, or clicking the folder icon and
uploading the license key file (downloaded from DSFA) to the storage system. In
Figure 12-18, the sample key is entered. Click Activate.
After the task completes successfully, the GUI shows that encryption is licensed for the
specified control enclosure, as shown in Figure 12-19.
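The license can also be activated from the CLI. The key value that is shown is only a placeholder, and the activatefeature and lsfeature commands are assumed to be available at your code level:
activatefeature -licensekey 0123-4567-89AB-CDEF
lsfeature
After the activation, the lsfeature output should show the encryption feature as licensed for the control enclosure.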
For a list of supported key servers, see Supported Key Servers - IBM Spectrum Virtualize.
IBM Spectrum Virtualize V8.1 introduced the ability to define up to four encryption key
servers, which is a preferred configuration because it increases key provider availability. In
this version, support for the simultaneous use of both USB flash drives and key servers was
added.
Organizations that use encryption key management servers might consider parallel use of
USB flash drives as a backup solution. During normal operation, such drives can be
disconnected and stored in a secure location. However, during a catastrophic loss of
encryption servers, the USB drives can still be used to unlock the encrypted storage.
The key server and USB flash drive characteristics that are described next might help you to
choose the type of encryption key provider that you want to use.
Important: Maintaining confidentiality of the encrypted data hinges on the security of the
encryption keys. Pay special attention to ensure secure creation, management, and
storage of the encryption keys.
You can select Settings → Security → Encryption, and then click Enable Encryption, as
shown in Figure 12-22.
The Enable Encryption wizard starts by prompting you to select the encryption key provider to
use for storing the encryption keys, as shown in Figure 12-23 on page 755. You can enable
either or both providers.
Figure 12-23 Enable Encryption wizard Welcome window
The next section presents a scenario in which both encryption key providers are enabled
concurrently.
For more information about how to enable encryption by using only USB flash drives, see
12.5.2, “Enabling encryption by using USB flash drives” on page 755.
For more information about how to enable encryption by using key servers as the sole
encryption key provider, see 12.5.3, “Enabling encryption by using key servers” on page 759.
Note: The system needs at least three USB flash drives before you can enable encryption
by using this encryption key provider. IBM USB flash drives are preferred and can be
ordered from IBM by using the feature code for Encryption USB Flash Drives (Four Pack).
Other flash drives might also work. You can use any USB ports in any node of the cluster.
Using USB flash drives as the encryption key provider requires a minimum of three USB flash
drives to store the generated encryption keys. Because the system attempts to write the
encryption keys to any USB flash drive that is inserted into a node’s USB port, it is critical to
maintain physical security of the system during this procedure.
While the system enables encryption, you are prompted to insert USB flash drives into the
system. The system generates and copies the encryption keys to all available USB flash
drives.
If your system is in a secure location with controlled access, one USB flash drive for each
canister can remain inserted in the system. If a risk of unauthorized access exists, all USB
flash drives with the master access keys must be removed from the system and stored in a
secure place.
Securely store all copies of the encryption key. For example, any USB flash drives that are
holding an encryption key copy that are not left plugged into the system can be locked in a
safe. Similar precautions must be taken to protect any other copies of the encryption key that
are stored on other media.
Notes: Generally, create at least one extra copy on another USB flash drive for storage in
a secure location. You can also copy the encryption key from the USB drive and store the
data on other media, which can provide extra resilience and mitigate risk that the USB
drives used to store the encryption key come from a faulty batch.
Every encryption key copy must be stored securely to maintain confidentiality of the
encrypted data.
A minimum of one USB flash drive with the correct master access key is required to unlock
access to encrypted data after a system restart, such as a system-wide restart or power loss.
No USB flash drive is required during a warm restart, such as a node exiting service mode or a
single node restart. The data center power-on procedure must ensure that USB flash drives
containing encryption keys are plugged into the storage system before it is powered on.
During power-on, insert the USB flash drives into the USB ports on two supported canisters
to safeguard against failure of a node, node’s USB port, or USB flash drive during the
power-on procedure.
To enable encryption by using USB flash drives as the only encryption key provider, complete
the following steps:
1. In the Enable Encryption wizard Welcome tab, select USB flash drives and click Next, as
shown in Figure 12-24 on page 757.
Figure 12-24 Selecting USB flash drives in the Enable Encryption wizard
2. If there are fewer than three USB flash drives that are inserted into the system, you are
prompted to insert more drives. The system reports how many more drives must be
inserted.
Note: The Next option remains disabled until at least three USB flash drives are
inserted and the system detects them.
3. Insert the USB flash drives into the USB ports as requested.
After the minimum required number of drives is detected, the encryption keys are
automatically copied onto the USB flash drives, as shown in Figure 12-25.
Figure 12-25 Writing the master access key to USB flash drives
You receive a message confirming that the encryption is now enabled on the system, as
shown in Figure 12-27.
5. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption, as shown in Figure 12-28.
Figure 12-28 Encryption view that uses USB flash drives as the enabled provider
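The same enablement can be performed from the CLI. This is a minimal sketch that assumes the chencryption and lsencryption command syntax that is documented for recent code levels, with at least three USB flash drives inserted:
chencryption -usb enable
chencryption -usb newkey -key prepare
chencryption -usb newkey -key commit
lsencryption
The prepare step generates the keys and writes them to the inserted USB flash drives, the commit step activates the new master access key, and lsencryption reports the status of the enabled providers.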
IBM Spectrum Virtualize supports the following key servers as encryption key providers:
IBM Security Key Lifecycle Manager
Gemalto SafeNet KeySecure
Note: Support for IBM Security Key Lifecycle Manager was introduced in IBM Spectrum
Virtualize V7.8. Support for Gemalto SafeNet KeySecure was introduced in IBM Spectrum
Virtualize V8.2.1.
IBM Security Key Lifecycle Manager and SafeNet KeySecure support the Key Management
Interoperability Protocol (KMIP), which is a standard for the management of cryptographic keys.
Note: Make sure that the key management server function is fully independent from the
encrypted storage that has encryption that is managed by this key server environment.
Failure to observe this requirement might create an encryption deadlock. An encryption
deadlock is a situation in which none of key servers in the environment can become
operational because some critical part of the data in each server is stored on a storage
system that depends on one of the key servers to unlock access to the data.
IBM Spectrum Virtualize V8.1 and later supports up to four key server objects that are defined
in parallel. However, only one key server type (IBM Security Key Lifecycle Manager or
KeySecure) can be enabled at a time.
Another characteristic when working with key servers is that it is not possible to migrate from
one key server type directly to another. If you want to migrate from one type to another, you
first must migrate from your current key server to USB encryption, and then migrate from USB
to the other type of key server.
For more information about completing these tasks, see IBM Documentation.
Access to the key server that stores the correct master access key is required to enable
access to encrypted data in the system after a system restart. A system restart might be a
system-wide restart or power loss. Access to the key server is not required during a warm
restart, such as a node exiting service mode or a single node restart. The data center power-on
procedure must ensure key server availability before the storage system that uses encryption
starts. If a system with encrypted data restarts and does not have access to the encryption
keys, then the encrypted storage pools are offline until the encryption keys are detected.
To enable encryption by using an IBM Security Key Lifecycle Manager key server, complete
the following steps:
1. Ensure that service IP addresses are configured on all your nodes.
2. In the Enable Encryption wizard Welcome tab, select Key servers and click Next, as
shown in Figure 12-29.
Figure 12-29 Selecting the key server as the only provider in the Enable Encryption wizard
3. Select IBM SKLM (with KMIP) as the key server type, as shown in Figure 12-30.
Figure 12-30 Selecting IBM Security Key Lifecycle Manager as the key server type
4. The wizard opens the Key Servers tab, as shown in Figure 12-31 on page 762. Enter the
name and Internet Protocol (IP) address of the key servers. The first key server that is
specified must be the primary IBM Security Key Lifecycle Manager key server.
Note: The supported versions of IBM Security Key Lifecycle Manager (up to Version
4.0, which was the latest code version that was available at the time of writing)
differentiate between the primary and secondary key server role. The primary
IBM Security Key Lifecycle Manager server as defined on the Key Servers window of
the Enable Encryption wizard must be the server that is defined as the primary by
IBM Security Key Lifecycle Manager administrators.
The key server name serves only as a label. Only the provided IP address is used to
contact the server. If the key server’s TCP port number differs from the default value for
the KMIP protocol (that is, 5696), enter the port number.
Figure 12-31 Configuring the primary IBM Security Key Lifecycle Manager server
5. If you want to add secondary IBM Security Key Lifecycle Manager servers, click the +
symbol and enter the data for the secondary IBM Security Key Lifecycle Manager servers,
as shown in Figure 12-32. You can define up to three extra IBM Security Key Lifecycle
Manager servers. Click Next when you are done.
Figure 12-32 Configuring multiple IBM Security Key Lifecycle Manager servers
6. The next window in the wizard is a reminder that the Spectrum_VIRT device group that is
dedicated for IBM Spectrum Virtualize systems must exist on the IBM Security Key
Lifecycle Manager key servers. Make sure that this device group exists and click Next to
continue, as shown in Figure 12-33.
7. Enable secure communication between the IBM Spectrum Virtualize system and the
IBM Security Key Lifecycle Manager key servers by uploading the key server certificate
from a trusted third-party certificate authority (CA) or by using a self-signed certificate.
The self-signed certificate can be obtained from each of the key servers directly.
8. Configure the IBM Security Key Lifecycle Manager key server to trust the public key
certificate of the IBM Spectrum Virtualize system. You can download the IBM Spectrum
Virtualize system public SSL certificate by clicking Export Public Key, as shown in
Figure 12-35. Install this certificate in the IBM Security Key Lifecycle Manager key server
in the Spectrum_VIRT device group.
9. When the IBM Spectrum Virtualize system public key certificate is installed on the
IBM Security Key Lifecycle Manager key servers, acknowledge this installation by clicking
the checkbox below the Export Public Key button and click Next.
10.The key server configuration is shown in the Summary tab, as shown in Figure 12-36.
Click Finish to create the key server object and finalize the encryption enablement.
Figure 12-36 Finishing the enablement of encryption by using IBM Security Key Lifecycle Manager
key servers
11.If no errors occur while the key server object is created, you receive a message that
confirms that the encryption is now enabled on the system. Click Close.
12.Confirm that encryption is enabled by selecting Settings → Security → Encryption, as
shown in Figure 12-37. The Online state indicates which IBM Security Key Lifecycle
Manager servers are detected as available by the system.
Figure 12-37 Encryption that is enabled with only IBM Security Key Lifecycle Manager servers as
encryption key providers
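The key server configuration can also be verified from the CLI. The following informational commands are assumed to be available at your code level:
lskeyserver
lsencryption
The lskeyserver command lists all defined key server objects with their IP addresses and online status, and lsencryption shows whether the key server provider is enabled.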
IBM Spectrum Virtualize supports Gemalto SafeNet KeySecure V8.3.0 and later, and uses
only the KMIP protocol. It is possible to configure up to four SafeNet KeySecure servers in
IBM Spectrum Virtualize for redundancy, and they can coexist with USB flash drive
encryption.
It is not possible to have both SafeNet KeySecure and IBM Security Key Lifecycle Manager
key servers that are configured concurrently in IBM Spectrum Virtualize. It is also not possible
to migrate directly from one type of key server to another (from IBM Security Key Lifecycle
Manager to SafeNet KeySecure or vice versa). If you want to migrate from one type to
another, first migrate to USB flash drives encryption, and then migrate to the other type of key
servers.
KeySecure uses an active-active clustered model. All changes to one key server are instantly
propagated to all other servers in the cluster.
Although KeySecure uses the KMIP protocol like IBM Security Key Lifecycle Manager does,
an option is available to configure the username and password for IBM Spectrum Virtualize
and KeySecure server authentication, which is not possible when the configuration is
performed with IBM Security Key Lifecycle Manager.
The certificate for client authentication in SafeNet KeySecure can be self-signed or signed by
a CA.
To enable encryption in IBM Spectrum Virtualize by using a Gemalto SafeNet KeySecure key
server, complete the following steps:
1. Ensure that the service IP addresses are configured on all your nodes.
2. In the Enable Encryption wizard Welcome tab, select Key servers and click Next, as
shown in Figure 12-38 on page 767.
Figure 12-38 Selecting key servers as the only provider in the Enable Encryption wizard
3. In the next window, you can choose between the IBM Security Key Lifecycle Manager and
Gemalto SafeNet KeySecure server types, as shown in Figure 12-39. Select Gemalto
SafeNet KeySecure and click Next.
Figure 12-39 Selecting Gemalto SafeNet KeySecure as the key server type
5. The next window in the wizard prompts for the key servers’ credentials (username and
password), as shown in Figure 12-41 on page 769. This setting is optional because it
depends on how the SafeNet KeySecure servers are configured.
Figure 12-41 Key server credentials input (optional)
6. Enable secure communication between the IBM Spectrum Virtualize system and the
SafeNet KeySecure key servers by uploading the key server certificate from a trusted
third-party CA or by using a self-signed certificate. The self-signed certificate can be
obtained from each of the key servers directly. After uploading any of the certificates in the
window that is shown in Figure 12-42, click Next.
8. The key server configuration is shown in the Summary tab, as shown in Figure 12-44.
Click Finish to create the key server object and finalize the encryption enablement.
Figure 12-44 Finishing the enablement of encryption by using SafeNet KeySecure key servers
9. If no errors occurred while creating the key server object, you receive a message that
confirms that the encryption is now enabled on the system. Click Close.
10.Confirm that encryption is enabled by selecting Settings → Security → Encryption, as
shown in Figure 12-45. Check whether the four servers are shown as online, which
indicates that all four SafeNet KeySecure servers are detected as available by the system.
Figure 12-45 Encryption that is enabled with four SafeNet KeySecure key servers
Note: Make sure that the key management server function is fully independent from
encrypted storage that has encryption that is managed by this key server environment.
Failure to observe this requirement might create an encryption deadlock. An encryption
deadlock is a situation in which none of the key servers in the environment can become
operational because some critical part of the data in each server is stored on an encrypted
storage system that depends on one of the key servers to unlock access to the data.
IBM Spectrum Virtualize V8.1 and later supports up to four key server objects that are defined
in parallel.
Before you enable encryption by using both USB flash drives and key servers, confirm the
requirements that are described in 12.5.2, “Enabling encryption by using USB flash drives” on
page 755 and 12.5.3, “Enabling encryption by using key servers” on page 759.
Figure 12-46 Selecting key servers and USB flash drives in the Enable Encryption wizard
3. The wizard opens the Key Server Types window, as shown in Figure 12-47. Select the key
server type that manages the encryption keys.
The next actions are the same as those described in 12.5.3, “Enabling encryption by
using key servers” on page 759, depending on the type of key server that is selected.
When the key server details are entered, the USB flash drive encryption configuration is
displayed. In this step, copies of the master encryption key are stored on the USB flash
drives. If fewer than three drives are detected, the system requests that you plug in more
USB flash drives. You cannot proceed until the required minimum number of USB flash
drives is detected by the system.
After at least three USB flash drives are detected, the system writes the master access
key to each of the drives, as shown in Figure 12-48. The system attempts to write the
encryption key to any flash drive that it detects. Therefore, it is crucial to maintain the
physical security of the system during this procedure.
Figure 12-48 Master access key being written to the USB flash drives
4. After copying the encryption keys to USB flash drives, a window opens and shows a
summary of the configuration that is implemented on the system. Click Finish to create
the key server object and finalize the encryption enablement.
If no errors occur while creating the key server object, the system displays a window that
confirms that the encryption is now enabled on the system and that both encryption key
providers are enabled.
5. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption. Note the Online state of the key servers and
the Validated state of the USB ports where USB flash drives are inserted to make sure that
they are configured properly.
Note: If you set up encryption of your storage system when it was running a version of IBM
Spectrum Virtualize earlier than Version 7.8.0, you must rekey the master encryption key
before you can enable a second encryption provider when you upgrade to Version 8.1 or
later.
2. Complete the steps that are required to configure the key server provider, as described in
12.5.3, “Enabling encryption by using key servers” on page 759. The difference in the
process that is described in that section is that the wizard gives you an option to disable
USB flash drive encryption, which is intended for migrating from the USB flash drive
provider to the key server provider.
Select No to enable both encryption key providers, as shown in Figure 12-50 on page 775.
Figure 12-50 Do not disable the USB flash drive encryption key provider
This choice is confirmed on the summary window before the configuration is committed,
as shown in Figure 12-51.
3. After you click Finish, the system configures the keys servers as a second encryption key
provider. Successful completion of the task is confirmed by a message. Click Close.
Figure 12-52 Encryption that is enabled with two key providers available
Figure 12-53 Encryption that is enabled with two key providers that are available
If you want to migrate from one key server type to another (for example, migrating from
IBM Security Key Lifecycle Manager to SafeNet KeySecure or vice versa), direct migration is
not possible. In this case, you first must migrate from the current key server type to a USB
flash drive, and then migrate to the other type of key server.
Figure 12-54 Disabling the USB flash drive provider while changing to the IBM Security Key Lifecycle
Manager provider
12.7.2 Changing from an encryption key server to a USB flash drive provider
Changing from an encryption key server provider to a USB flash drive provider is not
possible by using the GUI alone.
To change providers, first add USB flash drives as a second provider by completing the steps
that are described in 12.6.2, “Adding USB flash drives as a second provider” on page 776.
After you make sure that the USB flash drives contain the correct master access key, disable the
encryption key server provider by running the following command:
chencryption -keyserver disable
This command disables the encryption key server provider, which effectively migrates your
system from an encryption key server to a USB flash drive provider.
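After you run the command, you can verify the provider status from the CLI. The lsencryption command is assumed to be available at your code level:
lsencryption
The output should show the USB flash drive provider as enabled and the key server provider as disabled.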
12.7.3 Migrating between different key server types
The migration between different key server types cannot be performed directly from one type
of key server to another. USB flash drive encryption must be used to facilitate this task.
If you want to migrate from one type of key server to another, you first must migrate from your
current key servers to USB encryption, and then migrate from USB to the other type of key
servers.
The procedure to migrate from one key server type to another is shown here. In this example,
we migrate an IBM Spectrum Virtualize system that is configured with IBM Security Key
Lifecycle Manager key server (as shown in Figure 12-55) to SafeNet KeySecure servers.
Figure 12-55 IBM Spectrum Virtualize encryption that is configured with IBM Security Key Lifecycle
Manager servers
Figure 12-56 IBM FlashSystem encryption that is configured with USB flash drives
Figure 12-57 IBM FlashSystem encryption that is configured with SafeNet KeySecure
If you lose access to the encryption key server provider, run the following command:
chencryption -keyserver disable
If you lose access to the USB flash drives provider, run the following command:
chencryption -usb disable
If you want to restore the configuration with both encryption key providers, follow the
instructions that are described in 12.6, “Configuring more providers” on page 774.
Note: If you lose access to all encryption key providers that are defined in the system, no
method is available to recover access to the data that is protected by the master access
key.
12.9 Using encryption
The design for encryption is based on the concept that a system is fully encrypted or not
encrypted. Encryption implementation is intended to encourage solutions that contain only
encrypted volumes or only unencrypted volumes. For example, after encryption is enabled on
the system, all new objects (for example, pools) are by default created as encrypted.
Some unsupported configurations are actively policed in code. For example, no support exists
for creating unencrypted child pools from encrypted parent pools. However, exceptions exist:
During the migration of volumes from unencrypted to encrypted volumes, a system might
report both encrypted and unencrypted volumes.
It is possible to create unencrypted arrays from the CLI by manually overriding the default
encryption setting.
Notes: Encryption support for distributed redundant array of independent disks (DRAID) is
available in IBM Spectrum Virtualize V7.7 and later.
You must decide whether to encrypt or not encrypt an object when it is created. You cannot
change this setting later. To change the encryption state of stored data, you must migrate
from an encrypted object (for example, a pool) to an unencrypted one, or vice versa.
Volume migration is the only way to encrypt any volumes that were created before enabling
encryption on the system.
You can click Create to create an encrypted pool. All storage that is added to this pool is
encrypted.
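The CLI equivalent is a minimal sketch; the pool name and extent size are hypothetical examples, and the -encrypt parameter of mkmdiskgrp is assumed to be available when encryption is enabled:
mkmdiskgrp -name EncPool0 -ext 1024 -encrypt yes
All arrays and MDisks that are added to EncPool0 are then expected to hold only encrypted data.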
If you create an unencrypted pool but add only encrypted arrays or self-encrypting MDisks to
the pool, the pool is reported as encrypted because all extents in the pool are encrypted. The
pool reverts to the unencrypted state if you add an unencrypted array or MDisk. By default, if
encryption is enabled on the storage, newly added internal MDisks (arrays) are created
encrypted, and the pool is reported as encrypted unless any unencrypted MDisks are in
the pool.
You can mix and match storage encryption types in a pool. Figure 12-60 on page 783 shows
an example of an encrypted pool that contains storage by using different encryption methods.
Figure 12-60 Mixing and matching encryption in a pool
However, if you want to create encrypted child pools from an unencrypted storage pool that
contains a mix of internal arrays and external MDisks, the following restrictions apply:
The parent pool must not contain any unencrypted internal arrays. If any unencrypted
internal arrays are in the unencrypted pool, when you try to create a child pool and select
the option to set it as encrypted, it is created as unencrypted.
All IBM FlashSystem Control Enclosures in the system must support software encryption
and have the encryption license activated.
Example 12-1 Creating an unencrypted array by using the CLI with IBM FlashSystem
IBM_SAN:ITSO-V7k:superuser>svctask mkarray -drive 6:4 -level raid1 -sparegoal 0
-strip 256 -encrypt no Pool2
MDisk, id [2], successfully created
IBM_SAN:ITSO-V7k:superuser>
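As a counterpart to Example 12-1, the following sketch creates an encrypted distributed array. The drive class, drive count, and pool name are hypothetical examples, and the -encrypt parameter of mkdistributedarray is assumed to be available at your code level:
svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 8 -encrypt yes Pool2
If encryption is enabled on the system, new arrays are created encrypted by default, so the -encrypt yes parameter can normally be omitted.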
You can customize the MDisks by Pools view to show the array encryption status. Select
Pools → MDisk by Pools, and then select Actions → Customize Columns → Encryption.
You also can right-click the table header to customize columns and select Encryption, as
shown in Figure 12-62.
You can also check the encryption state of an array by reviewing its drives by selecting
Pools → Internal Storage. The internal drives that are associated with an encrypted array
are assigned an encrypted property that you can view, as shown in Figure 12-63 on
page 785.
Figure 12-63 Drive encryption state
The user interface gives no method to see which extents contain encrypted data and which
do not. However, if a volume is created in a correctly configured encrypted pool, all data that
is written to this volume is encrypted.
You can use the MDisk by Pools view to view the object encryption state by selecting
Pools → MDisk by Pools. Figure 12-64 shows an example in which a self-encrypting MDisk
is in an unencrypted pool, where the pool is reported as unencrypted.
When working with MDisk encryption, take extra care when configuring the MDisks and
pools.
If the MDisk was earlier used for storage of unencrypted data, the extents can contain stale
unencrypted data. This issue occurs because file deletion marks disk space only as free. The
data is not removed from the storage. Therefore, if the MDisk is not self-encrypting and was a
part of an unencrypted pool and later was moved to an encrypted pool, the MDisk still
contains stale data from its previous state.
However, all data that is written to any MDisk that is a part of a correctly configured encrypted
storage pool is encrypted.
IBM Spectrum Virtualize products can detect that an MDisk is self-encrypting by using the
SCSI Inquiry page C2. MDisks that are provided by other IBM Spectrum Virtualize products
report this page correctly. The Externally encrypted checkbox is selected for those MDisks.
Note: You can override the external encryption setting of a detected MDisk as
self-encrypting and configure it as unencrypted by running chmdisk -encrypt no. However,
run this command only if you plan to decrypt the data on the back end or if the back end
uses inadequate data encryption.
To check whether an MDisk was declared as self-encrypting, select Pools → MDisk by
Pools and verify the information in the Encryption column, as shown in Figure 12-66.
The value in the Encryption column shows the property of the object in its row. In the
configuration that is shown in Figure 12-66, Pool1 is encrypted, so every volume that is
created from this pool is encrypted. However, that pool is formed by two MDisks, of which
one is self-encrypting and one is not. Therefore, a value of No next to mdisk7 does not imply
that the encryption of Pool1 is in any way compromised. It indicates only that the data that is
placed on mdisk7 is encrypted by using software encryption, whereas data that is placed on
mdisk3 is encrypted by the back-end storage that provides that MDisk.
Note: You can change the self-encrypting attribute of an MDisk that is unmanaged or a
member of an unencrypted pool. However, you cannot change the self-encrypting attribute
of an MDisk after it is added to an encrypted pool.
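The same information is available from the CLI. The MDisk name is an example, and the encrypt field of the detailed lsmdisk view, together with the chmdisk -encrypt parameter that is mentioned in the note above, is assumed to behave as described in the product documentation:
lsmdisk mdisk3
chmdisk -encrypt no mdisk3
The detailed lsmdisk view reports the encrypt attribute of the MDisk, and chmdisk can override the self-encrypting declaration for an MDisk that is unmanaged or in an unencrypted pool.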
You can modify the Volumes view to show whether the volume is encrypted. Select
Volumes → Volumes, and then select Actions → Customize Columns → Encryption to
customize the view to show the volume’s encryption status, as shown in Figure 12-67.
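You can also check a single volume from the CLI. The volume name is an example, and the encrypt field of the detailed lsvdisk view is assumed to be present at your code level:
lsvdisk vol0
The detailed view is expected to report an encrypt value of yes when every copy of the volume resides in correctly configured encrypted storage.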
When creating volumes, make sure to select encrypted pools to create encrypted volumes, as
shown in Figure 12-69.
For more information about these methods, see Chapter 6, “Volumes” on page 299.
12.9.6 Restrictions
The following restrictions apply to encryption:
Image mode volumes cannot be in encrypted pools.
You cannot add external non-self-encrypting MDisks to encrypted pools unless all control
enclosures in the system support encryption.
12.10 Rekeying an encryption-enabled system
Changing the master access key is a security requirement. Rekeying is the process of
replacing the current master access key with a newly generated one. The rekey operation
works whether or not encrypted objects exist. The rekeying operation requires access to a valid
copy of the original master access key on an encryption key provider that you plan to rekey.
Use the rekey operation according to the schedule that is defined in your organization’s
security policy and whenever you suspect that the key might be compromised.
If you have both USB and key servers that are enabled, rekeying is done separately for each
of the providers.
Important: Before you create a master access key, ensure that all nodes are online and
that the current master access key is accessible.
No method is available to directly change data encryption keys. If you must change the data
encryption key that is used to encrypt data, the only available method is to migrate that data
to a new encrypted object (for example, an encrypted child pool). Because the data
encryption keys are defined per encrypted object, such migration forces a change of the key
that is used to encrypt that data.
To rekey the master access key that is kept on the key server provider, complete the following
steps:
1. Select Settings → Security → Encryption. Ensure that Encryption Keys shows that all
configured IBM Security Key Lifecycle Manager servers are reported as Accessible. Click
Key Servers to expand the section.
2. Click Rekey, as shown in Figure 12-70.
Figure 12-70 Starting the rekey on the IBM Security Key Lifecycle Manager key server
Note: The rekey operation is performed on only the primary key server that is
configured in the system. If more key servers are configured apart from the primary key,
they do not hold the updated encryption key until they obtain it from the primary key
server. To restore encryption key provider redundancy after a rekey operation, replicate
the encryption key from the primary key server to the secondary key servers.
You receive a message confirming that the rekey operation was successful.
After the rekey operation is complete, update all other copies of the encryption key, including
copies that are stored on other media. Take the same precautions to securely store all copies
of the new encryption key as when you enabled encryption for the first time.
To rekey the master access key on USB flash drives, complete the following steps:
1. Select Settings → Security → Encryption. Click USB Flash Drives to expand the
section.
2. Verify that all USB drives that are plugged into the system are detected and show as
Validated, as shown in Figure 12-71. Click Rekey. You need at least three USB flash
drives, with at least one reported as Validated to process a rekey.
3. If the system detects a validated USB flash drive and at least three available USB flash
drives, new encryption keys are automatically copied onto the USB flash drives, as shown in
Figure 12-72 on page 791. Click Commit to finalize the rekey operation.
Figure 12-72 Writing new keys to USB flash drives
4. You receive a message confirming that the rekey operation was successful, as shown in
Figure 12-73. Click Close.
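The USB rekey can also be performed from the CLI. This is a minimal sketch that assumes the chencryption syntax documented for recent code levels and at least three validated USB flash drives:
chencryption -usb newkey -key prepare
chencryption -usb newkey -key commit
The prepare step writes the new keys to the inserted USB flash drives, and the commit step makes the new master access key active.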
2. You receive a message confirming that encryption was disabled. Figure 12-75 shows the
message when a key server is used.
Chapter 13. Reliability, availability, and serviceability, monitoring and logging, and troubleshooting
Fault tolerance and high levels of availability are achieved by using the following methods:
The distributed redundant array of independent disks (DRAID) capabilities of the
underlying disks.
IBM FlashSystem node clustering that uses a Compass architecture.
Auto-restart of hung nodes.
Integrated battery backup units (BBUs) to provide memory protection if a site power failure
occurs.
Host system failover capabilities by using N_Port ID Virtualization (NPIV).
Deploying advanced multi-site configurations, such as IBM HyperSwap and stretched
clusters.
The heart of the IBM FlashSystem system is a pair of node canisters. These two canisters
share the read and write data workload between the attached hosts and the disk arrays. This
section examines the RAS features of the systems, monitoring, and troubleshooting.
13.1.1 Node canisters
The control enclosure contains two node canisters that work as a clustered system running
the IBM Spectrum Virtualize software. As shown in Figure 13-1, the top node
canister is inverted above the bottom one. The control enclosure also contains two power
supply units (PSUs) that operate independently of each other. The PSUs are visible from the
back of the control enclosure.
The connections of a single node canister (bottom) are shown in Figure 13-2.
Host interface cards
Each canister (apart from the IBM FlashSystem 5100) has three host interface card (HIC)
slots. Depending on the system, there might already be a 4-port serial-attached SCSI (SAS)
card that is installed in each node, leaving two HIC
slots that can be populated with a range of cards, as shown in Table 13-1. Nodes in the same
I/O group must have the same HIC configuration.
Table 13-1 Supported card configurations for IBM FlashSystem 7xxx / IBM FlashSystem 9xxx systems
Supported number of cards   Ports   Protocol                                                                        Slot positions   Note
0 - 3                       4       16 Gb Fibre Channel (FC)                                                        1, 2, 3
0 - 3                       2       25 Gb Ethernet (GbE) (internet Wide Area Remote Direct Memory Access (RDMA)
                                    Protocol (iWARP))                                                               1, 2, 3
Note: The systems have onboard compression cards. There are no compression-assist
cards as in previous models.
For the IBM FlashSystem 5100, there are only two card slots, and the following card configurations are supported
(Table 13-2).
Table 13-2 Supported card configurations for IBM FlashSystem 5100 / IBM FlashSystem 5xxx systems
Supported number of cards   Ports   Protocol   Slot positions   Note
0 - 1                       4       16 Gb FC   2
Note: For IBM FlashSystem 5100 and IBM FlashSystem 5000 systems, Peripheral
Component Interconnect Express (PCIe) slot 1 has a blanking plate, so this slot cannot be
used, and slots 2 and 3 become slots 1 and 2. The fabric attach card goes only in slot 2 (far
right when the canister is in the lower position) so that you can better use the direct
connection of slot 2 to the CPU. Slot 1 (middle position) is connected through the PCIe
switch and accepts only the optional (and slower) SAS card.
The FC card is required to add other control enclosures to the system (0 - 2). Using an FC
card, you can connect the IBM FlashSystem 9xxx or 7xxx control enclosure to up to three
more systems (for a maximum of eight nodes). For the IBM FlashSystem 5100 system, you
can connect only one extra control enclosure (for a maximum of four nodes). For FC
configurations, the meaning of the port LEDs is explained in Table 13-3.
USB ports
Two active USB connectors are available in the horizontal position to the right of the node.
They have no numbers, and no indicators are associated with them. These ports can be used
for initial cluster setup, encryption key backup, and node status or log collection.
Each port has two LEDs, and their status values are listed in Table 13-5. However, the T port
is strictly dedicated to technician actions (initial and emergency configuration by local support
personnel).
The orange LED indicates a fault on the SAS link (disconnected, wrong speed, or errors).
Position   Color   Name    State   Meaning
Left       Green   Power   On      The node is started and active. It might not be safe to remove the canister. If the fault LED is off, the node is an active member of a cluster or a candidate. If the fault LED is also on, the node is in a service state or in error, which prevents the software from starting.
Right      Amber   Fault   On      The canister is in a service state or in error, for example, a POST error that is preventing the software from starting.
Battery LEDs
Immediately to the right of the canister LEDs, with a short gap between them, are the Battery
LEDs, which provide the status of the battery (see Table 13-8).
Position   Color   Name     State   Meaning
Left       Green   Status   On      Indicates that the battery is fully charged and has sufficient charge to complete two fire hose dumps.
                            Off     Indicates that the battery is not available for use (for example, it is missing or contains a fault).
13.1.2 Expansion canisters
As Figure 13-3 shows, two 12 gigabits per second (Gbps) SAS ports are side by side on the
canister of every enclosure. They are numbered 1 on the left and 2 on the right. Like the
controller canisters, expansion canisters are also installed in the enclosure side by side in a
vertical position.
The interpretation of the SAS status LED indicators has the same meaning as the LED
indicators of SAS ports in the control enclosure (Table 13-6 on page 798).
Table 13-9 lists the LED status values of the expansion canister.
Figure 13-4 Dense Drawer LEDs
The interpretation of SAS status LED indicators has the same meaning as the LED indicators
of SAS ports that are mentioned in the previous section (see Table 13-9 on page 800).
Table 13-10 shows the LED status values of the expansion canister.
A strand starts with an SAS initiator chip inside an IBM FlashSystem node canister and
progresses through SAS expanders, which connect disk drives. Each canister contains an
expander. Each drive has two ports, each connected to a different expander and strand. This
configuration ensures that both nodes in the input/output (I/O) group have direct access to
each drive, and that no single point of failure (SPOF) exists.
Figure 13-5 shows how the SAS connectivity works inside the node and expansion canisters.
Note: The last expansion enclosure in a chain must not have cables in port 2 of canister 1
or port 2 of canister 2. So, if you add another two enclosures to the setup that is shown in
Figure 13-5, you connect a cable to port 2 of the existing enclosure canisters and port 1 of
the new enclosure canisters.
A chain consists of a set of enclosures that are correctly interconnected (Figure 13-6 on
page 803). Chain 1 of an I/O group is connected to SAS port 1 of both node canisters. Chain
2 is connected to SAS port 3. This configuration means that chain 2 includes the SAS
expander and drives of the control enclosure.
At system initialization, when devices are added to or removed from strands, the system
performs a discovery process to update the state of the drive and enclosure objects.
13.1.6 Power
All enclosures accommodate two PSUs for normal operation. A single PSU can supply the
entire enclosure for redundancy. For this reason, it is highly advised to supply AC power to
each PSU from different power distribution units (PDUs).
There is a power switch on the power supply and indicator LEDs. The switch must be on for
the PSU to be operational. If the power switch is turned off, the PSU stops providing power to
the system.
If a control enclosure loses input power, the battery that is integrated in each node canister continues to supply power to the node. The battery supports a power outage of up to 5 seconds before safety procedures are initiated. A fully charged battery can perform two fire hose dumps. A fire hose dump is a process where a node stores cache and system data to an internal drive in the event of a power failure.
Figure 13-7 shows two PSUs that are present in the control and expansion enclosure. The
controller PSU has one LED that can be green or amber, depending on the status of the PSU.
If the LED is off, that means there is no AC power to the entire enclosure.
Figure 13-8 presents the rear overview of the enclosure canister with a PSU. The enclosure is
powered on by the direct attachment of a power cable.
Power supplies in both control and expansion enclosures are hot-swappable and replaceable
without needing to shut down a node or cluster. If the power is interrupted in one node for less
than 5 seconds, the canister does not perform a fire hose dump and continues operation from
the battery. This feature is useful, for example, during maintenance of UPS systems in the data center or when replugging the power to a different power source or PDU. A fully charged battery can perform two fire hose dumps.
13.2 Shutting down the IBM FlashSystem
You can safely shut down the system by using the GUI or command-line interface (CLI).
Important: Never shut down your system by powering off the PSUs, removing both PSUs,
or removing both power cables from a running system. These actions can lead to
inconsistency or loss of the data that is staged in the cache.
Before shutting down the IBM FlashSystem system, stop all hosts that have volumes that are allocated from the device. This step can be skipped for hosts whose volumes are also provisioned with mirroring (host-based mirroring) from different storage devices. However, skipping the step causes errors that are related to lost storage paths and disks in the host error log.
You can shut down a single node canister, or you can shut down the entire cluster. When you shut down only one node canister, the partner node canister in the I/O group keeps all activities running. When you shut down a canister or the entire cluster, you must power it on locally to start the canister or system.
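Both operations can also be performed from the CLI. The following two commands are a minimal sketch only (the node ID is illustrative; verify the exact syntax for your code level in IBM Documentation before use):

stopsystem -node 2
stopsystem

The first command shuts down only node 2 and leaves the partner node canister serving I/O; the second command shuts down the entire system.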
Shutting down
To shut down the infrastructure, complete the following steps:
1. Shut down your servers and all applications.
2. Shut down your IBM FlashSystem systems:
a. Shut down the IBM FlashSystem by using the GUI or CLI.
b. Power off both switches of the controller enclosure.
c. Power off both switches of all the expansion enclosures.
3. Shut down your storage area network (SAN) switches.
Powering on
To power on your infrastructure, complete the following steps:
1. Power on your SAN switches and wait until the start completes.
2. Power on your storage systems by completing the following steps:
a. Power on both power supplies of all the expansion enclosures.
b. Power on both power supplies of the control enclosure.
c. When the storage systems are up, power on your servers and start your applications.
The easiest way to do this task is to run svcinfo lsnode to display all nodes with their IDs and statuses, as shown in Example 13-1. Make sure that each I/O group has two nodes online (or that, if you remove a node, one node remains in the I/O group to continue serving I/O).
In this example, we remove node 1 from the cluster. Run the svctask rmnode 1 command, as
shown in Example 13-2.
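As a brief sketch of these two steps (the node ID is a placeholder that you take from the lsnode output of your own system), the commands are run as follows:

svcinfo lsnode
svctask rmnode 1

Check the status column of the lsnode output to confirm that both nodes of the I/O group are online before you remove one of them.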
A node can also be removed by using the GUI. Complete the following steps:
1. Select Monitoring → System, and then select the relevant control enclosure that the
node you want to remove is on, which opens the Enclosure Details window. Select the
node and either right-click it and click Remove, or use the menu in the Components
Details to remove it, as shown in Figure 13-9, which opens a confirmation window.
After you remove the node, if you rerun svcinfo lsnode, you see that it disappeared from
the cluster, as shown in Example 13-3. The Service Assistant Tool (SAT) and GUI also
reflect that there is now only one node in the cluster.
Note: By default, the cache is flushed before the node is deleted to prevent data loss if
a failure occurs on the other node in the I/O group. This flush causes a delay between when you remove the node and when it comes back up in candidate status.
2. After a brief period, check the SAT, which shows that the node that you removed is in the
service or candidate status, as shown in Figure 13-10.
3. Select the radio button for the node that is in service and then select Exit Service State
from the Actions menu. Click GO, and a confirmation window opens, as shown in
Figure 13-11.
4. A confirmation window opens and shows that the node exited the service state. Click OK,
or close the window and click Refresh under the list of the nodes.
5. The node should automatically readd itself to the system. If not, look at the numbers in the
Panel column and go back to your CLI session. Run the addnode command and specify the panel ID to add the node back into the cluster, as shown in Example 13-4 and in the sketch after these steps.
6. Run svcinfo lsnode again or check the SAT to ensure that the node was added back, as
shown in Example 13-5.
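A hedged sketch of the addnode command that step 5 refers to follows (the panel ID and I/O group name are placeholders; use the values that are reported for your own system):

svctask addnode -panelname 01-2 -iogrp io_grp0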
Note: If you want to remove an entire control enclosure from the cluster to reduce the size
of the cluster or to decommission it, you can do this task by using the GUI. Go to the
Enclosure Overview window, as shown in Figure 13-9 on page 806, but instead of
selecting a node, select Enclosure Actions and then Remove. A confirmation window
opens. This action runs the rmnode command against both nodes in the control enclosure.
For more information about removing an enclosure, see IBM Documentation and search
for “Removing a control enclosure and its expansion enclosures”.
The backup file is updated by the cluster every day. Saving it after any changes to your
system configuration is important. It contains configuration data of arrays, pools, volumes,
and other items. The backup does not contain any data from the volumes.
To successfully perform the configuration backup, the following prerequisites must be met:
All nodes are online.
No independent operations that change the configuration can be running in parallel.
No object name can begin with an underscore.
Important: Ad hoc backup of configuration can be done only from the CLI by using the
svcconfig backup command. Then, the output of the command can be downloaded by
using SCP or GUI.
13.4.1 Backing up by using the CLI
You can use the CLI to trigger configuration backups manually or by a regular automated
process. The svcconfig backup command generates a new backup file. Triggering a backup
by using the GUI is not possible. However, you might choose to save the automated 1 AM
cron backup if you have not made any configuration changes.
Example 13-6 shows how to use the svcconfig backup command to generate an ad hoc
backup of the current configuration.
The svcconfig backup command generates three files that provide information about the
backup process and cluster configuration. These files are dumped into the /tmp directory on
the configuration node. Run the lsdumps command to list them (see Example 13-7).
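As a minimal sketch, the ad hoc backup and the listing of the generated files are run as follows (the output is omitted here; the backup files appear in the listing with the svc.config.backup prefix):

svcconfig backup
lsdumps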
Note: The svc.config.backup.bak file is a previous copy of the configuration, and not part
of the current backup.
Table 13-11 lists the three files that are created by the backup process.
svc.config.backup.xml This file contains the current configuration data of the cluster.
svc.config.backup.sh This file contains the names of the commands that ran to create the backup of the cluster.
svc.config.backup.log This file contains details about the backup, including any error information that might have been reported.
Save the current backup to a secure and safe location. The files can be downloaded by
running scp (UNIX) or pscp (Microsoft Windows), as shown in Example 13-8. Replace the IP
address with the cluster IP address of your system and specify a local folder on your workstation. In this example, we save the files to C:\FS7200backup.
svc.config.backup.log_782 | 16 kB | 16.8 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.sh_7822 | 5 kB | 5.9 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.xml_782 | 105 kB | 52.8 kB/s | ETA: 00:00:00 | 100%
C:\putty>
Using the -unsafe option enables you to use the wildcard for downloading all the
svc.config.backup files with a single command.
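For example, a single pscp invocation of this type might look like the following sketch (the cluster IP address and the local target folder are placeholders):

pscp -unsafe superuser@<cluster_ip>:/tmp/svc.config.backup.* C:\FS7200backup\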
Tip: If you encounter the Fatal: Received unexpected end-of-file from server error,
when running the pscp command, consider upgrading your version of PuTTY.
Figure 13-12 Download Existing Package
4. Filter the view by clicking in the Filter box, entering backup, and pressing Enter, as shown
in Figure 13-14.
Note: You must select the configuration node in the upper left drop-down menu
because the backup files are stored there.
5. Select all the files to include in the compressed file, and then click Download. Depending
on your browser preferences, you might be prompted about where to save the file,
otherwise it downloads to your defined download directory.
The format for the software update package name ends in four positive integers that are
separated by dots. For example, a software update package might have the following name:
IBM_2076_INSTALL_8.4.0
Important: Before you attempt any code update, read and understand the concurrent
compatibility and code cross-reference matrix for your system. For more information, see
Concurrent Compatibility and Code Cross Reference for IBM Spectrum Virtualize and click
Latest system code.
During the update, each node in the IBM FlashSystem clustered system is automatically shut
down and restarted by the update process. Because each node in an I/O group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to make sure that all I/O
paths between all hosts and SANs work.
If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the IBM FlashSystem node that provides that access is shut
down during the update process. You can check the I/O paths by running SDD datapath query commands.
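If your hosts use SDD, the path state can be verified with commands similar to the following sketch (the exact command set depends on the SDD variant that is installed on the host, such as SDDDSM or SDDPCM):

datapath query adapter
datapath query device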
The software update test utility can be downloaded in advance of the update process.
Alternately, it can be downloaded and run directly during the software update, as guided by
the update wizard.
You can run the utility multiple times on the same system to perform a readiness check in preparation for a software update. Run this utility a final time immediately before you apply the
software update, but make sure that you always use the latest version of the utility.
The installation and use of this utility is nondisruptive, and it does not require a restart of any
IBM FlashSystem nodes. Therefore, there is no interruption to host I/O. The utility is installed
only in the current configuration node.
System administrators must continue to check whether the version of code that they plan to
install is the latest version. For more information, see Concurrent Compatibility and Code
Cross Reference for IBM Spectrum Virtualize.
This utility is intended to supplement rather than duplicate the tests that are performed by the
IBM Spectrum Virtualize update procedure (for example, checking for unfixed errors in the
error log).
A concurrent software update of all components is supported through the standard Ethernet
management interfaces. However, most of the configuration tasks are restricted during the
update process.
13.5.3 Updating your IBM FlashSystem to Version 8.4.0
To update the IBM Spectrum Virtualize Software to Version 8.4.0, complete the following
steps:
1. Log in by using superuser credentials. The management home window opens. Hover the
cursor over Settings and click System (see Figure 13-15).
2. In the System menu, click Update System. The Update System window opens (see
Figure 13-16).
3. From this window, you can select to run the update test utility and continue with the code
update or run the test utility. For this example, we click Test and Update.
See My Notifications (an IBM account is required) to add your system to the
notifications list to be advised of support information and to download the current code
to your workstation for later upload.
4. Because you downloaded both files from Concurrent Compatibility and Code Cross
Reference for IBM Spectrum Virtualize, you can click each folder, browse to the location
where you saved the files, and upload them to the system. If the files are correct, the GUI
detects and updates the target code level, as shown in Figure 13-17.
Figure 13-17 Upload option for both the test utility and update package
5. Select the type of update you want to perform, as shown in Figure 13-18. Select
Automatic update unless IBM Support suggests Service Assistant Manual update. The
manual update might be preferable in cases where misbehaving host multipathing is
known to cause loss of access. Click Next to begin the update package upload process.
When updating from Version 8.1 or later, another window opens, in which you can choose
a fully automated update, one that pauses when half the nodes complete the update, or
one that pauses after each node update, as shown in Figure 13-19. The pause option
requires that you click Resume to continue the update after each pause. Click Finish.
6. After the update packages upload, the update test utility looks for any known issues that
might affect a concurrent update of your system. Click Read more (see Figure 13-20).
Figure 13-20 Issues that are detected by the update test utility
The results window opens and shows you what issues were detected (see Figure 13-21).
In our example, the system identified an error that one or more drives in the system are
running microcode with a known issue and a warning that email notification (Call Home) is
not enabled. Although this issue is not a recommended condition, it does not prevent the
system update from running. Therefore, we click Close and proceed with the update.
However, you might need to contact IBM Support to help resolve more serious issues
before continuing.
7. Click Resume in the Update System window and the update proceeds, as shown in
Figure 13-22 on page 819.
Figure 13-22 Resuming the update
Note: Because the utility detects issues, another warning appears to ensure that you
investigated them and are certain that you want to proceed. When you are ready to
proceed, click Yes.
8. The system begins updating the IBM Spectrum Virtualize Software by taking one node
offline and installing the new code. This process takes approximately 20 minutes. After the
node returns from the update, it is listed as complete, as shown in Figure 13-23.
9. The update pauses for 30 minutes so that multipathing can recover on all attached hosts. After the pause, a node failover occurs and you temporarily lose connection to the GUI. A warning window opens and prompts you to refresh the current session, as shown in Figure 13-24.
Tip: If you are updating from Version 7.8 or later, the 30-minute wait period can be adjusted by starting the update from the CLI with the applysoftware -delay (mins) parameter instead of using the GUI.
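A hedged sketch of such a CLI invocation follows (the package file name and the delay value are illustrative only):

applysoftware -file IBM_2076_INSTALL_8.4.0 -delay 45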
You now see the new Version 8.4.0 GUI and the status of the second node updating, as
shown in Figure 13-25.
After the last node completes, the update commits to the system, as shown in Figure 13-26 on page 821.
Figure 13-26 Updating the system level
The update process completes when all nodes and the system unit are committed. The
final status indicates the new level of code that is installed in the system.
13.5.4 Updating the IBM FlashSystem drive code
After completing the software update as described in 13.5, “Software update” on page 812,
the firmware of the disk drives in the system must also be updated. The upgrade test utility identified that drives with earlier firmware are in the system, as shown in Figure 13-27. However, this fact does not stop the system software update from being performed.
Figure 13-28 Upgrading all internal drives
Tip: The Upgrade all action displays only if you did not select any individual drive in the
list. If you clicked an individual drive in the list, the action gives you individual drive
actions; selecting Upgrade upgrades only that drive’s firmware. You can clear an
individual drive by pressing Ctrl and clicking the drive again.
4. The Upgrade All Drives window opens, as shown in Figure 13-29. Click the small folder at the right side of the Upgrade package drop-down menu to go to where you saved the downloaded file in step 1 on page 822. Click Upgrade to upload the firmware package and begin upgrading any drives that are running earlier firmware. Do not select the option to install the firmware even if the drive is running a newer level; do that only under guidance from IBM Support.
Note: The system upgrades member drives one at a time. Although the firmware
upgrades are concurrent, they do cause a brief reset to the drive. However, the
redundant array of independent disks (RAID) technology enables the system to
continue after this brief interruption. After a drive completes its update, the system waits a calculated time before updating the next drive to ensure that the previous drive is stable after upgrading; this wait time can vary with system load.
5. With the drive upgrades running, you can view the progress by clicking the Tasks icon and
clicking View for the Drive Upgrade running task, as shown in Figure 13-30.
The Drive upgrade running task window opens. The drives that are pending upgrade and
an estimated time of completion are visible, as shown in Figure 13-31.
6. You can view each drive’s firmware level in the Pools → Internal Storage → All Internal window by enabling the drive firmware option after right-clicking in the column header line, as shown in Figure 13-32.
With the Firmware Level column enabled, you can see the current level of each drive, as
shown in Figure 13-33.
13.5.5 Manually updating the system
This example assumes that you have an 8-node cluster, as shown in Table 13-12.
After uploading the update utility test and software update package to the cluster by using
PSCP and running the utility test, complete the following steps:
1. Start by removing node 2, which is the partner node of the configuration node in iogrp 0,
by using the cluster GUI or CLI.
2. Log in to the service GUI to verify that the removed node is in the candidate status.
3. Select the candidate node and click Update Manually from the left pane.
4. Browse and find the code that you downloaded and saved to your PC.
5. Upload the code and click Update.
When the update completes, a message caption indicating software update completion
displays. The node then restarts, and appears again in the service GUI (after
approximately 20 - 25 minutes) in the candidate status.
6. Select the node and verify that it is updated to the new code.
7. Add the node back by using the cluster GUI or the CLI.
8. Select node 3 from iogrp1.
9. Repeat steps 1 - 7 to remove node 3, update it manually, verify the code, and add it back
to the cluster.
10.Proceed to node 5 in iogrp 2.
11.Repeat steps 1 - 7 to remove node 5, update it manually, verify the code, and add it back
to the cluster.
12.Move on to node 7 in iogrp 3.
13.Repeat steps 1 - 7 to remove node 7, update it manually, verify the code, and add it back to the cluster.
Note: The update is 50% complete. You now have one node from each iogrp that is
updated with the new code manually. Always leave the configuration node for last
during a manual software update.
14.Select node 4 from iogrp 1.
15.Repeat steps 1 - 7 to remove node 4, update it manually, verify the code, and add it back to the cluster.
16.Proceed to node 6 in iogrp 2.
17.Repeat steps 1 - 7 to remove node 6, update it manually, verify the code, and add it back to the cluster.
18.Move on to node 8 in iogrp 3.
19.Repeat steps 1 - 7 to remove node 8, update it manually, verify the code, and add it back
to the cluster.
20.Select and remove node 1, which is the configuration node in iogrp 0.
Note: A partner node becomes the configuration node when the original configuration node is removed from the cluster, which keeps the cluster manageable.
The removed configuration node becomes a candidate, and you do not have to apply the
code update manually. Add the node back to the cluster. It automatically updates itself and
then adds itself back to the cluster with the new code.
21.After all the nodes are updated, you must confirm the update to complete the process. The
confirmation restarts each node in order, which takes about 30 minutes to complete.
For a video guide about how to set up and use IBM Call Home Web, see Introducing IBM Call
Home Web.
Another feature is the Critical Fix Notification function, which enables IBM to warn users that
a critical issue exists in the level of code that they are using. The system notifies users when
they log on to the GUI by using a web browser that is connected to the internet.
The decision about what is a critical fix is subjective and requires judgment, which is
exercised by the development team. As a result, clients might still encounter bugs in code that
were not deemed critical. Clients should continue to review information about new code levels to determine whether they must update, even without a critical fix notification.
Important: Inventory notification must be enabled and operational for these features to
work. It is a best practice to enable Call Home and Inventory reporting on your
IBM Spectrum Virtualize clusters.
Figure 13-35 on page 829 shows the Monitoring menu icon for System Hardware, Easy Tier
Reports, viewing events, or seeing real-time performance statistics.
Figure 13-35 Monitoring options
Use the management GUI to manage and service your system. Select Monitoring → Events
to list events that should be addressed and maintenance procedures that walk you through
the process of correcting problems. Information in the Events window can be filtered four
ways:
Recommended Actions
Shows only the alerts that require attention. Alerts are listed in priority order and should be
resolved sequentially by using the available fix procedures. For each problem that is
selected, you can perform the following tasks:
– Run a fix procedure.
– View the properties.
Unfixed Alerts
Displays only the alerts that are not fixed. For each entry that is selected, you can perform
the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.
Unfixed Messages and Alerts
Displays only the alerts and messages that are not fixed. For each entry that is selected,
you can perform the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.
Show All
Displays all event types whether they are fixed or unfixed. For each entry that is selected,
you can perform the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.
Some events require a certain number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are below the coalesce threshold, and are transient.
Important: The management GUI is the primary tool that is used to operate and service
your system. Real-time monitoring should be established by using SNMP traps, email
notifications, or syslog messaging in an automatic manner.
Use the views that are available in the management GUI to verify the status of the system, the
hardware devices, the physical storage, and the available volumes by completing the
following steps:
1. Select Monitoring → Events to see all problems that exist on the system (see
Figure 13-36 on page 831).
Figure 13-36 Messages in the event log
2. Select Recommended Actions from the drop-down list to display the most important
events to be resolved (see Figure 13-37). The Recommended Actions tab shows the
highest priority maintenance procedure that must be run. Use the troubleshooting wizard
so that the system can determine the proper order of maintenance procedures.
In this example, there is a canister that has a fault (service error code 1034). At any time
and from any GUI window, you can directly go to this menu by clicking the Status Alerts
icon at the top of the GUI (see Figure 13-38).
If an error is reported, always use the fix procedures from the management GUI to resolve the
problem for both software configuration problems and hardware failures. The fix procedures
analyze the system to ensure that the required changes do not cause volumes to become
inaccessible to the hosts. The fix procedures automatically perform configuration changes
that are required to return the system to its optimum state.
The fix procedure displays information that is relevant to the problem, and it provides various
options to correct the problem. Where possible, the fix procedure runs the commands that are
required to reconfigure the system.
Note: After Version 7.4, you are no longer required to run the fix procedure for a failed
drive. Hot plugging a replacement drive automatically triggers the validation processes.
The fix procedure also checks that any other existing problems do not result in volume access
being lost. For example, if a PSU in a node enclosure must be replaced, the fix procedure
checks and warns you whether the integrated battery in the other PSU is not sufficiently
charged to protect the system.
Hint: Always use Run Fix, which resolves the most serious issues first. Often, other alerts
are corrected automatically because they were the result of a more serious issue.
Resolving alerts in a timely manner
To minimize any impact to your host systems, always perform the recommended actions as
quickly as possible after a problem is reported. Your system is resilient to most single
hardware failures. However, if it operates for any period with a hardware failure, the possibility
increases that a second hardware failure can result in some volume data that is unavailable. If
several unfixed alerts exist, fixing any one alert might become more difficult because of the
effects of the others.
Select or remove columns as needed. You can also extend or shrink the width of columns to
fit your window resolution and size. This method is relevant for most windows in the
management GUI of an IBM FlashSystem system.
Every field of the event log is available as a column in the event log grid. Several fields are
useful when you work with IBM Support. The preferred method in this case is to use the Show
All filter, with events sorted by timestamp. Useful fields include the sequence number, event count, and the fixed state. Clicking Restore Default View sets the grid back to the defaults.
You might want to see more details about each critical event. Some details are not shown in
the main grid. To access the properties and sense data of a specific event, double-click the
specific event anywhere in its row.
The properties window opens (see Figure 13-40) with all the relevant sense data. This data
includes the first and last time of an event occurrence, number of times the event occurred,
worldwide port name (WWPN), worldwide node name (WWNN), enabled or disabled
automatic fix, and other information.
13.8 Monitoring
An important step is to correct any issues that are reported by your system as soon as
possible. Configure your system to send automatic notifications to a standard Call Home
server or to the new Cloud Call Home server when a new event is reported. To avoid having
to monitor the management GUI for new events, select the type of event for which you want to
be notified. For example, you can restrict notifications to only events that require action.
The following event notification mechanisms are available:
Call Home
An event notification can be sent to one or more email addresses. This mechanism notifies
individuals of problems. Individuals can receive notifications wherever they have email
access, including mobile devices.
Cloud Call Home
Cloud services for Call Home is the optimal transmission method for error data because it
ensures that notifications are delivered directly to the IBM Support Center.
SNMP
An SNMP traps report can be sent to a data center management system, such as
IBM Systems Director, which consolidates SNMP reports from multiple systems. With this
mechanism, you can monitor your data center from a single workstation.
Syslog
A syslog report can be sent to a data center management system that consolidates syslog
reports from multiple systems. With this option, you can monitor your data center from a
single location.
If your system is within warranty or if you have a hardware maintenance agreement, configure
your IBM FlashSystem system to send email events directly to IBM if an issue that requires
hardware replacement is detected. This mechanism is known as Call Home. When this event
is received, IBM automatically opens a problem report and, if appropriate, contacts you to
help resolve the reported problem.
Important: If you set up Call Home to IBM, ensure that the contact details that you
configure are correct and kept updated. Personnel changes can cause delays in IBM
making contact.
Cloud Call Home is designed to work with new service teams and improves connectivity and
ultimately should improve customer support.
Note: If the customer does not want to open the firewall, Cloud Call Home does not work
and the customer can disable Cloud Call Home. Call Home is used instead.
The following procedure summarizes how to configure email notifications and emphasizes
what is specific to Call Home:
1. Prepare your contact information that you want to use for the email notification and verify
the accuracy of the data. From the GUI menu, select Settings → Support → Call Home.
2. Select Call Home, and then click Enable Notifications (see Figure 13-41). For more
information, see IBM Documentation.
For the correct functioning of email notifications, ask your network administrator whether Simple Mail Transfer Protocol (SMTP) traffic is enabled on the management network and is not, for example, blocked by firewalls. Test the accessibility of the SMTP server from any server in the same network segment by using the telnet command (port 25 for a non-secured connection, or port 465 for Secure Sockets Layer (SSL)-encrypted communication).
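For example, from a host in the same network segment (the SMTP server name is a placeholder):

telnet smtp.example.com 25

If the connection opens and the server answers with a 220 banner, SMTP traffic is not being blocked on that port.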
3. After clicking Next on the Welcome window, enter the information about the location of the
system (see Figure 13-43) and contact information of the system administrator (see
Figure 13-44 on page 838) to be contacted by IBM Support. Always keep this information
current.
Figure 13-44 shows the contact information of the owner.
In the next window, you can enable Inventory Reporting and Configuration Reporting, as
shown in Figure 13-45.
4. Configure the SMTP server according to the instructions that are shown in Figure 13-46.
When the correct SMTP server is provided, you can test the connectivity by clicking Ping
to verify that it can be contacted. Then, click Apply and Next.
5. A summary window opens. Verify all the information, and then click Finish. You are
returned to the Email Settings window, where you can verify the email addresses of
IBM Support (callhome1@de.ibm.com) and optionally add local users who also need to
receive notifications (see Figure 13-47).
The Inventory Reporting function is enabled by default for Call Home. Rather than
reporting a problem, an email is sent to IBM that describes your system hardware and
critical configuration information. Object names and other information, such as IP
addresses, are not included. By default, the inventory email is sent weekly, which allows
an IBM Cloud service to analyze the inventory email and inform you whether the hardware
or software that you are using requires an update because of any known issue, as
described in 13.6, “Health checker feature” on page 827.
Figure 13-47 on page 841 shows the configured email notification and Call Home settings.
6. After completing the configuration wizard, test the email function. To do so, enter Edit
mode, as shown in Figure 13-48. In the same window, you can define more email
recipients or alter any contact and location details as needed.
7. In Edit mode, you can change any of the previously configured settings. After you are
finished editing these parameters, adding more recipients, or testing the connection, save
the configuration so that the changes take effect (see Figure 13-49).
Note: The Test button appears for new email users after first saving and then editing again.
Disabling and enabling notifications
At any time, you can temporarily or permanently disable email notifications, as shown in
Figure 13-50. This is best practice when performing activities in your environment that might
generate errors on IBM Spectrum Virtualize, such as SAN reconfiguration or replacement
activities. After the planned activities, remember to re-enable the email notification function.
The same results can be achieved by running the svctask stopmail and svctask startmail
commands.
Note: Clients who purchased Enterprise Class Support (ECS) are entitled to IBM Support
by using Remote Support Assistance to quickly connect and diagnose problems. However,
IBM Support might choose to use this feature on non-ECS systems at their discretion.
Therefore, configure and test the connection on all systems.
If you are enabling Remote Support Assistance, ensure that the following prerequisites are
met:
Cloud Call Home or a valid email server is configured (Cloud Call Home is used as the primary method to transfer the token when you initiate a session, with email as a backup).
A valid service IP address is configured on each node in the system.
If your IBM FlashSystem system is behind a firewall or if you want to route traffic from
multiple storage systems to the same place, you must configure a Remote Support Proxy
server. Before you configure Remote Support Assistance, the proxy server must be
installed and configured separately. During the setup for Support Assistance, specify the
IP address and the port number for the proxy server on the Remote Support Centers
window.
If you do not have firewall restrictions and the nodes are directly connected to the internet,
request your network administrator to allow connections to 129.33.206.139 and
204.146.30.139 on Port 22.
Uploading support packages and downloading software require direct connections to the internet. A DNS server must be defined on your system for both of these functions to work.
To ensure that support packages are uploaded correctly, configure the firewall to allow
connections to the following IP addresses on port 443: 129.42.56.189, 129.42.54.189,
and 129.42.60.189.
To ensure that software is downloaded correctly, configure the firewall to allow connections
to the following IP addresses on port 22: 170.225.15.105, 170.225.15.104,
170.225.15.107, 129.35.224.105, 129.35.224.104, and 129.35.224.107.
Figure 13-51 shows how you can find Setup Remote Support Assistance if you closed the
window.
Choosing to set up Support Assistance opens a wizard to guide you through the following
configuration process:
1. Figure 13-54 on page 847 shows the first wizard window. To keep remote assistance
disabled, select I want support personnel to work on-site only. To enable remote
assistance, select I want support personnel to access my system both on-site and
remotely. Click Next.
Note: Selecting I want support personnel to work on-site only does not entitle you
to expect IBM Support to attend onsite for all issues. Most maintenance contracts are
for customer-replaceable unit (CRU) support, where IBM diagnoses your problem and
sends a replacement component for you to install, if required.
If you prefer to have IBM perform replacement tasks for you, contact your local sales
person to investigate an upgrade to your current maintenance contract.
2. Figure 13-53 lists the IBM Support Center IP addresses and Secure Shell (SSH) port that
must be open in your firewall. You can also define a Remote Support Assistance Proxy if
you have multiple systems in the data center, which allows for a firewall configuration
being required only for the proxy server rather than every storage system. In this example,
we do not have a proxy server and leave the field blank. Click Next.
3. The next window prompts you about whether you want to open a tunnel to IBM
permanently, which allows IBM to connect to your system At Any Time, or On
Permission Only, as shown in Figure 13-54 on page 847. On Permission Only requires
a storage administrator to log on to the GUI and enable the tunnel when required. Click
Finish.
Figure 13-54 Support wizard access choice
4. After completing the remote support setup, you can view the status of any remote
connection, start a session, test the connection to IBM, and reconfigure the setup. As
shown in Figure 13-55, we successfully tested the connection. Click Start New Session
to open a tunnel through which IBM Support can connect.
5. A window prompts you for how long you want the tunnel to remain open if no activity
occurs by setting a timeout value.
You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (see Figure 13-56):
IP Address
The address for the SNMP server.
Server Port
The remote port (RPORT) number for the SNMP server. The RPORT number must be a
value of 1 - 65535, where the default is port 162 for SNMP.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong. Typically, the default of public is used.
Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that require prompt action.
– Select Warning if you want the user to receive messages about problems and unexpected conditions, such as a space-efficient volume running out of space. Investigate the cause immediately to determine whether any corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
To add an SNMP server, select Actions → Add and complete the Add SNMP Server window,
as shown in Figure 13-57. To remove an SNMP server, click the line with the server that you
want to remove, and select Actions → Remove.
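The same definition can also be created from the CLI. The following command is a sketch only (the IP address and community name are placeholders, and the available parameters can vary by code level):

svctask mksnmpserver -ip 192.168.1.120 -community public -error on -warning on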
Note: The following properties are optional:
Engine ID
Indicates the unique identifier (UID) in hexadecimal that identifies the SNMP server.
Security Name
Indicates which security controls are configured for the SNMP server. Supported
security controls are none, authentication, or authentication and privacy.
Authentication Protocol
Indicates the authentication protocol that is used to verify the system to the SNMP
server.
Privacy Protocol
Indicates the encryption protocol that is used to encrypt data between the system and
the SNMP server.
Privacy Passphrase
Indicates the user-defined passphrase that is used to verify encryption between the
system and SNMP server.
You can configure a syslog server to receive log messages from various systems and store
them in a central repository by selecting Settings → Notifications → Syslog, as shown in
Figure 13-58.
Enter the following information, as shown in Figure 13-59.
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Protocol
The protocol to be used (UDP or TCP).
Server Port
The port to communicate with the syslog server.
Notifications
Choose one of the following items for event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
Important: Go to Recommended Actions to run the fix procedures on these
notifications.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
Messages
Choose one of the following items for messages:
– CLI
Select this option to include any CLI or management GUI operations on the specified
syslog servers.
– Login
Select this option to send successful and failed authentication attempts to the specified
syslog servers.
The audit log tracks action commands that are issued through an SSH session, management
GUI, or Remote Support Assistance. It provides the following entries:
Identity of the user who ran the action command.
Name of the actionable command.
Timestamp of when the actionable command ran on the configuration node.
Parameters that ran with the actionable command.
Several specific service commands are not included in the audit log:
dumpconfig
cpdumps
cleardumps
finderr
dumperrlog
dumpintervallog
svcservicetask dumperrlog
svcservicetask finderr
Figure 13-60 on page 853 shows the access to the audit log. Click Audit Log in the left menu
to see which configuration CLI commands were run on the system.
Figure 13-60 Audit Log from the Access menu
Figure 13-61 shows an example of the audit log after a volume is created and mapped to a
host.
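The audit log can also be read from the CLI. As a sketch, the following command lists the five most recent action commands (see the command reference for the full parameter list):

catauditlog -first 5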
Changing the view of the Audit Log grid is possible by right-clicking column headings or
clicking the sign in the upper right (see Figure 13-62). The grid layout and sorting is under the
user’s control, so you can view everything in the audit log, sort different columns, and reset
the default grid preferences.
13.10 Collecting support information by using the GUI, CLI, and
USB
If you encounter a problem and contact the IBM Support Center, you will be asked to provide
a support package. You can collect and upload this package from the Settings → Support menu.
2. Click Upload Support Package and then Create New Package and Upload.
Assuming that the problem that was encountered was an unexpected node restart that
logged a 2030 error, collect the default logs and the most recent statesave from each node
to capture the most relevant data for support.
Note: When a node unexpectedly restarts, it first dumps its current statesave
information before it restarts to recover from an error condition. This statesave is critical
for IBM Support to analyze what occurred. Collecting a snap type 4 creates statesaves
at the time of the collection, which is not useful for understanding the restart event.
3. The Upload Support Package window provides four options for data collection. If you are
contacted by IBM Support because your system called home or you manually opened a
call with IBM Support, you receive a Problem Management Record (PMR) number. Enter
that PMR number into the PMR field and select the snap type (often referred to as an
option 1, 2, 3, 4 snap) as requested by IBM Support (see Figure 13-64). In our example,
we entered our PMR number, selected snap type 3 (option 3) because this option
automatically collects the statesaves that were created at the time that the node restarted,
and clicked Upload.
Tip: To open a service request online, see the Service requests and PMRs.
4. The procedure to generate the snap on the system, including the most recent statesave
from each node canister, starts. This process might take a few minutes (see Figure 13-65).
The time that it takes to generate the snap and the size of the generated file depend mainly on two things: the snap option that you selected and the size of your system. An
option 1 snap takes much less time than an option 4 snap because nothing new must be
gathered for an option 1 snap, but an option 4 snap requires the system to collect new
statesaves from each node. In an 8-node cluster, this task can take quite some time, so you
should always collect the snap option that IBM Support recommends.
Table 13-13 shows the approximate file sizes for each SNAP option.
13.10.2 Collecting logs by using the CLI
The CLI can be used to collect and upload a support package as requested by IBM Support
by performing the following steps:
1. Log in to the CLI and run the svc_snap command that matches the type of snap that is
requested by IBM Support:
– Standard logs (type 1):
svc_snap upload pmr=ppppp,bbb,ccc gui1
– Standard logs plus one existing statesave (type 2):
svc_snap upload pmr=ppppp,bbb,ccc gui2
– Standard logs plus most recent statesave from each node (type 3):
svc_snap upload pmr=ppppp,bbb,ccc gui3
– Standard logs plus new statesaves:
svc_livedump -nodes all -yes
svc_snap upload pmr=ppppp,bbb,ccc gui3
In this example, we collect the type 3 (option 3) and have it automatically uploaded to the
PMR number that is provided by IBM Support, as shown in Example 13-9.
If you do not want to automatically upload the snap to IBM, do not specify the upload
pmr=ppppp,bbb,ccc part of the commands. When the snap creation completes, it creates a
file name that uses the following format:
/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz
It takes a few minutes for the snap file to complete (longer if statesaves are included).
The generated file can then be retrieved from the GUI by selecting Settings →
Support → Manual Upload Instructions → Download Support Package, and then
clicking Download Existing Package, as shown in Figure 13-66 on page 859.
Figure 13-66 Download Existing Package
2. Click in the Filter box and enter snap to see a list of snap files, as shown in Figure 13-67.
Find the exact name of the snap that was generated by running the svc_snap command
that was run earlier. Select that file, and click Download.
3. Save the file to a folder of your choice on your workstation.
Note: This procedure collects a single snap from the node canister, not a cluster snap. It is
useful for determining the state of the node canister.
When a USB flash drive is plugged into a node canister, the canister code searches for a text
file that is named satask.txt in the root directory. If the code finds the file, it attempts to run a
command that is specified in the file. When the command completes, a file that is called
satask_result.html is written to the root directory of the USB flash drive. If this file does not
exist, it is created. If it exists, the data is inserted at the start of the file. The file contains the
details and results of the command that was run and the status and the configuration
information from the node canister. The status and configuration information matches the
detail that is shown on the service assistant home page windows.
Note: If there was a problem with the procedure, the HTML file is still generated, and the reasons why the procedure did not work are listed in it.
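As an illustrative sketch (verify the exact satask syntax for your code level in IBM Documentation before use), a satask.txt file that collects a snap from the node canister might contain a single line:

satask snap

When the USB flash drive is inserted, the command runs, and the satask_result.html file reports whether it completed successfully.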
To upload the information, complete the following steps:
1. Using a web browser, go to Enhanced Customer Data Repository (ECuRep) (see
Figure 13-68).
4. Select one or more files, click Upload to continue, and follow the directions.
The SAT is available even when the management GUI is not accessible. The following
information and tasks can be accomplished with the SAT:
Status information about the connections and the node canister
Basic configuration information, such as configuring IP addresses
Service tasks, such as restarting the Common Information Model Object Manager
(CIMOM) and updating the WWNN
Details about node error codes
Details about the hardware, such as IP addresses and Media Access Control (MAC)
addresses
The SAT GUI is available by using a service assistant IP address that is configured on each
IBM FlashSystem node. It can also be accessed through the cluster IP addresses by
appending /service to the cluster management IP.
It is also possible to access the SAT GUI of the config node if you enter the Uniform Resource
Locator (URL) of the service IP address of the config node into any web browser and click
Service Assistant Tool (see Figure 13-70 on page 863).
Figure 13-70 Service Assistant Tool login
If the clustered system is down, the only method of communicating with the node canisters is
through the SAT IP address directly. Each node can have a single service IP address on
Ethernet port 1, which should be configured on all nodes of the cluster.
To open the SAT GUI, enter one of the following URLs into a web browser:
Enter http(s)://<cluster IP address of your cluster>/service.
Enter http(s)://<service IP address of a node>/service.
Enter http(s)://<service IP address of config node> and click Service Assistant Tool.
To access the SAT, complete the following steps:
1. If you are accessing SAT by using cluster IP address/service, the configuration node
canister SAT GUI login window opens. Enter the Superuser Password, as shown in
Figure 13-71.
2. After you are logged in, you see the Service Assistant Home window, as shown in
Figure 13-72. The SAT can view the status and run service actions on other nodes in
addition to the node to which the user is logged in.
3. The current node canister is displayed in the upper left corner of the GUI. As shown in
Figure 13-72 on page 864, this is node2. Select the node that you want in the Change
Node section of the window. You see the details in the upper left change to reflect the
selected node canister.
Note: The SAT GUI provides access to service procedures and shows the status of the
node canisters. These procedures should be carried out only if you are directed to do so by
IBM Support.
For more information about how to use the SAT, see IBM Documentation.
The monitoring capabilities that IBM Storage Insights provides are useful for things like
capacity planning, workload optimization, and managing support tickets for ongoing issues.
After you add your systems to IBM Storage Insights, you see the Dashboard, where you can
select a system that you want to see the overview for, as shown in Figure 13-73.
Component health is shown at the upper center of the window. If there is a problem with one
of the Hardware, Logical, or Connectivity components, errors are shown here, as shown in
Figure 13-74.
The error entries can be expanded to obtain more details by selecting the three dots at the
upper right corner of the component that has an error and then selecting View Details. The
relevant part of the more detailed System View opens, and what you see depends on which
component has the error, as shown in Figure 13-75.
From here, it is clear which components have the problem and exactly what is wrong with
them, so you can log a support ticket with IBM if necessary.
Figure 13-76 Capacity area of the IBM Storage Insights system overview
In the Capacity view, the user can click View Pools, View Compressed Volumes, View
Deduplicated Volumes, and View Thin-Provisioned Volumes. Clicking any of these items
takes the user to the detailed system view for the selected option. From there, you can click
Capacity to get a historical view of how the system capacity changed over time, as shown in
Figure 13-77. At any time, the user can select the timescale, resources, and metrics to be
displayed on the graph by clicking any options around the graph.
If you scroll down below the graph, you find a list view of the selected option. In this example,
we selected View Pools, so the configured pools are shown with the relevant key capacity
metrics, as shown in Figure 13-78. Double-clicking a pool in the table displays the properties
for it.
13.12.2 Performance monitoring
From the system overview, you can scroll down and see the three key performance statistics
for your system, as shown in Figure 13-79. For the Performance overview, these statistics are
aggregated across the whole system, and you cannot drill down by Pool, Volume, or other
items.
To view more detailed performance statistics, enter the system view again, as described in
13.12.1, “Capacity monitoring” on page 866.
For this performance example, we select View Pools, and then select Performance from the
System View pane, as shown in Figure 13-80.
It is possible to customize what can be seen on the graph by selecting the metrics and
resources. In Figure 13-81, the Overall Response Time for one pool over a 12-hour period is
displayed.
Scrolling down the graph, the Performance List view is visible, as shown in Figure 13-82.
Metrics can be selected by clicking the filter button at the right of the column headers. If you
select a row, the graph is filtered for that selection only. Multiple rows can be selected by
holding down the Shift or Ctrl keys.
13.12.3 Logging support tickets by using IBM Storage Insights
With IBM Storage Insights, you can log support tickets, which greatly complements the
enhanced monitoring capabilities that the software provides. When an issue is detected and
you want to engage IBM Support, complete the following steps:
1. Select the system to open the System Overview and click Get Support, as shown in
Figure 13-83.
A window opens where you can create a ticket or update an existing ticket, as shown in
Figure 13-84.
2. Select Create Ticket, and the ticket creation wizard opens. Details of the system are
automatically populated, including the customer number, as shown in Figure 13-85. Select
Next.
3. You can add relevant details about your problem to the ticket, as shown in Figure 13-86.
It is also possible to attach images or files to the ticket, such as PuTTY logs and screen
captures. Once done, select Next.
4. You can select a severity for the ticket. Examples of which severity to select are shown in
Figure 13-87. In our example, storage ports are offline with no impact, so we select
severity 2 because redundancy is lost.
5. Choose whether this is a hardware or a software problem. Select the relevant option (for
this example, the offline ports are likely caused by a physical layer hardware problem).
Once done, click Next.
6. Review the details of the ticket that will be logged with IBM, as shown in Figure 13-88.
Contact details must be entered so that IBM Support can respond to the correct person.
You also must choose which type of logs should be attached to the ticket. For more
information about the types of snap, see Table 13-13 on page 857.
7. Once done, select Create Ticket. A confirmation window opens, as shown in
Figure 13-89, and IBM Storage Insights automatically uploads the snap to the ticket when
it is collected.
13.12.4 Managing existing support tickets by using IBM Storage Insights and
uploading logs
With IBM Storage Insights, you can track existing support tickets and upload logs to them. To
do so, complete the following steps:
1. From the System Overview window, select Tickets, as shown in Figure 13-90.
In this window, you see the history of support tickets that were logged through
IBM Storage Insights for the system. Tickets that are not currently open are listed under
Closed Tickets, and currently open tickets are listed under Open Tickets.
2. To quickly add logs to a ticket without having to browse to the system GUI or use
IBM ECuRep, click Add Log Package to Ticket. A window opens that guides you through
the process, as shown in Figure 13-91. You can select which type of log package you want
and add a note to the ticket with the logs.
3. After you click Update Ticket, a confirmation window opens, as shown in Figure 13-92. You
can exit the wizard. IBM Storage Insights gathers the logs in the background and uploads
them to the ticket.
Appendix A. Performance data and statistics gathering
To ensure that the performance levels of your system are maintained, monitor performance
periodically. Periodic monitoring provides visibility into problems that exist or are developing
so that they can be addressed in a timely manner.
Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing
infrastructure, you must consider many factors in terms of their potential effect on
performance. These factors include, but are not limited to, dissimilar workloads that are
competing for the same resources, overloaded resources, insufficient available resources,
poorly performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network
(SAN) and IBM Spectrum Virtualize layout:
Host-to-system inter-switch link (ISL) oversubscription.
This area is the most significant input/output (I/O) load across ISLs. A best practice is to
maintain a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to
lead to I/O bottlenecks. This best practice also assumes a core-edge design, where the
hosts are on the edges and the IBM FlashSystem is the core.
Storage-to-system ISL oversubscription.
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this best practice
assumes a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription.
This area does not apply to IBM FlashSystem clusters that are composed of a single control
enclosure. This area is the least significant load of the three possible oversubscription
bottlenecks. In standard setups, this load can be ignored. Although this area is not entirely
negligible, it does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the stretched cluster capability.
When the system is running in this manner, the number of ISL links becomes more
important. As with the storage-to-system ISL oversubscription, this load also has a
maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you
determine the number of ISLs to implement. If you need assistance, contact your IBM
representative and request technical assistance.
ISL trunking or port channeling.
For the best performance and availability, use ISL trunking or port channeling.
Independent ISL links can easily become overloaded and turn into performance
bottlenecks. Bonded or trunked ISLs automatically share load and provide better
redundancy in a failure.
Number of paths per host multipath device.
The maximum supported number of paths per multipath device that is visible on the host is
eight (with HyperSwap, you can have up to 16 active paths). Although most vendor
multipathing software can support more paths, the IBM Storage System expects a
maximum of eight paths. In general, using more than eight paths provides no performance
benefit and can degrade performance. Although IBM Spectrum Virtualize can work with
more than eight paths, that configuration is unsupported.
Do not intermix dissimilar array types or sizes.
Although IBM Spectrum Virtualize supports an intermix of different types of storage within
storage pools, it is a best practice to always use the same array model, redundant array of
independent disks (RAID) mode, RAID size (RAID 5 6+P+S does not mix well with RAID 6
14+2), and drive speed.
Rules and guidelines are no substitute for monitoring performance. Monitoring performance
can provide validation that design expectations are met, and identify opportunities for
improvement.
Note: For IBM FlashSystem 5030 and IBM FlashSystem 5100, only four nodes (two
control enclosures) are supported.
Performance scales nearly linearly as nodes are added to the cluster until performance
eventually becomes limited by the attached components. Although virtualization provides
significant flexibility in terms of the components that are used, it does not diminish the
necessity of designing the system around the components so that it can deliver the level of
performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the goal is that you always want to maximize the bandwidth that is available
to the IBM Storage System ports. An IBM FlashSystem system is one of the few devices that
can drive ports to their limits on average, so it is imperative that you put significant thought
into planning the SAN layout.
Essentially, performance improvements are gained by selecting the most appropriate internal
disk drive types, spreading the workload across a greater number of back-end resources
when using external storage, and adding more caching. These capabilities are provided by
the IBM Storage System cluster. However, the performance of individual resources eventually
becomes the limiting factor.
The statistics files for volumes, managed disks (MDisks), nodes, and drives are saved at the
end of the sampling interval. A maximum of 16 files (each) are stored before they are overlaid
in a rotating log fashion. This design provides statistics for the most recent 240-minute period
if the default 15-minute sampling interval is used. IBM Spectrum Virtualize supports
user-defined sampling intervals of 1 - 60 minutes. IBM Storage Insights requires and
recommends an interval of 5 minutes.
For each type of object (volumes, MDisks, nodes, and drives), a separate file with statistic
data is created at the end of each sampling period and stored in /dumps/iostats.
Run the startstats command to start the collection of statistics, as shown in Example A-1.
This command starts statistics collection and gathers data at 5-minute intervals.
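The exact output depends on your environment, but as a minimal sketch (assuming a
superuser CLI session on a system named IBM FlashSystem 7200, as used elsewhere in this
appendix), the command takes the following form:

IBM FlashSystem 7200:superuser>startstats -interval 5

The -interval parameter specifies the sampling interval in minutes (1 - 60).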
To verify the statistics collection interval, display the system properties again, as shown in
Example A-2.
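As a hedged illustration of what to look for (only the relevant lines of the lssystem output are
shown, and the values reflect our example), the statistics_status and statistics_frequency
properties confirm that collection is active and show the interval in minutes:

IBM FlashSystem 7200:superuser>lssystem
...
statistics_status on
statistics_frequency 5
...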
Starting with Version 8.1, it is not possible to stop statistics collection by using the stopstats
command.
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within IBM Spectrum Virtualize and IBM FlashSystem, they shorten
the amount of time that the historical data is available on IBM Spectrum Virtualize. For
example, rather than a 240-minute period of data with the default 15-minute interval, if you
adjust to 2-minute intervals, you have a 32-minute period instead.
Statistics are collected per node. The sampling of the internal performance counters is
coordinated across the cluster so that when a sample is taken, all nodes sample their internal
counters concurrently. Collect all files from all nodes for a complete analysis. Tools such as
IBM Spectrum Control and IBM Storage Insights Pro perform this intensive data collection
for you.
Statistics file naming
The statistics files that are generated are written to the /dumps/iostats/ directory. The file
name has the following formats:
Nm_stats_<node_id>_<date>_<time> for MDisks statistics
Nv_stats_<node_id>_<date>_<time> for Volumes statistics
Nn_stats_<node_id>_<date>_<time> for node statistics
Nd_stats_<node_id>_<date>_<time> for drives statistics
The node_id is the name of the node on which the statistics were collected. The date is in the
form <yymmdd>, and the time is in the form <hhmmss>. The following example shows an MDisk
statistics file name:
Nm_stats_113986_161019_151832
Example A-3 shows typical MDisk, volume, node, and disk drive statistics file names.
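As an illustrative sketch only (the node ID, dates, and times are placeholders that follow the
naming convention that is described above), such file names can be listed with the lsdumps
command:

IBM FlashSystem 7200:superuser>lsdumps -prefix /dumps/iostats
id filename
0 Nm_stats_113986_161019_151832
1 Nv_stats_113986_161019_151832
2 Nn_stats_113986_161019_151832
3 Nd_stats_113986_161019_151832
...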
Note: For more information about the statistics files name convention, see IBM
Documentation.
Tip: The performance statistics files can be copied from the IBM FlashSystem nodes to a
local drive on your workstation by using pscp.exe (included with PuTTY) from an MS-DOS
command prompt, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load "IBM FlashSystem 7200" superuser@9.71.42.30:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
You can obtain PuTTY from Download PuTTY: latest release (0.74).
Each node collects various performance statistics (mostly at 5-second intervals), and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node.
As with system statistics, node statistics help you to evaluate whether the node is operating
within normal performance metrics.
The lsnodecanisterstats command provides performance statistics for the nodes that are
part of a clustered system, as shown in Example A-4. The output is truncated and shows only
part of the available statistics. You can also specify a node name in the command to limit the
output for a specific node.
1 node1 drive_io 45 492 201029211803
1 node1 drive_ms 13 31 201029211643
1 node1 vdisk_r_mb 0 14 201029211603
1 node1 vdisk_r_io 0 105 201029211603
...
3 node2 drive_w_ms 6 10 201029211713
3 node2 iplink_mb 0 0 201029211843
3 node2 iplink_io 0 0 201029211843
3 node2 iplink_comp_mb 0 0 201029211843
3 node2 cloud_up_mb 0 0 201029211843
3 node2 cloud_up_ms 0 0 201029211843
3 node2 cloud_down_mb 0 0 201029211843
3 node2 cloud_down_ms 0 0 201029211843
3 node2 iser_mb 0 0 201029211843
3 node2 iser_io 0 0 201029211843
IBM FlashSystem 7200:superuser>
Example A-4 on page 884 shows statistics for the two node members of system ITSO. For
each node, the following columns are displayed:
stat_name: The name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
The lsnodecanisterstats command can also be used with a node canister name or ID as an
argument. For example, you can enter the command lsnodecanisterstats node1 to display
the statistics of node name node1 only.
The lssystemstats command lists the same set of statistics that is listed with the
lsnodecanisterstats command, but represents all nodes in the cluster. The values for these
statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
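As a brief sketch (the counter values are placeholders only; the columns match those that are
described above for lsnodecanisterstats, without the node columns), running lssystemstats
without parameters returns the current cluster-wide value and the 5-minute peak for each
counter:

IBM FlashSystem 7200:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc 2 4 201029212500
fc_mb 0 12 201029212415
fc_io 35 310 201029212430
...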
Table A-1 gives the descriptions of the different counters that are presented by the
lssystemstats and lsnodecanisterstats commands.
Table A-1 List of counters for the lssystemstats and lsnodecanisterstats commands
Value Description
compression_cpu_pc Displays the percentage of allocated CPU capacity that is used for
compression.
cpu_pc Displays the percentage of allocated CPU capacity that is used for the
system.
fc_mb Displays the total number of megabytes transferred per second for Fibre
Channel (FC) traffic on the system. This value includes host I/O and any
bandwidth that is used for communication within the system.
fc_io Displays the total I/O operations that are transferred per second for FC
traffic on the system. This value includes host I/O and any bandwidth that
is used for communication within the system.
sas_mb Displays the total number of megabytes transferred per second for
serial-attached Small Computer System Interface (SCSI) (SAS) traffic on
the system. This value includes host I/O and bandwidth that is used for
background RAID activity.
sas_io Displays the total I/O operations that are transferred per second for SAS
traffic on the system. This value includes host I/O and bandwidth that is
used for background RAID activity.
iscsi_mb Displays the total number of megabytes transferred per second for internet
Small Computer Systems Interface (iSCSI) traffic on the system.
iscsi_io Displays the total I/O operations that are transferred per second for iSCSI
traffic on the system.
write_cache_pc Displays the percentage of the write cache usage for the node.
total_cache_pc Displays the total percentage for both the write and read cache usage for
the node.
vdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to volumes during the sample period.
vdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to volumes during the sample period.
vdisk_ms Displays the average amount of time in milliseconds (ms) that the system
takes to respond to read and write requests to volumes over the sample
period.
mdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to MDisks during the sample period.
mdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to MDisks during the sample period.
mdisk_ms Displays the average amount of time in milliseconds that the system takes
to respond to read and write requests to MDisks over the sample period.
drive_mb Displays the average number of megabytes transferred per second for
read and write operations to drives during the sample period.
drive_io Displays the average number of I/O operations that are transferred per
second for read and write operations to drives during the sample period.
drive_ms Displays the average amount of time in milliseconds that the system takes
to respond to read and write requests to drives over the sample period.
vdisk_w_mb Displays the average number of megabytes transferred per second for
write operations to volumes during the sample period.
vdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to volumes during the sample period.
vdisk_w_ms Displays the average amount of time in milliseconds that the system takes
to respond to write requests to volumes over the sample period.
mdisk_w_mb Displays the average number of megabytes transferred per second for
write operations to MDisks during the sample period.
mdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to MDisks during the sample period.
mdisk_w_ms Displays the average amount of time in milliseconds that the system takes
to respond to write requests to MDisks over the sample period.
drive_w_mb Displays the average number of megabytes transferred per second for
write operations to drives during the sample period.
drive_w_io Displays the average number of I/O operations that are transferred per
second for write operations to drives during the sample period.
drive_w_ms Displays the average amount of time in milliseconds that the system takes
to respond to write requests to drives over the sample period.
vdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to volumes during the sample period.
vdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to volumes during the sample period.
vdisk_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to volumes over the sample period.
mdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to MDisks during the sample period.
mdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to MDisks during the sample period.
mdisk_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to MDisks over the sample period.
drive_r_mb Displays the average number of megabytes transferred per second for
read operations to drives during the sample period.
drive_r_io Displays the average number of I/O operations that are transferred per
second for read operations to drives during the sample period.
drive_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to drives over the sample period.
iplink_mb The total number of megabytes transferred per second for IP replication
traffic on the system. This value does not include iSCSI host I/O
operations.
iplink_comp_mb Displays the average number of compressed MBps over the IP replication
link during the sample period.
iplink_io The total I/O operations that are transferred per second for IP partnership
traffic on the system. This value does not include iSCSI host I/O
operations.
cloud_up_mb Displays the average number of megabytes per second (MBps) for upload
operations to a cloud account during the sample period.
cloud_up_ms Displays the average amount of time (in milliseconds) it takes for the
system to respond to upload requests to a cloud account during the
sample period.
cloud_down_mb Displays the average number of MBps for download operations to a cloud
account during the sample period.
cloud_down_ms Displays the average amount of time (in milliseconds) that it takes for the
system to respond to download requests to a cloud account during the
sample period.
iser_mb Displays the total number of megabytes transferred per second for iSCSI
Extensions for Remote Direct Memory Access (RDMA) (iSER) traffic on
the system.
iser_io Displays the total I/O operations that are transferred per second for iSER
traffic on the system.
Figure A-1 IBM Spectrum Virtualize Dashboard displaying the System Performance overview
Figure A-2 IBM Spectrum Virtualize Dashboard displaying the Nodes Performance overview
You can also use real-time statistics to monitor CPU utilization and the volume, interface, and
MDisk bandwidth of your system and nodes. Each graph represents 5 minutes of collected
statistics and provides a means of assessing the overall performance of your system.
As shown in Figure A-4, the Performance monitoring window is divided into sections that
provide utilization views for the following resources:
CPU Utilization: The CPU Utilization graph shows the current percentage of CPU
utilization and peaks in utilization. It can also display compression CPU utilization for
systems with compressed volumes.
Volumes: Shows four metrics about the overall volume utilization graphics:
– Read
– Write
– Read latency
– Write latency
Interfaces: The Interfaces graph displays data points for FC, iSCSI, SAS, and IP Remote
Copy (RC) interfaces. You can use this information to help determine connectivity issues
that might affect performance:
– FC
– iSCSI
– SAS
– IP Remote Copy
MDisks: Also shows four metrics on the overall MDisks graphics:
– Read
– Write
– Read latency
– Write latency
You can use these metrics to help determine the overall performance health of the volumes
and MDisks on your system. Consistent unexpected results can indicate errors in
configuration, system faults, or connectivity issues.
The system’s performance is always visible at the bottom of the IBM Spectrum Virtualize
window.
Note: The values that are indicated in the graphs are averaged over a 5-second sample interval.
You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-5.
Figure A-5 Viewing statistics per node or for the entire system
You can also change the metric between MBps and IOPS, as shown in Figure A-6.
For each of the resources, various metrics are available, and you can select which ones are
displayed. For example, as shown in Figure A-8, from the four available metrics for the
MDisks view (Read, Write, Read latency, and Write latency), only Read and Write IOPS are
selected.
IBM Spectrum Control is installed separately on a dedicated system, and is not part of the
IBM Spectrum Virtualize bundle.
For more information about using IBM Spectrum Control to monitor your storage subsystem,
see Harness the full power of your IT infrastructure.
As an alternative to IBM Spectrum Control, a cloud-based tool that is called IBM Storage
Insights is available. It provides a single dashboard that gives you a clear view of all your
IBM block storage by showing performance and capacity information. Because it is a
cloud-based solution, you do not have to install this tool in your environment. Only an agent
is required to collect data from the storage devices.
For more information about IBM Storage Insights, see IBM Storage Insights.
Appendix B. Terminology
This appendix summarizes the IBM Spectrum Virtualize and IBM Storage terms that are
commonly used in this book.
For more information about the complete set of terms that relate to IBM FlashSystem
systems, see IBM Documentation.
Access mode
One of the modes in which a logical unit (LU) in a disk controller system can operate. The
three access modes are image mode, managed space mode, and unconfigured mode. See
also “Image mode” on page 908, “Managed mode” on page 911, and “Unconfigured mode”
on page 921.
Activation key
See “License key” on page 910
Array
An ordered collection, or group, of physical devices (disk drive modules) that are used to
define logical volumes or devices. An array is a group of drives that is designated to be
managed with a redundant array of independent disks (RAID).
Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 920.
Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 920.
Audit Log
An unalterable record of all commands or user interactions that are issued to the system.
Auxiliary volume
The auxiliary volume that contains a mirror of the data on the master volume. See also
“Master volume” on page 911, and “Relationship” on page 917.
Back end
See “Front end and back end” on page 906.
Call Home
Call Home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
Canister
A canister is a single processing unit within a storage system.
Capacity
IBM applies the following definitions to capacity:
Available capacity
The amount of usable capacity that is not yet used in a system, pool, array, or managed
disk (MDisk).
Data reduction
A set of techniques that can be used to reduce the amount of usable capacity that is
required to store data. Examples of data reduction include data deduplication and
compression.
Data reduction savings
The total amount of usable capacity that is saved in a system, pool, or volume through the
application of an algorithm, such as compression or deduplication on the written data. This
saved capacity is the difference between the written capacity and the used capacity.
Effective capacity
The amount of provisioned capacity that can be created in a system or pool without
running out of usable capacity given the current data reduction savings being achieved.
This capacity equals the usable capacity that is divided by the data reduction savings
percentage.
Overhead capacity
An amount of usable capacity that is occupied by metadata in a system or pool and other
data that is used for system operations.
Overprovisioned ratio
The ratio of provisioned capacity to usable capacity in the pool or system.
Overprovisioning
The result of creating more provisioned capacity in a storage system or pool than there is
usable capacity. Overprovisioning occurs when thin provisioning or data reduction
techniques ensure that the used capacity of the provisioned volumes is less than their
provisioned capacity.
Physical Capacity
Physical capacity indicates the total capacity in all storage on the system. Physical
capacity includes all the storage the system can virtualize and assign to pools.
Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are IBM FlashCopy, Metro Mirror (MM), Global Mirror (GM), and
virtualization. See also “FlashCopy” on page 905, “Metro Mirror” on page 912, and
“Virtualized storage” on page 921.
Capacity recycling
Capacity recycling means the amount of provisioned capacity that can be recovered without
causing stress or performance degradation. This capacity identifies the amount of resources
that can be reclaimed and provisioned to other objects in an environment.
Certificate
A digital document that binds a public key to the identity of the certificate owner, which
enables the certificate owner to be authenticated. A certificate is issued by a certificate
authority (CA) and is digitally signed by that authority.
Chain
A set of enclosures that is attached to provide redundant access to the drives inside the
enclosures. Each control enclosure can have one or more chains.
Change volume
A volume that is used in GM that holds earlier consistent revisions of data when changes are
made.
Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), IP, or another
long-distance communication protocol.
Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Rather than being created directly from MDisks, child pools are created
from existing capacity that is allocated to a parent pool. As with parent pools, volumes can be
created that specifically use the capacity that is allocated to the child pool. Child pools are
similar to parent pools with similar properties. Child pools can be used for volume copy
operation. See also “Parent pool” on page 913.
Clone
A copy of a volume on a server at a particular point in time (PiT). The contents of the copy can
be customized while the contents of the original volume are preserved.
Cloud account
An agreement with a cloud service provider (CSP) to use storage or other services at that
service provider. Access to the cloud account is granted by presenting valid credentials.
Cloud container
A cloud container is a virtual object that includes all of the elements, components, or data that
is common to a specific application or data.
Clustered system
A clustered system, which was previously known as a cluster, is a group of up to eight
IBM Storage System canisters (two in each system) that presents a single configuration,
management, and service interface to the user.
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a flash drive. A cold extent also refers to an extent that must
be migrated onto an HDD if it is on a flash drive.
Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.
Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.
Consistency group
A consistency group is a group of copy relationships between virtual volumes or data sets that
are maintained with the same time reference so that all copies are consistent in time. A
consistency group can be managed as a single entity.
Container
A container is a software object that holds or organizes other software objects or entities.
Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the contingency
capacity is the unused real capacity that is maintained. For thin-provisioned volumes that are
not configured to automatically expand, it is the difference between the used capacity and the
new real capacity.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete, and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.
Counterpart SAN
A counterpart SAN is the non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy.
IBM Storage canisters are typically connected to a “redundant SAN” that is made up of two
counterpart SANs. A counterpart SAN is often called a SAN fabric.
Cross-volume consistency
A consistency group property that ensures consistency between volumes when an
application issues dependent write operations that span multiple volumes.
Customer-replaceable unit
An assembly or part that can be replaced in its entirety by a user when any one of its
components fails.
Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to ensure the recoverability of applications.
Data deduplication
Data deduplication is a method of reducing storage needs by eliminating redundant data.
Only one instance of the data is retained on storage media. Other instances of the same data
are replaced with a pointer to the retained instance.
Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.
Data reduction
Data reduction is a set of techniques that can be used to reduce the amount of physical
storage that is required to store data. An example of data reduction includes data
deduplication and compression. See also “Data Reduction Pool” and “Capacity” on page 897.
Deduplication
See “Data deduplication” on page 901.
Discovery
The automatic detection of a network topology change, for example, new and deleted nodes
or links.
Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the IBM Storage cluster likely
have different performance attributes because of the type of disk or RAID array on which they
are installed. The MDisks can be on 15,000 RPM Fibre Channel (FC) or serial-attached Small
Computer System Interface (SCSI) (SAS) disk, nearline (NL) SAS, or Serial Advanced
Technology Attachment (SATA), or even flash drives. Therefore, a storage tier attribute is
assigned to each MDisk, and the default is generic_hdd.
Drive technology
A category of a drive that pertains to the method and reliability of the data storage techniques
being used on the drive. Possible values include enterprise (ENT) drive, NL drive, or
solid-state drive (SSD).
Here are some terms that are associated with DIMMs:
Channel: The memory modules are installed into matching banks, which are usually
color-coded on the system board. These separate channels enable the memory controller
to access each memory module. For the Intel Cascade Lake architecture, there are six
DIMM Memory channels per CPU, and each memory channel has two DIMMs. The
memory bandwidth is tied to each of these channels, and the speed of access for the
memory controller is shared across the pair of DIMMs in that channel.
Slot: Generally, the physical slot that a DIMM can fit into, but in this context, a slot is
DIMM0 or DIMM1, which refers to the first or second slot within a channel on the system
board. There are two slots per memory channel on the IBM SAN Volume Controller (SVC)
SV2 hardware. On the system board, DIMM0 is the blue slot and DIMM1 is the black slot
within each channel.
Rank: A single-rank DIMM has one set of memory chips that is accessed while writing to
or reading from the memory. A dual-rank DIMM is like having two single-rank DIMMs on
the same module, with only one rank accessible at a time. A quad-rank DIMM is,
effectively, two dual-rank DIMMs on the same module. The 32 GB DIMMs are dual-rank.
Easy Tier
Easy Tier is a volume performance function within the IBM Storage family that provides
automatic data placement of a volume’s extents in a multitiered storage pool. The pool
normally contains a mix of flash drives and HDDs. Easy Tier measures host I/O activity on the
volume’s extents and migrates hot extents onto the flash drives to ensure the maximum
performance.
Effective capacity
See “Capacity” on page 897.
Encryption key
The encryption key, also known as master access key, is created and stored on USB flash
drives or on a key server when encryption is enabled. The master access key is used to
decrypt the data encryption key.
Encryption of data-at-rest
Encryption of data-at-rest is the inactive encryption data that is stored physically on the
storage system.
Evaluation mode
Evaluation mode is an Easy Tier operating mode in which the host activity on all the volume
extents in a pool are “measured” only. No automatic extent migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
IBM Storage System. An event ID is used internally in the cluster to identify the error.
Excluded condition
The excluded condition is a status condition. It describes an MDisk that the IBM Storage
System decided is no longer sufficiently reliable to be managed by the cluster. The user must
issue a command to include the MDisk in the cluster-managed storage.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of the extent can range from 16 MB to 8 GB.
External storage
External storage refers to MDisks that are SCSI LUs that are presented by storage systems
that are attached to and managed by the clustered system.
Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.
Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
a software, hardware, or network interruption. See also “Failback”.
Fibre Channel
FC is a technology for transmitting data between computer devices. It is especially suited for
attaching computer servers to shared storage devices and for interconnecting storage
controllers and drives. See also “Zoning” on page 923.
Fibre Channel over IP
Fibre Channel over IP (FCIP) is network storage technology that combines the features of the
Fibre Channel Protocol (FCP) and the IP to connect distributed SANs over large distances.
Field-replaceable unit
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.
Fix procedure
A maintenance procedure that runs within the product application and provides step-by-step
guidance to resolve an error condition.
FlashCopy
FlashCopy refers to a point-in-time (PiT) copy where a virtual copy of a volume is created.
The target volume maintains the contents of the volume at the PiT when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.
FlashCopy mapping
A FlashCopy mapping is a continuous space on a direct-access storage volume that is
occupied by or reserved for a particular data set, data space, or file.
FlashCopy relationship
See “FlashCopy mapping” on page 905.
FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 914.
Flash drive
A data storage device, which is typically removable and rewriteable, that uses solid-state
memory to store persistent data. See also “Flash module”.
Flash module
A modular hardware unit containing flash memory, one or more flash controllers, and
associated electronics. See also “Flash drive”.
Full snapshot
A type of volume snapshot that contains all the volume data. When a full snapshot is created,
an entire copy of the volume data is transmitted to the cloud.
Gigabyte
A gigabyte (GB) is, for processor storage, real and virtual storage, and channel volume, two to
the power of 30 or 1,073,741,824 bytes. For disk storage capacity and communications
volume, it is 1,000,000,000 bytes.
Global Mirror
GM is a method of asynchronous replication that maintains data consistency across multiple
volumes within or across multiple systems. GM is used where distances between the source
site and target site cause increased latency beyond what the application can accept.
GPFS cluster
A system of nodes that are defined as being available for use by GPFS file systems.
GPFS snapshot
A PiT copy of a file system or file set.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap
(64 kibibytes (KiB) or 256 KiB) in the IBM Storage System. A grain is also the unit to extend
the real size of a thin-provisioned volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).
Hop
One segment of a transmission path between adjacent nodes in a routed network.
Host
A physical or virtual computer system that hosts computer applications, with the host and the
applications using storage.
Host cluster
A configured set of physical or virtual hosts that share one or more storage volumes to
increase scalability or availability of computer applications.
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or internet Small
Computer Systems Interface (iSCSI) hostnames for LUN mapping. For each host ID, SCSI
IDs are mapped to volumes separately. The intent is to have a one-to-one relationship
between hosts and host IDs, although this relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. Host mapping is equivalent to LUN masking.
Host object
A logical representation of a host within a storage system that is used to represent the host
for configuration tasks.
Host zone
A zone that is defined in the SAN fabric in which the hosts can address the system.
Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a flash drive.
IBM HyperSwap
Pertaining to a function that provides continuous, transparent availability against storage
errors and site failures, and is based on synchronous replication.
Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume. See also
“Managed mode” on page 911 and “Unconfigured mode” on page 921.
Image volume
An image volume is a volume in which a direct block-for-block conversion exists from the
MDisk to the volume.
I/O group
Each pair of SVC cluster nodes is known as an input/output (I/O) group. An I/O group has a
set of volumes that are associated with it that are presented to host systems. Each SVC node
is associated with exactly one I/O group. The nodes in an I/O group provide a failover and
failback function for each other.
Incremental snapshot
A type of volume snapshot where the changes to a local volume relative to the volume's
previous snapshot are stored on cloud storage.
Internal storage
Internal storage refers to an array of MDisks and drives that are held in IBM Storage System
enclosures.
Internet Protocol
Internet Protocol (IP) is a protocol that routes data through a network or interconnected
networks. This protocol acts as an intermediary between the higher protocol layers and the
physical network.
iSCSI alias
An alternative name for the iSCSI-attached host.
iSCSI initiator
An initiator functions as an iSCSI client. An initiator typically serves the same purpose to a
computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI
devices (such as HDDs and tape changers), an iSCSI initiator sends SCSI commands over
an IP network.
iSCSI name
A name that identifies an iSCSI target adapter or an iSCSI initiator adapter. An iSCSI name
can be an iSCSI Qualified Name (IQN) or an extended-unique identifier (EUI). Typically, this
identifier has the following format: iqn.datecode.reverse domain.
iSCSI session
The interaction (conversation) between an iSCSI Initiator and an iSCSI Target.
Key server
A server that negotiates the values that determine the characteristics of a dynamic virtual
private network (VPN) connection that is established between two endpoints.
See “Encryption key manager / server” on page 903.
Latency
The time interval between the initiation of a send operation by a source task and the
completion of the matching receive operation by the target task. More generally, latency is the
time between a task initiating data transfer and the time that transfer is recognized as
complete at the data destination.
Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.
License key
An alphanumeric code that activates a licensed function on a product.
Local fabric
The local fabric is composed of SAN components (switches, cables, and other components)
that connect the components (nodes, hosts, and switches) of the local cluster together.
Logical drive
See “Volume” on page 922.
LUN masking
A process where a host object can detect more LUNs than it is intended to use, and the
device-driver software masks the LUNs that are not to be used by this host.
Machine signature
A string of characters that identifies a system. A machine signature might be required to
obtain a license key.
Managed disk
An MDisk is a SCSI disk that is presented by a RAID controller and managed by IBM Storage
Systems. The MDisk is not visible to host systems on the SAN.
Managed mode
An access mode that enables virtualization functions to be performed. See also “Image
mode” on page 908 and “Virtualized storage” on page 921.
Management node
A node that is used for configuring, administering, and monitoring a system.
Master volume
In most cases, the volume that contains a production copy of the data and that an application
accesses. See also “Auxiliary volume” on page 896, and “Relationship” on page 917.
Metro Mirror
MM is a method of synchronous replication that maintains data consistency across multiple
volumes within the system. MM is used when the write latency that is caused by the distance
between the source site and target site is acceptable to application performance.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the IBM Storage System as copy 0 and the secondary copy is
known within the IBM Storage System as copy 1.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) is an FC feature whereby multiple FC N_Port IDs can share a
single physical N_Port.
Node
A single processing unit within a system. For redundancy, multiple nodes are typically
deployed to make up a system.
Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and SAS expansion ports. Node canisters are recognized on IBM Storage System
products. In SVC, all these components are spread across the whole system chassis, so SVC
is considered in terms of whole nodes rather than node canisters.
Node rescue
The process by which a node with no valid software is installed on its HDD, and can copy
software from another node that is connected to the same FC fabric.
NVMe Qualified Name
NVMe Qualified Names (NQNs) are used to uniquely describe a host or NVM subsystem for
identification and authentication. The NQN for the NVM subsystem is specified in the Identify
Controller data structure. An NQN is permanent for the lifetime of the host or NVM
subsystem.
Object storage
Object storage is a general term that refers to the entity in which cloud object storage
organizes, manages, and stores units of storage or just objects.
Overprovisioned
See “Capacity” on page 897.
Overprovisioned ratio
See “Capacity” on page 897.
Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
Ownership Groups
The Ownership Groups feature provides a method of implementing a multi-tenant solution on
the system. Ownership groups enable the allocation of storage resources to several
independent tenants with the assurance that one tenant cannot access resources that are
associated with another tenant. Ownership groups restrict access for users in the ownership
group to only those objects that are defined within that ownership group.
Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes. See also “Child pool” on page 899.
Partner node
The other node that is in the I/O group to which this node belongs.
Partnership
In MM or GM operations, the relationship between two clustered systems. In a
clustered-system partnership, one system is defined as the local system and the other
system as the remote system.
Performance policy
A policy that specifies performance characteristics, for example quality of service (QoS). See
also “Pool”.
Point-in-time copy
A PiT copy is an instantaneous copy that the FlashCopy service makes of the source volume.
See also “FlashCopy service” on page 905.
Pool
See “Storage pool (MDisk group)” on page 919.
Pool pair
Two storage pools that are required to balance workload. Each storage pool is controlled by a
separate node.
Preferred node
When you create a volume, you can specify a preferred node. Many of the multipathing driver
implementations that the system supports use this information to direct I/O to the preferred
node. The other node in the I/O group is used only if the preferred node is not accessible. If
you do not specify a preferred node for a volume, the system selects the node in the I/O group
that has the fewest volumes to be the preferred node. After the preferred node is chosen, it
can be changed only when the volume is moved to a different I/O group. The management
GUI provides a wizard that moves volumes between I/O groups without disrupting host I/O
operations.
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.
Primary volume
In a stand-alone MM or GM relationship, the target of write operations that are issued by the
host application. See also “Relationship” on page 917.
Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.
Provisioned capacity
See “Capacity” on page 897.
Provisioning group
A provisioning group is an object that represents a set of MDisks that share physical
resources. Provisioning groups are used for capacity reporting and monitoring of
overprovisioned storage resources.
Public fabric
A public fabric is where you configure one SAN per fabric so that it is dedicated for host
attachment, storage system attachment, and RC operations. This SAN is referred to as a
public SAN. You can configure the public SAN to enable IBM Storage System node-to-node
communication also. You can optionally use the -localportfcmask parameter of the chsystem
command to constrain the node-to-node communication to use only the private SAN.
Qualifier
A value that provides more information about a class, association, indication, method,
method parameter, instance, property, or reference.
A modifier that makes a name unique.
Queue depth
The number of input/output (I/O) operations that can be run in parallel on a device.
Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.
Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and then the
last disk (index 2). The tie is broken by the node that locks them first.
Quota
The amount of disk space and number of files and directories that are assigned as upper
limits for a specified user, group of users, or file set.
RAID controller
See “Node canister” on page 912.
Raw capacity
See “Capacity” on page 897.
Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.
See also “Capacity” on page 897.
RAID 0
A data striping technique, which is commonly called RAID Level 0 or RAID 0 because of its
similarity to common RAID data-mapping techniques. However, it includes no data
protection, so the appellation RAID is a misnomer. RAID 0 is also known as data striping.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.
RAID 10
A collection of two or more physical drives that present to the host an image of one or more
drives. In the event of a physical device failure, the data can be read or regenerated from the
other drives in the RAID due to data redundancy.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all the array's virtual disks (VDisks) in the presence of two concurrent disk failures.
Rebuild area
Reserved capacity that is distributed across all drives in a RAID. If a drive in the array fails, the
lost array data is systematically restored into the reserved capacity, returning redundancy to
the array. The duration of the restoration process is minimized because all drive members
simultaneously participate in restoring the data. See also “Distributed redundant array of
independent disks” on page 902.
Recovery key
See “Encryption recovery key” on page 903.
Redundant storage area network
A redundant SAN is a SAN configuration in which there is no single point of failure (SPOF).
Therefore, data traffic continues no matter what component fails. Connectivity between the
devices within the SAN is maintained (although possibly with degraded performance) when
an error occurs. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics). In this configuration, if one path of the
counterpart SAN is destroyed, the other counterpart SAN path keeps functioning. See also
“Counterpart SAN” on page 901.
Relationship
In MM or GM, a relationship is the association between a master volume and an auxiliary
volume. These volumes also have the attributes of a primary or secondary volume. See also
“Auxiliary volume” on page 896, “Master volume” on page 911, “Primary volume” on
page 914, and “Secondary volume” on page 917.
Reliability, availability, and serviceability
Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.
Remote Copy
See “Global Mirror” on page 906 and “Metro Mirror” on page 912.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and other
components) that connect the components (nodes, hosts, and switches) of the remote cluster
together. Significant distances can exist between the components in the local cluster and
those components in the remote cluster.
SCSI initiator
The SCSI initiator is the system component that initiates communications with attached
targets.
SCSI target
A device that acts as a subordinate to a SCSI initiator and consists of a set of one or more
LUs, each with an assigned LUN. The LUs on the SCSI target are typically I/O devices.
Secondary volume
Pertinent to RC, the volume in a relationship that contains a copy of data that is written by the
host application to the primary volume.
Sequential volume
A volume that uses extents from a single MDisk.
Serial-attached SCSI
Serial-attached SCSI (SAS) is a method of accessing computer peripheral devices that uses
a serial (1 bit at a time) means of digital data transfer over thin cables. The method is
specified in the ANSI standard that is called SAS. In the business enterprise, SAS is useful
for access to mass storage devices, particularly external HDDs.
Snapshot
A snapshot is an image backup type that consists of a PiT view of a volume.
Solid-state drive
An SSD or flash drive is a disk that is made from solid-state memory and therefore has no
moving parts. Most SSDs use NAND-based flash memory technology. It is defined to the IBM
Storage System as a disk tier generic_ssd.
Space efficient
See “Thin provisioning” on page 920.
Spare
An extra storage component, such as a drive or tape, that is predesignated for use as a
replacement for a failed component.
Spare drive
A drive that is reserved in an array for rebuilding a failed drive in a RAID. If a drive fails in a
RAID, a spare drive from within that device adapter pair is selected to rebuild it.
Spare goal
The optimal number of spares that are needed to protect the drives in the array from failures.
The system logs a warning event when the number of spares that protect the array drops
below this number.
Space-efficient volume
See “Thin-provisioned volume” on page 920.
Stand-alone relationship
In FlashCopy, MM, and GM, relationships that do not belong to a consistency group and that
have a null consistency-group attribute.
Statesave
Binary data collection that is used for a problem determination by IBM service support.
Storage-class memory
Storage-class memory (SCM) is a type of NAND flash that includes a power source to ensure
that data is not lost due to a system crash or power failure. SCM treats non-volatile memory
as DRAM and includes it in the memory space of the server. Access to data in that space is
quicker than access to data in local, PCI-connected SSDs, direct-attached HDDs, or external
storage arrays. SCM read/write technology is up to 10 times faster than NAND flash drives
and is more durable.
Storage node
A component of a storage system that provides internal storage or a connection to one or
more external storage systems.
Striped
Pertaining to a volume that is created from multiple MDisks that are in the storage pool.
Extents are allocated on the MDisks in the order that is specified.
Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a RAID, is split into smaller chunks of storage that are known as extents. These extents are
then concatenated by using various policies to make volumes. See also “Asymmetric
virtualization” on page 896.
Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 896.
Syslog
A standard for transmitting and storing log messages from many sources to a centralized
location to enhance system management.
T10 DIF
T10 DIF is a Data Integrity Field (DIF) extension to SCSI to enable end-to-end protection of
data from a host application to physical media.
Thin-provisioning savings
See “Capacity” on page 897.
Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.
Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity.
Throttles
Throttling is a mechanism to control the amount of resources that are used when the system
is processing I/Os on supported objects. The system supports throttles on hosts, host
clusters, volumes, copy offload operations, and storage pools. If a throttle limit is defined, the
system either processes the I/O for that object or delays the processing of the I/O to free
resources for more critical I/O operations.
Throughput
A measure of the amount of information that is transmitted over a network in a period.
Throughput is measured in bits per second (bps), kilobits per second (Kbps), or megabits per
second (Mbps).
Tie-breaker
When a cluster is split into two groups of nodes, the role of tie-breaker in a quorum device
decides which group continues to operate as the system and handle all I/O requests.
Transparent Cloud Tiering
Transparent Cloud Tiering (TCT) is a separately installable feature of IBM Spectrum Scale
that provides a native cloud storage tier.
Trial license
A temporary entitlement to use a licensed function.
Unconfigured mode
An access mode in which an external storage MDisk is not configured in the system, so no
operations can be performed. See also “Image mode” on page 908 and “Managed mode” on
page 911.
Unique identifier
A unique identifier (UID) is an identifier that is assigned to storage system LUs when they are
created. It is used to identify the LU regardless of the LUN, the status of the LU, or whether
alternative paths exist to the same device. Typically, a UID is used only once.
Usable capacity
The amount of capacity that is provided for storing data on a system, pool, array, or MDisk
after formatting and RAID techniques are applied.
Used capacity
The amount of usable capacity that is taken up by data or capacity in a system, pool, array, or
MDisk after data reduction techniques are applied.
VDisk-to-host mapping
See “Host mapping” on page 907.
Virtual capacity
The amount of storage that is available. In a thin-provisioned volume, the virtual capacity can
be different from the real capacity. In a standard volume, the virtual capacity and real capacity
are the same.
Virtual disk
See “Volume” on page 922.
Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created that
contains several storage systems. Storage systems from various vendors can be used. The
pool can be split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 898.
Virtualized storage
Virtualized storage is physical storage that has virtualization techniques that are applied to it
by a virtualization engine.
Volume
A volume is an IBM Storage System logical device that appears to host systems that are
attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O group. A
volume has a preferred node within the I/O group.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.
Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or an RC relationship. In these cases, the
system fails to delete the volume unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active. Active means
that the system detected recent I/O activity to the volume from any host.
Volume snapshot
A collection of objects on a cloud storage account that represents the data of a volume at a
particular time.
Worldwide ID
A worldwide ID (WWID) is a name identifier that is unique worldwide and that is represented
by a 64-bit value that includes the IEEE-assigned OUI.
Worldwide name
Worldwide name (WWN) is a 64-bit, unsigned name identifier that is unique.
Worldwide port name
Worldwide port name (WWPN) is a unique 64-bit identifier that is associated with an FC
adapter port. The WWPN is assigned in an implementation-independent and
protocol-independent manner. See also “Worldwide node name” on page 922.
Write-through mode
Write-through mode is a process in which data is written to a storage device at the same
time that the data is cached.
Written capacity
See “Capacity” on page 897.
Zoning
The grouping of multiple ports to form a virtual and private storage network. Ports that are
members of a zone can communicate with each other, but are isolated from ports in other
zones. See also “Fibre Channel” on page 904.
Note: If a task completes in the GUI, the associated CLI command is always displayed in
the details, as shown throughout this book.
Using SSH keys with a passphrase is more secure than logging in with a username and
password because authenticating to the system requires both the private key and the
passphrase. With a username and password login, only the password is required to obtain
access to the system.
When SSH keys are used without a passphrase, logging in to a system is easier because you
must provide only the private key when performing the login and you are not prompted for a
password. This option is less secure than using SSH keys with a passphrase.
To enable CLI access with SSH keys, complete the following steps:
1. Generate a public key and a private key as a pair.
2. Upload a public key to the IBM Spectrum Virtualize system by using the GUI.
3. Configure a client SSH tool to authenticate with the private key.
4. Establish a secure connection between the client and the system.
SSH is the communication vehicle between the management workstation and the IBM
Spectrum Virtualize system. The SSH client provides a secure environment from which to
connect to a remote machine. It uses the principles of public and private keys for
authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the storage system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system.
Each key pair is associated with a user-defined ID string that can consist of up to 256
characters. Up to 100 keys can be stored on the system. New IDs and keys can be added,
and unwanted IDs and keys can be deleted. To use the CLI, an SSH client must be installed
on that system. To use the CLI with SSH keys, the SSH client is required. An SSH key pair
also must be generated on the client system, and the client’s SSH public key must be stored
on the IBM Spectrum Virtualize systems.
Download the following tools:
PuTTY SSH client: putty.exe
PuTTY key generator: puttygen.exe
Note: Larger SSH keys, such as 2048 bits, are also supported.
To generate keys: The blank area that is indicated by the message is the large blank
rectangle in the GUI inside the Key field. Continue to move the mouse pointer over the
blank area until the progress bar reaches the far right. This action generates random
characters based on the cursor location to create a unique key pair.
3. After the keys are generated, save them for later use. Click Save public key.
4. You are prompted to enter a name (for example, sshkey.pub) and a location for the public
key (for example, C:\Keys\). Enter this information and click Save.
Ensure that you record the SSH public key name and location because this information
must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Add the .pub extension to the file name (for example, sshkey.pub) to easily
differentiate the SSH public key from the SSH private key.
5. Click Save private key. A warning message is displayed (see Figure C-3). Click Yes to
save the private key without a passphrase.
Note: It is possible to use a passphrase for an SSH key. Although this action increases
security, it adds an extra step when logging in with the SSH key because the passphrase
must be entered.
6. When prompted, enter a name (for example, sshkey.ppk), select a secure place as the
location, and click Save.
Private Key Extension: The PuTTY key generator saves the PuTTY private key (PPK)
with the .ppk extension. This is a proprietary PuTTY format, and the keys are not directly
interchangeable with OpenSSH clients. A utility is available to convert keys between
PuTTY and OpenSSH formats if you want to use the same keys in both environments.
3. To upload the public key, click Browse, open the folder where you stored the public SSH
key, and select the key.
4. Click OK and the key is uploaded, as shown in Figure C-6.
5. Check in the GUI to ensure that the SSH key is imported successfully (see Figure C-7).
2. In the upper right, select SSH as the connection type. In the “Close window on exit”
section, select Only on clean exit (see Figure C-9 on page 932), which ensures that if
any connection errors occur, they are displayed in the user’s window.
4. In the Category window, on the left side of the PuTTY Configuration window (see
Figure C-10), select Connection → SSH to open the PuTTY SSH Configuration window.
In the SSH protocol version section, select 2.
5. In the Category window on the left, select Connection → SSH → Auth. More options are
displayed for controlling SSH authentication.
6. In the “Private key file for authentication” field in Figure C-11, browse to or enter the fully
qualified directory path and file name of the SSH client private key file that was created (in
this example, C:\Users\Tools\Putty\privatekey2019.ppk is used).
7. In the Category window, click Session to return to the “Basic options for your PuTTY
session” view.
8. Enter the following information in the fields in the right pane (see Figure C-12):
– Host Name (or IP address): Specify the hostname or system IP address of the IBM
Spectrum Virtualize system.
– Saved Sessions: Enter a session name.
9. Click Save to save the new session (Figure C-12).
11.If a PuTTY Security Alert opens as shown in Figure C-14, confirm it by clicking Yes.
12.As shown in Figure C-15, PuTTY now connects to the system automatically by using the
user ID that was specified earlier, without prompting for a password.
The CLI is now configured for IBM Spectrum Virtualize system administration.
The OpenSSH suite consists of various tools. The following tools are used to generate the
SSH keys, transfer the SSH keys to a remote system, and establish a connection to an IBM
Spectrum Virtualize device by using SSH:
ssh: OpenSSH SSH client
ssh-keygen: Tool to generate SSH keys
scp: Tool to transfer files between hosts
You also must specify the path and name for the SSH keys. The name that you provide is the
name of the private key. The public key has the same name, but with extension .pub. In
Example C-1 on page 935, the path is /.ssh/, the name of the private key is sshkey, and the
name of the public key is sshkey.pub.
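For example, a key pair with that path and name can be generated by a command similar to
the following sketch. The RSA key type and 2048-bit size are illustrative choices, not
requirements, and the command prompts for an optional passphrase (pressing Enter twice
creates the keys without one).
# ssh-keygen -t rsa -b 2048 -f /.ssh/sshkey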
Note: Using a passphrase for the SSH key is optional. If a passphrase is used, security is
increased, but more steps are required to log in with the SSH key because the user must
enter the passphrase.
To upload the public key by using the CLI, complete the following steps:
1. On the SSH client (for example, AIX or Linux host), run scp to copy the public key to the
IBM Storage System. The basic syntax for the command is:
scp <file> <user>@<hostname_or_IP_address>:<path>
The directory /tmp in the IBM Spectrum Virtualize active configuration node can be used
to store the public key temporarily. Example C-2 shows the command to copy the newly
generated public key to the IBM Spectrum Virtualize system.
2. Log in to the storage system by using SSH and run the chuser command (as shown in
Example C-3) to associate the public SSH key with a user.
Running the lsuser command, as shown in Example C-3, indicates whether a user has a
configured SSH key in the ssh_key field. A sketch of both steps follows this list.
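The following lines sketch what steps 1 and 2 might look like end to end. The IP address
192.168.100.1, the user admin, and the key file path are placeholder values, and the
-keyfile parameter of the chuser command should be verified against the CLI reference for
your code level. In the detailed lsuser view, the ssh_key field shows yes after a key is
associated with the user.
# scp /.ssh/sshkey.pub admin@192.168.100.1:/tmp/
IBM_Storage System:ITSO:admin>chuser -keyfile /tmp/sshkey.pub admin
IBM_Storage System:ITSO:admin>lsuser admin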
Connecting to an IBM Spectrum Virtualize system
Now that the SSH key is uploaded to the IBM Spectrum Virtualize system and assigned to a
user account, you can connect to the device by running the ssh command with the following
options:
ssh -i <SSH_private_key> <user>@<IP_address_or_hostname>
Example C-4 shows the SSH command that is running from an AIX server and connecting to
the storage system with an SSH private key and no password prompt.
Example: C-4 Connecting to IBM Storage System with an SSH private key
# ssh -i /.ssh/sshkey admin@192.168.100.1
IBM_Storage System:ITSO:admin>
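Because the private key is supplied on the command line, the same form can be used to run
a single CLI command non-interactively. The following sketch appends the lssystem
command to the connection that is shown in Example C-4; the key path, user, and IP address
are the same placeholder values.
# ssh -i /.ssh/sshkey admin@192.168.100.1 lssystem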
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
IBM FlashSystem 5000 Family Products, SG24-8449
IBM FlashSystem 9100 Architecture, Performance, and Implementation, SG24-8425
IBM FlashSystem 9200 and 9100 Best Practices and Performance Guidelines,
SG24-8448
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521
Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and
V5030) with IBM Spectrum Virtualize V8.2.1, SG24-8162
Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.2.1, SG24-7938
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.2.1, SG24-7933
Introduction and Implementation of Data Reduction Pools and Deduplication, SG24-8430
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
SCSI Small Computer System Interface
SCU storage capacity unit
SDD Subsystem Device Driver
SDDDSM Subsystem Device Driver Device Specific Module
SDN software-defined network
SEM Secondary Expander Module
SFF small form factor
SFP small form factor pluggable
SLA service-level agreement
SLP Service Location Protocol
SME subject matter expert
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SPOF single point of failure
SPOFs single points of failure
SRM System Resource Manager
SSD solid-state drive
SSH Secure Shell
SSIC System Storage Interoperation Center
SSL Secure Sockets Layer
SSR IBM System Services Representative
STAT Storage Tier Advisor Tool
SVC SAN Volume Controller
T0 time-zero
TB terabytes
TBps terabytes per second
TCO total cost of ownership
TCT Transparent Cloud Tiering
TLC Triple Level Cell
TPGS Target Port Group Support
TRAID traditional RAID
UDID unit device identifier
UID unique identifier
UPN User Principal Name
URL Uniform Resource Locator
VAAI vStorage APIs for Array Integration
VASA vSphere APIs for Storage Awareness
VC virtual connection
VDisk virtual disk
VIOS Virtual I/O Server
VLAN virtual local area network
VM virtual machine
VMFS Virtual Machine File System
VPD vital product data
VPN virtual private network
VSAN virtual storage area network
VSR Variable Stripe RAID
VVOL VMware vSphere Virtual Volume
WWID worldwide ID
WWN worldwide name
WWNN worldwide node name
WWPN worldwide port name
SG24-8492-00
ISBN 0738459364
Printed in U.S.A.
ibm.com/redbooks