ETERNUS DX80 S2/DX90 S2 Disk Storage System Overview
Preface
Fujitsu would like to thank you for purchasing our ETERNUS DX80 S2/DX90 S2 Disk storage system. The ETERNUS DX80 S2/DX90 S2 Disk storage system is designed to be connected to Fujitsu servers (PRIMEQUEST or PRIMERGY) or to non-Fujitsu servers.

This manual describes the basic knowledge that is required to use the ETERNUS DX80 S2/DX90 S2 Disk storage system. It is intended for use of the ETERNUS DX80 S2/DX90 S2 Disk storage system in regions other than Japan. Please carefully review the information outlined in this manual.

Sixth Edition, August 2012
All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates. HP-UX is a trademark of Hewlett-Packard Company in the U.S. and other countries. Linux is a trademark or registered trademark of Linus Torvalds in the U.S. and other countries. Red Hat, RPM, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc. in the USA and other countries. SUSE is a registered trademark of SUSE Linux AG., a subsidiary of Novell, Inc. AIX is a trademark of IBM Corp. VMware, VMware logos, Virtual SMP, and VMotion are either registered trademarks or trademarks of VMware, Inc. in the U.S. and/or other countries. The company names, product names and service names mentioned in this document are registered trademarks or trademarks of their respective companies.
Chapter 1 Overview
This chapter provides an overview and describes the features of the ETERNUS DX80 S2/DX90 S2 Disk storage system.
Chapter 2 Specifications
This chapter describes the specifications, the function specifications, and the operating environment of the ETERNUS DX80 S2/DX90 S2 Disk storage system.
Naming Conventions
Product names
The following abbreviations are used for Microsoft Windows Server.
The following official names are abbreviated as "Windows Server 2003":
- Microsoft Windows Server 2003, Datacenter Edition for Itanium-based Systems
- Microsoft Windows Server 2003, Enterprise Edition
- Microsoft Windows Server 2003, Enterprise x64 Edition
- Microsoft Windows Server 2003 R2, Enterprise Edition
- Microsoft Windows Server 2003 R2, Enterprise x64 Edition
- Microsoft Windows Server 2003, Standard Edition
- Microsoft Windows Server 2003, Standard x64 Edition
- Microsoft Windows Server 2003 R2, Standard Edition
- Microsoft Windows Server 2003 R2, Standard x64 Edition
- Microsoft Windows Storage Server 2003 R2, Standard Edition

The following official names are abbreviated as "Windows Server 2008":
- Microsoft Windows Server 2008 Datacenter
- Microsoft Windows Server 2008 Datacenter (64-bit)
- Microsoft Windows Server 2008 R2 Datacenter (64-bit)
- Microsoft Windows Server 2008 Enterprise
- Microsoft Windows Server 2008 Enterprise (64-bit)
- Microsoft Windows Server 2008 R2 Enterprise (64-bit)
- Microsoft Windows Server 2008 Standard
- Microsoft Windows Server 2008 Standard (64-bit)
- Microsoft Windows Server 2008 R2 Standard (64-bit)
- Microsoft Windows Server 2008 for Itanium-Based Systems

The following official names are abbreviated as "Windows Server 2012":
- Microsoft Windows Server 2012 Datacenter (64-bit)
- Microsoft Windows Server 2012 Enterprise (64-bit)
- Microsoft Windows Server 2012 Standard (64-bit)
Warning Notations
Warning signs are shown throughout this manual in order to prevent injury to the user and/or material damage. These signs are composed of a symbol and a message describing the recommended level of caution. The following explains the symbol, its level of caution, and its meaning as used in this manual.
CAUTION
This symbol indicates the possibility of minor or moderate personal injury, as well as damage to the ETERNUS DX Disk storage system and/or to other users and their property, if the ETERNUS DX Disk storage system is not used properly.
To avoid damaging the ETERNUS DX Disk storage system, pay attention to the following points when cleaning the ETERNUS DX Disk storage system: - Make sure to disconnect the power when cleaning. - Be careful that no liquid seeps into the ETERNUS DX Disk storage system when using cleaners, etc. - Do not use alcohol or other solvents to clean the ETERNUS DX Disk storage system.
Table of Contents

Chapter 1  Overview
Chapter 2  Specifications
  2.1
    2.1.1
    2.1.2
  2.2  Function Specifications
  2.3  Supported OSes
Chapter 3  Connection Configurations
  3.1  Host Connections (SAN)
  3.2  Remote Connections (SAN/WAN)
  3.3  LAN
  3.4  Power Synchronization
Chapter 4  System Configuration
  4.1  RAID Levels
  4.2  RAID Groups
  4.3  Volumes
  4.4  Drives
    4.4.1  User Capacity of Drives
    4.4.2  User Capacity for Each RAID Level
    4.4.3  Drive Installation
  4.5  Hot Spares
    4.5.1
    4.5.2
Chapter 5  Basic Functions
  5.1  Data Protection
    5.1.1  Data Block Guard
    5.1.2  Disk Patrol
    5.1.3  Redundant Copy
    5.1.4  Rebuild/Copyback
  5.2  Security
    5.2.1  Account Management
    5.2.2  User Authentication
    5.2.3  Host Affinity
    5.2.4  Data Encryption
  5.3
    5.3.1, 5.3.2, 5.3.3, 5.3.4
  5.4
    5.4.1, 5.4.2
  5.5
    5.5.1 to 5.5.7
  5.6
    5.6.1
Chapter 6  Optional Functions
  6.1
    6.1.1  Thin Provisioning
    6.1.2  Flexible Tier (Automatic Storage Layering)
  6.2  Backup (Advanced Copy)
    6.2.1
    6.2.2  Remote Copy
    6.2.3  Available Advanced Copy Combinations
List of Figures

Figure 1.1  External view
Figure 3.1  Connection configuration
Figure 4.1  RAID0 concept
Figure 4.2  RAID1 concept
Figure 4.3  RAID1+0 concept
Figure 4.4  RAID5 concept
Figure 4.5  RAID5+0 concept
Figure 4.6  RAID6 concept
Figure 4.7  Example of a RAID group
Figure 4.8  Volume concept
Figure 4.9  Drive combination 1
Figure 4.10  Drive combination 2
Figure 4.11  Drive combination 3
Figure 4.12  Hot spares
Figure 5.1  Data block guard function
Figure 5.2  Disk check
Figure 5.3  Redundant Copy function
Figure 5.4  Rebuild/Copyback function
Figure 5.5  Account management
Figure 5.6  Host affinity
Figure 5.7  Data encryption
Figure 5.8  Example of RAID Migration 1
Figure 5.9  Example of RAID Migration 2
Figure 5.10  Example of Logical Device Expansion 1
Figure 5.11  Example of Logical Device Expansion 2
Figure 5.12  Example of LUN Concatenation
Figure 5.13  Wide Striping
Figure 5.14  Eco-mode mechanism
Figure 5.15  Power consumption visualization
Figure 5.16  Event notification
Figure 5.17  Assigned CM
Figure 5.18  Host response (connection operation mode)
Figure 5.19  Device time synchronization
Figure 5.20  Power control using Wake On LAN
Figure 5.21  Storage Migration
Figure 6.1  Example of Thin Provisioning
Figure 6.2  Flexible Tier (automatic storage layering)
Figure 6.3  Example of Advanced Copy
Figure 6.4  REC
Figure 6.5  Restore OPC
Figure 6.6  EC or REC Reverse
Figure 6.7  Multiple copy
Figure 6.8  Multiple copy (including SnapOPC+)
Figure 6.9  Multiple copy (using the Consistency mode)
Figure 6.10  Cascade copy
Figure 6.11  Cascade copy (using three copy sessions)
List of Tables

Table 2.1  ETERNUS DX80 S2 specifications
Table 2.2  ETERNUS DX90 S2 specifications
Table 2.3  ETERNUS DX80 S2/DX90 S2 function specifications
Table 2.4  Supported servers and OSes (FC interface)
Table 2.5  Supported servers and OSes (iSCSI interface)
Table 2.6  Supported servers and OSes (FCoE interface)
Table 2.7  Supported servers and OSes (SAS interface)
Table 4.1  User capacity for each RAID level
Table 4.2  Recommended number of drives per RAID group
Table 4.3  RAID configurations that can be registered in a Thin Provisioning Pool or a Flexible Tier Pool
Table 4.4  Volumes that can be created
Table 4.5  Drive characteristics
Table 4.6  User capacity per drive
Table 4.7  Formula for calculating user capacity for each RAID level
Table 4.8  Recommended number of hot spares for each drive type
Table 4.9  Hot spare selection
Table 5.1  Type of client public key
Table 5.2  Host affinity function specifications
Table 5.3  Data encryption function specifications
Table 6.1  Controlling software
Table 6.2  List of functions (copy methods)
Table 6.3  REC data transfer mode
Table 6.4  Available cascade copy combinations
Chapter 1 Overview
This chapter provides an overview and describes the features of the ETERNUS DX Disk storage system.

Figure 1.1 External view
Drives
The ETERNUS DX Disk storage system supports high performance online disks (*2), cost effective Nearline disks with large amounts of capacity (*3), and SSDs that provide super-fast access. Up to 120 drives can be installed in the ETERNUS DX80 S2. Up to 240 drives can be installed in the ETERNUS DX90 S2. 2.5" drives and 3.5" drives can be installed together in the same ETERNUS DX Disk storage system.
*2: Disks with high performance and high reliability, for frequently accessed data. SAS disks are provided as online disks.
*3: Disks with high capacity and high reliability, for data backup. Nearline SAS disks are provided as Nearline disks.
Host interfaces
Host interfaces can be selected from FC 8Gbit/s, iSCSI 10Gbit/s, iSCSI 1Gbit/s, FCoE 10Gbit/s, and SAS 6Gbit/s. Up to eight ports can be installed in a single ETERNUS DX Disk storage system. Different types of host interfaces can exist together in the same ETERNUS DX Disk storage system.
Cache capacity
The maximum capacity of cache memory that can be installed in a single ETERNUS DX80 S2 is 4GB. The maximum capacity of cache memory that can be installed in a single ETERNUS DX90 S2 is 8GB.
Model upgrade
To support system scalability after installation, an ETERNUS DX Disk storage system can be upgraded to a higher-end model. The ETERNUS DX80 S2 can be upgraded to the ETERNUS DX90 S2, and the ETERNUS DX80 S2/DX90 S2 can be upgraded to the ETERNUS DX410 S2/DX440 S2.
Redundant configurations
Important components are duplicated to maintain high fault tolerance. This allows hot swapping of failed components without interrupting operations.
Data integrity
The ETERNUS DX Disk storage system adds check codes to all data that is saved. The data is verified at multiple checkpoints on transmission paths to ensure data integrity.
Data Encryption
The ETERNUS DX Disk storage system has the ability to encrypt data as it is being written. Encryption with firmware is supported by default. Together with the world standard 128bit AES method (*4), Fujitsu's own high performance encryption method is also supported. In addition, Self Encrypting Drives (SEDs) are available. Since each drive performs self encryption instead of the firmware, loads that are usually caused by encryption using firmware are removed and data can be encrypted without reducing performance. SEDs use the 256bit AES method.
*4: Advanced Encryption Standard: Federal Information Processing Standards method
vStorage APIs for Array Integration (VAAI) support:
- Full Copy (high speed replication of virtual machines)
- Block Zeroing (improved initialization)
- Hardware Assisted Locking (improved exclusion control)
- Thin Provisioning Space Reclamation (efficient release of unused space)
Virtualization
Virtualization technology enables a larger capacity than the physical disk capacity to be presented to the server. Multiple physical disks are collectively managed as a disk pool, and the necessary capacity is flexibly allocated according to write requests from the server. The ETERNUS DX Disk storage system has a function that balances written areas on a per-volume basis to prevent access from concentrating on a specific RAID group among the RAID groups that make up a disk pool.
Automatic layering
The ETERNUS DX Disk storage system supports automatic storage layering. This function detects how frequently data is accessed and redistributes the data between different drive types according to the policy that is set. In collaboration with ETERNUS SF Storage Cruiser, the most cost-effective performance can be achieved by moving frequently accessed data to high performance SSDs and less frequently accessed data to cost effective Nearline disks. Server settings do not need to be changed after redistribution.
Connectivity
OSes such as UNIX, Linux, Windows, and VMware are supported. The ETERNUS DX Disk storage system can be connected to various UNIX servers and industry standard servers from non-Fujitsu manufacturers, as well as to Fujitsu servers such as PRIMEQUEST and PRIMERGY.
Backup
Data can be replicated at any point with high speed by using the Advanced Copy functions in conjunction with software such as ETERNUS SF Advanced Copy Manager. Data can be replicated between multiple ETERNUS DX Disk storage systems without affecting the performance of the server by using the remote copy functions.
ETERNUS SF Express
ETERNUS SF Express is storage system introduction and operation support software for users who have so far put off introducing a storage system because of its perceived difficulty and the cost of introduction and operation. ETERNUS SF Express is an easy-to-use software addition to the ETERNUS DX Disk storage system that facilitates management of the system and leverages ETERNUS DX functionality such as snapshots, cloning, and replication.
RoHS compliance
The ETERNUS DX Disk storage system complies with the RoHS directive, as mandated by the European Parliament and Council. RoHS restricts the use of six specific substances in electrical and electronic equipment: lead, hexavalent chromium, mercury, cadmium, polybrominated biphenyl (PBB), and polybrominated diphenyl ether (PBDE). In addition, lead-free soldering is used for all printed-wiring boards.
Chapter 2 Specifications
This chapter explains the specifications and the operating environment for the ETERNUS DX Disk storage systems.
2.1
2.1.1
Table 2.1 ETERNUS DX80 S2 specifications (excerpt)

Drive capacity (speed)
- 2.5" SAS disks (non-self-encrypting): 300GB, 450GB, 600GB, 900GB (10,000rpm); 300GB (15,000rpm)
- 2.5" SAS disks (self-encrypting): 300GB, 450GB, 600GB, 900GB (10,000rpm)
- 2.5" Nearline SAS disks: 1TB (7,200rpm)
- 2.5" SSDs: 100GB, 200GB, 400GB
- 3.5" SAS disks: 300GB, 450GB, 600GB (15,000rpm)
- 3.5" Nearline SAS disks: 1TB, 2TB, 3TB (7,200rpm)
- 3.5" SSDs: 100GB, 200GB, 400GB

Drive interfaces: Serial Attached SCSI (6Gbit/s)
Interfaces for remote monitoring and operation management: Ethernet (1000Base-T/100Base-TX/10Base-T) (*4)
Power control interface: RS232C (*5)

*1: Physical capacity is calculated based on the assumption that 1TB = 1,000GB and 1GB = 1,000MB.
*2: Logical capacity is calculated based on the assumption that 1TB = 1,024GB and 1GB = 1,024MB, and that drives are formatted in a RAID5 configuration. The available capacity depends on the RAID configuration.
*3: 3.5" type and 2.5" type drive enclosures can be installed together.
*4: Two ports for each controller.
*5: One port for each controller. Power synchronization is performed via a power synchronized unit.
2.1.2
Table 2.2 ETERNUS DX90 S2 specifications (excerpt)

Drive capacity (speed)
- 2.5" SAS disks (non-self-encrypting): 300GB, 450GB, 600GB, 900GB (10,000rpm); 300GB (15,000rpm)
- 2.5" SAS disks (self-encrypting): 300GB, 450GB, 600GB, 900GB (10,000rpm)
- 2.5" Nearline SAS disks: 1TB (7,200rpm)
- 2.5" SSDs: 100GB, 200GB, 400GB
- 3.5" SAS disks: 300GB, 450GB, 600GB (15,000rpm)
- 3.5" Nearline SAS disks: 1TB, 2TB, 3TB (7,200rpm)
- 3.5" SSDs: 100GB, 200GB, 400GB

Drive interfaces: Serial Attached SCSI (6Gbit/s)
Interfaces for remote monitoring and operation management: Ethernet (1000Base-T/100Base-TX/10Base-T) (*4)
Power control interface: RS232C (*5)

*1: Physical capacity is calculated based on the assumption that 1TB = 1,000GB and 1GB = 1,000MB.
*2: Logical capacity is calculated based on the assumption that 1TB = 1,024GB and 1GB = 1,024MB, and that drives are formatted in a RAID5 configuration. The available capacity depends on the RAID configuration.
*3: 3.5" type and 2.5" type drive enclosures can be installed together.
*4: Two ports for each controller.
*5: One port for each controller. Power synchronization is performed via a power synchronized unit.
2.2 Function Specifications
This section contains the specifications of the functions for the ETERNUS DX Disk storage system.

Table 2.3 ETERNUS DX80 S2/DX90 S2 function specifications (excerpt)
- Supported RAID levels: 0 (*1), 1, 1+0, 5, 5+0, 6
- RAID groups: number of RAID groups (max.) (*2); number of volumes per RAID group
- Volumes: number of volumes (max.); volume capacity (max.); number of connectable hosts (max.) (*3), per storage system and per port
- Thin Provisioning (*4): number of pools (max.); pool capacity (max.); total capacity of Thin Provisioning Volumes
- Flexible Tier (automatic storage layering) (*4): number of pools (max.); pool capacity (max.); total capacity of Flexible Tier Volumes
- Advanced Copy (*4):
  - Local copy types: EC, OPC, QuickOPC, SnapOPC, SnapOPC+ (*5)
  - Number of copy generations (SnapOPC+): 512
  - Remote copy type: REC (*5)
  - Remote copy interfaces: FC (8Gbit/s, 4Gbit/s, 2Gbit/s), iSCSI (10Gbit/s), iSCSI (1Gbit/s)
  - Number of connectable storage systems (max.): 16 (*6)
  - Other items: number of copy sessions (max.); number of copy sessions per volume (max.); number of copy sessions for a single area (max.) (*7); copy capacity (max.) (*8); SDP capacity (max.); number of REC buffers (max.) (*10); size per REC buffer (max.) (*10); REC buffer size per storage system (max.) (*10); number of REC disk buffer RAID groups per REC buffer (*10); supported RAID levels for REC disk buffers (*10)

*1: Use of RAID0 is not recommended because it is not redundant. For RAID0 configurations, data may be lost due to the failure of a single drive.
*2: The maximum number of RAID groups that can be registered (for RAID1).
*3: The maximum number of host information entries (HBAs) that can be registered. A WWN is registered as the host information when the HBA of the connected server is FC/FCoE. A SAS address is registered for a SAS HBA, and the iSCSI name and IP address are registered as a set for an iSCSI HBA. When there are two host interface ports for each ETERNUS DX Disk storage system, the maximum number of connectable hosts is 512. Since the maximum number of connectable hosts for each port is 256, the maximum number of connectable hosts for each ETERNUS DX Disk storage system is 256 × the number of ports when the number of host interface ports is 4 or less.
*4: To use this function, an additional license is required (optional function). For the RAID configurations that can be registered for the Thin Provisioning function or the Flexible Tier function, refer to "Table 4.3 RAID configurations that can be registered in a Thin Provisioning Pool or a Flexible Tier Pool" (page 36).
*5: For details on the types of Advanced Copy, refer to "6.2 Backup (Advanced Copy)" (page 76).
*6: When the Consistency mode is used, the maximum number of REC buffers is the same as the maximum number of connectable storage systems.
*7: For details on the maximum number of copy sessions for a single area, refer to the multiple copy section in "6.2.3 Available Advanced Copy Combinations" (page 83).
*8: This value is the total capacity of data that can be copied simultaneously. The copy capacity differs depending on the settings and the copy conditions. The following formula can be used to calculate the copy capacity: executable copy capacity [GB] = (S [MB] × 1024 ÷ 8 [KB] − N) × M ÷ 2 (round down the result), where S is the copy table size, N is the number of sessions, and M is the bitmap ratio. For details about the setting, refer to "ETERNUS Web GUI User's Guide".
*9: This value was calculated when the maximum copy table size, the maximum bitmap ratio, and the maximum number of sessions were set for the ETERNUS DX Disk storage system, and Restore OPC was not used.
*10: For details on REC buffers and REC disk buffers, refer to the Consistency mode section in "6.2.2 Remote Copy" (page 80).
*11: Four (2+2) or eight (4+4) drives are required as configuration drives.
2.3 Supported OSes
This section explains the operating environment that is required for the ETERNUS DX Disk storage system operation. Servers and OSes that are supported by the ETERNUS DX Disk storage system are shown below. For the possible combinations of servers, Host Bus Adapters (HBAs), and driver software that can be used, refer to "Server Support Matrix" by accessing the URL that is described in "README" on the Documentation CD provided with the ETERNUS DX Disk storage system.
FC interface
Table 2.4 Supported servers and OSes (FC interface)
Fujitsu
- Mission critical IA servers PRIMEQUEST 400/500/500A series: Windows Server 2003, Windows Server 2008, Red Hat Enterprise Linux AS (v.4), Red Hat Enterprise Linux 5
- Mission critical IA servers PRIMEQUEST 1000 series: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5
- UNIX servers SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11
- UNIX servers PRIMEPOWER / Sun Fire: Oracle Solaris 8, Oracle Solaris 9, Oracle Solaris 10
- Industry standard servers PRIMERGY: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux AS/ES (v.4), Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, XenServer 5.6, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11

Oracle
- Sun Fire: Oracle Solaris 8, Oracle Solaris 9, Oracle Solaris 10, Oracle Solaris 11
- SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11

IBM
- IBM RS/6000, IBM P series, IBM System p, IBM Power Systems: AIX 6.1, AIX 7.1

HP
- rp Series, rx Series: HP-UX 11iV1, HP-UX 11iV2, HP-UX 11iV3

Egenera
- Egenera Bladeframe: Pan Manager 5.2

Others
- Other industry standard servers: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux AS/ES (v.4), Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 9, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, Oracle Solaris 10, Oracle Linux 5, Oracle Linux 6, Oracle VM Server 3, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5, XenServer 5.6, XenServer 6, FalconStor NSS
iSCSI interface
Table 2.5 Supported servers and OSes (iSCSI interface)
Fujitsu
- Mission critical IA servers PRIMEQUEST 1000 series: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5
- UNIX servers SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11
- Industry standard servers PRIMERGY: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5

Oracle
- Sun Fire: Oracle Solaris 10, Oracle Solaris 11
- SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11

HP
- rp Series, rx Series: HP-UX 11iV3

Others
- Other industry standard servers: Oracle Solaris 10, Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5
FCoE interface
Table 2.6 Supported servers and OSes (FCoE interface)
Fujitsu
- UNIX servers SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11
- Industry standard servers PRIMERGY: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5

Oracle
- SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11

Others
- Other industry standard servers: Oracle Solaris 10, Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5
SAS interface
Table 2.7 Supported servers and OSes (SAS interface)
Fujitsu
- UNIX servers SPARC Enterprise: Oracle Solaris 10, Oracle Solaris 11
- Industry standard servers PRIMERGY: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux AS/ES (v.4), Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5

Oracle
- SPARC Enterprise: Oracle Solaris 10

Others
- Other industry standard servers: Oracle Solaris 10, Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux AS/ES (v.4), Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, VMware vSphere 4, VMware vSphere 4.1, VMware vSphere 5
Chapter 3 Connection Configurations

This chapter explains the possible connections for ETERNUS DX Disk storage system operation.

Figure 3.1 Connection configuration
CA (host interface): used to connect to the server HBA or to a switch.
RA (remote interface): used for remote copy.
LAN: used to connect a terminal for monitoring operations and device settings, and to connect to the remote support center when the remote support function is used.
PWC port: used to power the ETERNUS DX Disk storage system on and off in synchronization with servers.
3.1 Host Connections (SAN)
FC (Fibre Channel)
FC enables high speed data transfer over long distances by using optical fibers and coaxial cables. FC is used for database servers where enhanced scalability and high performance are required.
iSCSI
iSCSI is a communication protocol that transfers SCSI commands by encapsulating them in IP packets over Ethernet. Since iSCSI can be installed at a lower cost and the network configuration is easier to change than FC, iSCSI is commonly used by divisions of large companies and by small and medium-sized companies where scalability and cost-effectiveness are valued over performance.
FCoE
Since Fibre Channel over Ethernet (FCoE) encapsulates FC frames and transfers them over Ethernet, a LAN environment and an FC-SAN environment can be integrated. When there are networks for multiple I/O interfaces (e.g. in a data center), the networks can be integrated and managed.
SAS
SAS (Serial Attached SCSI) is a serial transfer host interface that is as reliable as the normal (parallel) SCSI interface. SAS is commonly used for small-sized systems where performance and cost-effectiveness are valued over scalability.
3.2 Remote Connections (SAN/WAN)
FC (Fibre Channel)
Remote copy over FC takes advantage of the high speed and reliability of FC. By compressing the data to be transferred using a network device, data can be transferred at high speed.
iSCSI
Remote copy can be performed without FCIP converters when an IP line is used.
3.3 LAN
For ETERNUS DX Disk storage systems, operations such as RAID configuration, operation management, and system maintenance are performed via the LAN. The functions of the management/monitoring server on the LAN, which include SNMP (device monitoring), SMTP (sending e-mails), NTP (time correction), syslog (sending logs), and RADIUS (user authentication), can also be used. Any errors that occur in the ETERNUS DX Disk storage system are reported to the remote support center when remote support is used.
3.4 Power Synchronization
Powering the ETERNUS DX Disk storage system on and off can be automatically controlled with a server.
Chapter 4 System Configuration

This chapter explains points to note before configuring a system using the ETERNUS DX Disk storage system.
4.1 RAID Levels
This section explains RAID group configuration and the supported RAID levels and usage (RAID level selection criteria).
CAUTION
Remember that a RAID0 configuration is not redundant. This means that if a RAID0 drive fails, the data will not be recoverable. Therefore, using a RAID1, RAID1+0, RAID5, RAID5+0, or RAID6 configuration is recommended.
RAID0 (striping)
Data is split into blocks and stored across multiple drives.

Figure 4.1 RAID0 concept
RAID1 (mirroring)
RAID1 stores the same data on two drives at the same time. If one drive fails, the other drive continues operation.

Figure 4.2 RAID1 concept
Figure 4.3 RAID1+0 concept (data blocks are striped across mirrored pairs of drives)

Figure 4.4 RAID5 concept (data and parity are striped across the drives: parity P(A, B, C, D) is generated for data blocks A to D, P(E, F, G, H) for E to H, P(I, J, K, L) for I to L, and P(M, N, O, P) for M to P)

Figure 4.5 RAID5+0 concept (data is striped (RAID0) across two RAID5 groups, each with its own distributed parity)

Figure 4.6 RAID6 concept (two independent parity blocks, P1 and P2, are generated for each stripe and distributed across the drives)
Performance may differ according to the RAID level, the number of drives, and the processing method of the host.
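As a rough orientation, the user capacity that each RAID level yields can be estimated from the number of drives and the capacity per drive using standard RAID arithmetic. The Python sketch below is illustrative only; the authoritative figures and formulas for this storage system are those in Table 4.1 and Table 4.7, which are not reproduced in this excerpt.

```python
def raid_user_capacity(level: str, drives: int, drive_capacity_gb: float) -> float:
    """Approximate user capacity of a RAID group (standard RAID arithmetic,
    not an official ETERNUS formula)."""
    if level == "RAID0":                    # striping only, no redundancy
        return drives * drive_capacity_gb
    if level == "RAID1":                    # mirrored pair
        return drive_capacity_gb
    if level == "RAID1+0":                  # half the drives hold mirror copies
        return (drives // 2) * drive_capacity_gb
    if level == "RAID5":                    # one drive's worth of parity
        return (drives - 1) * drive_capacity_gb
    if level == "RAID5+0":                  # two RAID5 sub-groups, one parity drive each
        return (drives - 2) * drive_capacity_gb
    if level == "RAID6":                    # two drives' worth of parity
        return (drives - 2) * drive_capacity_gb
    raise ValueError(f"unknown RAID level: {level}")

# Example: a 5-drive RAID5 group of 600GB disks keeps 4 drives' worth of data.
print(raid_user_capacity("RAID5", 5, 600))   # -> 2400
```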
4.2 RAID Groups
This section explains RAID groups. A RAID group is a set of drives; it is the unit in which a RAID level is configured. Multiple RAID groups with the same RAID level, or with different RAID levels, can be set together in the ETERNUS DX Disk storage system. After a RAID group is created, its RAID level can be changed and drives can be added. Drives of the same size (2.5" or 3.5") and of the same type (SAS, Nearline SAS, SSD, or SED) must be used to configure a RAID group.

Figure 4.7 Example of a RAID group
Table 4.2 shows the recommended number of drives for a RAID group.

Table 4.2 Recommended number of drives per RAID group

- RAID1: 2 drives; recommended configuration 2 (1D+1M)
- RAID1+0: 4 to 32 drives; recommended configurations 4 (2D+2M), 6 (3D+3M), 8 (4D+4M), 10 (5D+5M)
- RAID5: 3 to 16 drives; recommended configurations 3 (2D+1P), 4 (3D+1P), 5 (4D+1P), 6 (5D+1P)
- RAID5+0: 6 to 32 drives; recommended configurations 3 (2D+1P)×2, 4 (3D+1P)×2, 5 (4D+1P)×2, 6 (5D+1P)×2
- RAID6: 5 to 16 drives; recommended configurations 5 (3D+2P), 6 (4D+2P), 7 (5D+2P)

(D: data, M: mirror, P: parity)
Sequential access performance hardly varies with the number of drives in a RAID group, while random access performance tends to be proportional to the number of drives. Use of higher capacity drives increases the time required for the drive rebuild process to complete, and the higher the number of drives in a RAID5, RAID5+0, or RAID6 configuration, the longer data restoration and rebuilding from parity take.

To use the Thin Provisioning function or the Flexible Tier function, the drive area of the virtual volume is managed using a pool. The following table shows the RAID configurations that can be registered in a Thin Provisioning Pool or a Flexible Tier Pool.

Table 4.3 RAID configurations that can be registered in a Thin Provisioning Pool or a Flexible Tier Pool

- RAID0: 4 (4D)
- RAID1: 2 (1D+1M)
- RAID1+0: 4 (2D+2M), 8 (4D+4M), 16 (8D+8M), 24 (12D+12M)
- RAID5: 4 (3D+1P), 5 (4D+1P), 8 (7D+1P), 9 (8D+1P), 13 (12D+1P)
- RAID6: 6 (4D+2P), 8 (6D+2P), 10 (8D+2P)
Use of RAID0 is not recommended because it is not redundant. For RAID0 configurations, data may be lost due to the failure of a single drive.
For details about the Thin Provisioning function, refer to "6.1.1 Thin Provisioning" (page 73). For details about the Flexible Tier function, refer to "6.1.2 Flexible Tier (Automatic Storage Layering)" (page 74). An assigned CM is allocated to each RAID group. For details, refer to "5.5.4 Assigned CMs" (page 68).
4.3 Volumes
This section explains volumes. Logical drive areas in RAID groups are called volumes. A volume is the basic RAID unit that can be recognized by the server.

Figure 4.8 Volume concept
A volume may be up to 128TB. However, the maximum volume capacity varies depending on the OS of the server. A volume can be expanded or moved if required, and multiple volumes can be concatenated and treated as a single volume. The types of volumes listed below can be created in the ETERNUS DX Disk storage system.

Table 4.4 Volumes that can be created

- Standard/Open: used for normal purposes such as file systems and databases; the server recognizes it as a single logical unit. Maximum capacity: 128TB (*1)
- Snap Data Volume (SDV): the area of this volume is used as the copy destination for SnapOPC/SnapOPC+; there is an SDV for each copy destination. Maximum capacity: approximately 0.1% of the SDV virtual capacity
- Snap Data Pool Volume (SDPV): used to configure the Snap Data Pool (SDP) area; the SDP capacity equals the total capacity of the SDPVs. A volume is supplied from the SDP when the amount of updates exceeds the capacity of the SDV. Maximum capacity: 2TB
- Thin Provisioning Volume (TPV): a virtual volume created in a Thin Provisioning Pool area. Maximum capacity: 128TB (*2)
- Flexible Tier Volume (FTV): a target volume for layering; data is automatically redistributed in small block units according to the access frequency. An FTV belongs to a Flexible Tier Pool. Maximum capacity: 128TB (*2)
- Wide Striping Volume (WSV): created by concatenating distributed areas in from 2 to 64 RAID groups; processing speed is fast because data access is distributed.
- ODX Buffer volume: a dedicated volume that is required to use the Offloaded Data Transfer (ODX) function of Windows Server 2012. When data is updated while a copy is being processed, this area is used to save the source data. The volume type is Standard/Open, TPV, or FTV. Maximum capacity: 1TB

*1: When multiple volumes are concatenated using the LUN Concatenation function, the maximum capacity is also 128TB.
*2: The maximum total capacity of volumes and the maximum pool capacity in the ETERNUS DX Disk storage system are also 128TB.
After a volume is created, formatting automatically starts. A server can access the volume while it is being formatted. Wait for the format to complete if high performance access is required for the volume.
Volumes have different stripe sizes that depend on the RAID level and the stripe depth parameter. The available user capacity can be fully utilized if an exact multiple of the stripe size is set for the volume size. If an exact multiple of the stripe size is not set for the volume size, unusable areas may remain. Refer to "ETERNUS Web GUI User's Guide" for details about the stripe size.
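As a hedged illustration of the note above: if the stripe size is the stripe depth multiplied by the number of data drives, a volume size that is an exact multiple of the stripe size leaves no unusable remainder. The numbers in this sketch are hypothetical; refer to "ETERNUS Web GUI User's Guide" for the actual stripe sizes.

```python
def usable_and_wasted(volume_mb: int, stripe_mb: int) -> tuple[int, int]:
    """Split a requested volume size into whole stripes and the leftover
    that cannot be used (illustrative arithmetic only)."""
    usable = (volume_mb // stripe_mb) * stripe_mb
    return usable, volume_mb - usable

# Hypothetical example: 1 MiB stripe depth x 4 data drives = 4 MiB stripe size.
stripe_mb = 4
for requested in (1024, 1030):            # 1024 is a multiple of 4, 1030 is not
    usable, wasted = usable_and_wasted(requested, stripe_mb)
    print(requested, usable, wasted)       # -> 1024 1024 0 / 1030 1028 2
```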
4.4 Drives
The ETERNUS DX Disk storage system supports the latest drives with the high-speed Serial Attached SCSI (6Gbit/s) interface. SAS disks, Nearline SAS disks, and SSDs can be installed, and some drive types have a data encryption function. 2.5" and 3.5" drive sizes are available. Since 2.5" drives are lighter and require less power than 3.5" drives, the total weight and power consumption of a 2.5" drive configuration is lower than that of the same number of 3.5" drives. When data I/O is compared per enclosure (24 2.5" drives or 12 3.5" drives per enclosure), a 2.5" drive configuration provides higher IOPS (Input Output Per Second) performance per enclosure, because more 2.5" drives can be installed in an enclosure.
Nearline SAS disks

Nearline SAS disks are used to store data that does not need the access performance of SAS disks. They are far more cost effective than SAS disks. (It is recommended that SAS disks be used for data that is constantly accessed or when high performance/reliability is required.) If the ambient temperature exceeds the operating environment conditions, Nearline SAS disk performance may be reduced. Nearline SAS disks can be used as Advanced Copy destinations and for the storage of archived data. When Nearline SAS disks are used as an Advanced Copy destination, delayed access responses and slower copy speeds may be noticed, depending on the amount of I/O and the number of copy sessions.
SSDs
SSDs are reliable drives with high performance. SSDs are used to store high performance databases and other frequently accessed data. SSDs use flash memory as their storage media and provide better random access performance than SAS and Nearline SAS disks. Containing no motors or other moving parts, SSDs are highly resistant to impact and have low power consumption requirements. Since SSDs use Single Level Cell (SLC) type flash memory and have a high level wear leveling function, the number of rewrites does not reach its limit within the product warranty period.

Table 4.5 Drive characteristics

- SAS disks: reliability Good; performance Good; price per bit Reasonable
- Nearline SAS disks: reliability Reasonable; performance Reasonable; price per bit Low
- SSDs: reliability Very good; performance Very good; price per bit High
Some functions cannot be used with some types of drives. Eco-mode cannot be set for SSDs. Do not use different types of drives in a RAID group. Use the same type of drives when adding capacity to a RAID group (RAID Migration, Logical Device Expansion). For details on each function, refer to "Chapter 5 Basic Functions" (page 46).
Encryption-compliant
Self Encrypting Drives (SEDs) are offered for the 2.5" SAS disks.
When using SEDs, the firmware version of the ETERNUS DX Disk storage system must be V10L20 or later. If the firmware version is earlier than V10L20, SED access performance may be reduced. The current firmware version can be checked via ETERNUS Web GUI or ETERNUS CLI. When upgrading firmware is required, contact your sales representative.
4.4.1 User Capacity of Drives
Table 4.6 User capacity per drive

Product name (*1): 100GB SSD, 200GB SSD, 400GB SSD, 300GB SAS disk, 450GB SAS disk, 600GB SAS disk, 900GB SAS disk, 1TB Nearline SAS disk, 2TB Nearline SAS disk, 3TB Nearline SAS disk
*1: The capacity listed in the product names of the drives is based on the assumption that 1MB = 1,000^2 bytes, while the user capacity per drive is based on the assumption that 1MB = 1,024^2 bytes. Furthermore, OS file management overhead will reduce the actual usable capacity. The user capacity does not change between drive sizes (2.5"/3.5") or depending on whether the drive is encryption-compliant (SED).
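For example, the decimal capacity used in a product name converts to a smaller binary figure. The sketch below shows the arithmetic for nominal 600GB and 900GB drives; the exact user capacities for this system are those listed in Table 4.6, not the output of this calculation.

```python
def decimal_gb_to_binary_gb(nominal_gb: float) -> float:
    """Convert a drive's nominal (decimal) capacity to binary GB
    (1 nominal GB = 1,000**3 bytes; 1 binary GB = 1,024**3 bytes)."""
    return nominal_gb * 1000**3 / 1024**3

print(round(decimal_gb_to_binary_gb(600), 2))   # -> 558.79
print(round(decimal_gb_to_binary_gb(900), 2))   # -> 838.19
```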
4.4.2 User Capacity for Each RAID Level
4.4.3 Drive Installation
There are no restrictions on the installation location of drives if the same type of drives is used to create a RAID group. To improve reliability, however, the installation location of the drives that configure a RAID group should be considered. When a RAID level that performs mirroring (RAID1, RAID1+0) is created, installing the drives of each mirrored pair in different enclosures improves reliability. RAID1+0 is used in the following examples to explain the drive combinations for RAID levels that configure mirrored pairs; a pairing sketch follows the examples. The drive number is determined by the DE-ID of the drive enclosure and the slot number in which the drive is installed. Starting from the smallest drive number in the configuration, half of the drives are allocated to one group and the remaining drives to the other group; each drive in one group is paired with the corresponding drive in the other group for mirroring.
Example 1: All drives are installed in a single drive enclosure

Figure 4.9 Drive combination 1
Example 2: Paired drives are installed in two different drive enclosures

Figure 4.10 Drive combination 2
Example 3: Paired drives are installed in three different drive enclosures

Figure 4.11 Drive combination 3
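The pairing rule described before the examples can be sketched as follows (sort by drive number, split the sorted list in half, and pair the i-th drive of each half). The drive numbers below are hypothetical; the actual assignment is performed by the storage system itself.

```python
def mirror_pairs(drive_numbers: list[int]) -> list[tuple[int, int]]:
    """Pair drives for RAID1+0 mirroring: the lower half of the sorted
    drive numbers is mirrored against the upper half, position by position."""
    drives = sorted(drive_numbers)
    half = len(drives) // 2
    return list(zip(drives[:half], drives[half:]))

# Hypothetical 8-drive RAID1+0 group spread over two enclosures (DE#00, DE#01).
print(mirror_pairs([0x00, 0x01, 0x02, 0x03, 0x10, 0x11, 0x12, 0x13]))
# -> [(0, 16), (1, 17), (2, 18), (3, 19)]: each DE#00 drive mirrors a DE#01 drive
```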
4.5 Hot Spares
Hot spares are used as spare drives for when drives in a RAID group fail or are in error status.

Figure 4.12 Hot spares
4.5.1
Assign "Dedicated Hot Spares" to RAID groups that contain important data, in order to preferentially improve their access to hot spares.
4.5.2
Register hot spares to ensure steady operation of the ETERNUS DX Disk storage system. If a free hot spare is available and one of the RAID group drives has a problem, the data of that drive is automatically replicated to the hot spare. If a mixture of SAS disks, Nearline SAS disks, SSDs, and SEDs is installed in the ETERNUS DX Disk storage system, separate hot spares are required for each type of drive. There are two types of SAS disks: SAS disks with a speed of 10,000rpm and SAS disks with a speed of 15,000rpm. If a drive error occurs and a hot spare with a different speed is incorporated into a RAID group, the performance of all the drives in the RAID group is determined by the drive with the slowest speed. When using SAS disks with different speeds, prepare hot spares that correspond to the different speed drives if required. The capacity of each hot spare must be equal to the largest capacity of the same-type drives.
When multiple Global Hot Spares are installed, a selection order (Table 4.9 Hot spare selection) determines which hot spare replaces a failed drive. (*1)

*1: When there are multiple hot spares with a larger capacity than the failed drive, the hot spare with the smallest capacity among them is used first.
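The following Python sketch shows selection logic that is consistent with the criteria described in this section and in footnote *1 (same drive type, capacity at least that of the failed drive, smallest sufficient capacity first). It is illustrative only; the full selection order of Table 4.9 includes further criteria not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Drive:
    drive_type: str      # "SAS", "Nearline SAS", "SSD", or "SED"
    capacity_gb: int

def pick_global_hot_spare(failed: Drive, spares: list[Drive]) -> Drive | None:
    """Pick a Global Hot Spare for a failed drive: same type, capacity at least
    that of the failed drive, smallest adequate capacity first (illustrative)."""
    candidates = [s for s in spares
                  if s.drive_type == failed.drive_type
                  and s.capacity_gb >= failed.capacity_gb]
    return min(candidates, key=lambda s: s.capacity_gb, default=None)

spares = [Drive("SAS", 900), Drive("SAS", 600), Drive("Nearline SAS", 2000)]
print(pick_global_hot_spare(Drive("SAS", 600), spares))   # -> the 600GB SAS spare
```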
Chapter 5 Basic Functions

This chapter explains the basic functions of ETERNUS DX Disk storage systems.
5.1 Data Protection
The ETERNUS DX Disk storage system has functions to securely protect user data when an error occurs.
5.1.1 Data Block Guard
Figure 5.1 Data block guard function

An eight-byte check code is appended to every 512 bytes of user data. The check code is appended by the controller when data is written, checked at points such as the cache memory and the drives, and checked and removed when the data is read back.
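The check code format used by the ETERNUS firmware is not described in this overview, so the sketch below only illustrates the append/verify/strip flow with a generic eight-byte digest per 512-byte block.

```python
import hashlib

BLOCK, CODE = 512, 8

def protect(data: bytes) -> bytes:
    """Append an 8-byte check code to every 512-byte block (illustrative;
    not the actual ETERNUS check code algorithm)."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += block + hashlib.blake2b(block, digest_size=CODE).digest()
    return bytes(out)

def verify_and_strip(protected: bytes) -> bytes:
    """Verify each block's check code and return the user data only."""
    out = bytearray()
    step = BLOCK + CODE
    for i in range(0, len(protected), step):
        block, code = protected[i:i + BLOCK], protected[i + BLOCK:i + step]
        if hashlib.blake2b(block, digest_size=CODE).digest() != code:
            raise IOError("check code mismatch: data corrupted on the way")
        out += block
    return bytes(out)

data = bytes(1024)                       # two 512-byte blocks of user data
assert verify_and_strip(protect(data)) == data
```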
5.1.2 Disk Patrol
The Disk Patrol function regularly diagnoses and monitors the operational status of all disks that are installed in the ETERNUS DX Disk storage system. Disks are checked (read check) regularly as a background process: a read check is performed sequentially over a portion of the data on every disk. If an error is detected, the data is restored using the other disks in the RAID group and written back to another block of the disk on which the error occurred.

Figure 5.2 Disk check
Disks that are stopped by Eco-mode are checked when the disks start running again.
5.1.3 Redundant Copy
When the Disk Patrol function determines that preventive maintenance is required for a drive, the Redundant Copy function re-creates the data of the maintenance target drive from the remaining drives and writes it to a hot spare; the maintenance target drive is then disconnected and replaced by the hot spare. The Redundant Copy function therefore enables data to be restored while maintaining data redundancy.

Figure 5.3 Redundant Copy function
[Figure: when a sign of failure is detected in a drive of a RAID5 (redundant) group, data is created from the drives other than the maintenance target drive and written to the hot spare; the maintenance target drive is then disconnected and replaced by the hot spare, keeping the RAID5 group redundant.]
If a bad sector is detected when a drive is checked, a replacement track is automatically allocated. This drive is not recognized as having a drive failure during this process. However, the drive will be disconnected by the Redundant Copy function if the spare sector is insufficient and the problem cannot be solved by allocating a replacement track.
5.1.4
Rebuild/Copyback
When a drive fails and RAID group redundancy is broken, Rebuild/Copyback restores the drive status back to normal status as a background process. If a free hot spare is available when one of the RAID group drives has a problem, data of this drive is automatically replicated in the hot spare. This ensures data redundancy. Figure 5.4 Rebuild/Copyback function
[Figure: Rebuild — data is created from the drives other than the failed drive and written to the hot spare; the failed drive is then replaced with a new drive. Copyback — after the replacement has been completed, the data is copied from the hot spare to the new drive, restoring the RAID5 (redundant) configuration.]
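The manual does not spell out the reconstruction arithmetic, but for a single-parity RAID5 group the failed drive's block can be recovered as the XOR of the corresponding blocks on the surviving drives. A minimal sketch of that calculation, with illustrative data only:

# Minimal RAID5 reconstruction sketch: the failed drive's block is the XOR of the
# corresponding blocks on all surviving drives (data + parity).
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# Three data blocks and their parity, as on a RAID5(3+1) stripe.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0a\x0b"
parity = xor_blocks([d0, d1, d2])

# Drive holding d1 fails: rebuild it onto the hot spare from the survivors.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1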
5.2
Security
The ETERNUS DX Disk storage system provides various enhanced security functions.
5.2.1
Account Management
Proper user account management is very important when configuring a system where security is paramount. The ETERNUS DX Disk storage system allocates a role and access authority when a user account is created, and sets which functions can be used according to the user's privileges. Because the storage administrator's authorized functions are classified by usage and only the minimum required privileges are granted, security is improved and operational mistakes and management effort can be reduced. Figure 5.5 Account management
[Figure: example roles and authorized functions — Monitor: device status; Admin: device settings, user account settings, security settings, maintenance information; StorageAdmin: device status, RAID group settings, volume settings, host settings; AccountAdmin: user account settings, authentication settings, role settings; SecurityAdmin: device status, security settings, maintenance information; Maintainer: device settings, maintenance information, maintenance. By setting which functions can be used by each user, unnecessary access is reduced.]
5.2.2
User Authentication
Internal Authentication and External Authentication are available as logon authentication methods. RADIUS authentication can be used for External Authentication.
Internal Authentication
Internal Authentication is performed using the authentication function of the ETERNUS DX Disk storage system. The following authentication functions are available when the ETERNUS DX Disk storage system is connected via a LAN using operation management software.
SSL authentication
ETERNUS Web GUI supports https connections using SSL/TLS. Since data on the network is encrypted, security can be ensured. The server certificates that are required for the connection are either installed as certified certificates or automatically created in the ETERNUS DX Disk storage system.
SSH authentication
Since ETERNUS CLI supports SSH connections, data on the network can be encrypted before being sent and received. The server key for SSH varies depending on the ETERNUS DX Disk storage system. When the server certificate is updated, the server key is updated as well. Password authentication and client public key authentication are available as authentication methods for SSH connections. The following table shows the supported client public keys. Table 5.1 Type of client public key
Type of public key / Complexity (bits):
- OpenSSH style RSA for SSH v1: 1024, 2048, 4096
- IETF style DSA for SSH v2: 1024, 2048, 4096
- IETF style RSA for SSH v2: 1024, 2048, 4096
The following iSCSI authentication is available for a host connection and remote copy.
iSCSI authentication
The Challenge Handshake Authentication Protocol (CHAP) is supported for iSCSI connections. For CHAP Authentication, unidirectional CHAP or bidirectional CHAP can be selected. When unidirectional CHAP is used, the target authenticates the initiator to prevent fraudulent access. When bidirectional CHAP is used, the target authenticates the initiator to prevent fraudulent access, and the initiator also authenticates the target to prevent impersonation. CHAP is supported by host connections (iSCSI-CA) and REC connections (iSCSI-RA). In addition, Internet Storage Name Service (iSNS) is supported for iSCSI name resolution.
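For reference, CHAP as defined in RFC 1994 computes the response as an MD5 digest over the identifier, the shared secret, and the challenge. The sketch below only illustrates that calculation; the secret and challenge values are invented, and the actual CHAP settings for the ETERNUS DX Disk storage system are made through its management interfaces.

# CHAP response calculation per RFC 1994: MD5(identifier || secret || challenge).
# The secret and challenge values here are illustrative only.
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-chap-secret"          # configured on both initiator and target
challenge = os.urandom(16)              # sent by the authenticator
ident = 1

# The initiator computes the response; the target recomputes it and compares.
assert chap_response(ident, secret, challenge) == chap_response(ident, secret, challenge)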
External Authentication
External Authentication uses the user account information (user name, password, and role name) that is registered on an external authentication server.
RADIUS authentication
Use of the Remote Authentication Dial-In User Service (RADIUS) protocol enables the consolidation of authentication information for remote access. An authentication request is sent to the RADIUS authentication server that is outside the ETERNUS system network. The authentication method can be selected from CHAP and PAP. Two RADIUS authentication servers can be connected to balance user account information and to create a redundant configuration. The RADIUS server authenticates the user and responds with the ETERNUS DX Disk storage system role(s) identified by the Vendor Specific Attribute (VSA).
5.2.3
Host Affinity
The host affinity function prevents data from being damaged by inadvertent storage access. By defining which servers can access a volume, security can be ensured when multiple servers are connected. A server is given access to a volume by associating that server with the volume. The volumes that can be accessed can be set for each host interface port. Figure 5.6 Host affinity
[Figure: each server is permitted to access only its own LUN range on its port — Server A: LUN#0 to LUN#255 mapped to Volume#0 to Volume#255; Server B: Volume#256 to Volume#511; Server C: Volume#512 to Volume#767; Server D: Volume#768 to Volume#1023.]
By using the host affinity function, a host interface port can be shared by multiple servers in a cluster system or in a system in which servers with different OSes exist together.
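Conceptually, host affinity is a per-port mapping from a registered host (identified by its WWN, SAS address, or iSCSI name) to the LUNs it is allowed to see. The following Python sketch is a simplified illustration of that idea; the host identifiers, mapping structure, and volume names are invented.

# Hypothetical sketch of host affinity: each registered host sees only its own
# LUN-to-volume mapping; any other access is rejected.
affinity = {
    "server_a_wwn": {lun: f"Volume#{lun}" for lun in range(256)},          # Volume#0-#255
    "server_b_wwn": {lun: f"Volume#{lun + 256}" for lun in range(256)},    # Volume#256-#511
}

def resolve(host_wwn, lun):
    mapping = affinity.get(host_wwn)
    if mapping is None or lun not in mapping:
        raise PermissionError("access denied by host affinity")
    return mapping[lun]

print(resolve("server_a_wwn", 0))     # Volume#0
print(resolve("server_b_wwn", 0))     # Volume#256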
When using an FC switch, an FCoE switch, or a SAS switch, perform the zoning settings for each switch. For details on the zone settings, refer to "User's Guide -Server Connection-" or the manual that is provided with the switch. When using a LAN switch for iSCSI connections, use a LAN switch that has the VLAN function and allocate a separate segment to each server. For details on the settings, refer to "User's Guide -Server Connection-" or the manual that is provided with the LAN switch. The following table shows the specifications of the host affinity function: Table 5.2 Host affinity function specifications
Functional specification / Maximum setting:
- Number of connectable hosts (max.) (*1): 1024
- Number of LUNs that can be set (max.) (*2), per CA port: 256 or 512 (*3)
- Number of LUNs that can be set (max.) (*2), per host: 256, 512 (*4), or 1024 (*5)
*1: The maximum number of host information entries (HBAs) that can be registered. A WWN is registered as the host information when the HBA of the connected server is FC/FCoE. A SAS address is registered for a SAS HBA, and the iSCSI name and IP address are registered as a set for an iSCSI HBA. When there are two host interface ports for each ETERNUS DX Disk storage system, the maximum number of connectable hosts is 512. Since the maximum number of connectable hosts for each port is 256, the maximum number of connectable hosts for each ETERNUS DX Disk storage system is 256 × the number of ports when the number of host interface ports is 4 or less.
*2: The maximum number of LUNs that can be set varies depending on the connection operation mode of the host response settings. For details on the mode, refer to "5.5.5 Connection Operation Mode" (page 68).
*3: This value is for AIX mode, Linux mode, or HP-UX mode.
*4: This value is for AIX mode or Linux mode.
*5: This value is for HP-UX mode.
5.2.4
Data Encryption
Encrypting data as it is being written to the drive prevents information leakage caused by fraudulent decoding. Even if a drive is removed and stolen by malicious third parties, data cannot be decoded. The encryption function only encrypts the data stored on the drives, so server access results in the transmission of plain text. Therefore, this function prevents data leakage from drives that are physically removed, but does not prevent data leakage from server access. The following two types of data encryption are supported: Self Encrypting Drive Data encryption with the encryption function of a Self Encrypting Drive (SED) that performs self encryption
Volume conversion encryption Data encryption with the encryption function of the firmware for the ETERNUS DX Disk storage system on a volume basis Encryption using SEDs is recommended because SEDs do not affect system performance. Table 5.3 Data encryption function specifications
Functional specification / Self Encrypting Drive (SED) / Volume conversion encryption:
- Type of key: Authentication key (SED) / Encryption key (volume conversion encryption)
- Encryption unit: Drive (SED) / Volume, Pool (RAID group) (volume conversion encryption)
- Encryption method: AES256 (SED) / AES128 (*1) or Fujitsu original (volume conversion encryption)
- FIPS authentication (*2): FIPS 140 (SED) / N/A (volume conversion encryption)
*1: AES (Advanced Encryption Standard) method
*2: Federal Information Processing Standard (FIPS)
The Fujitsu original encryption method uses a Fujitsu original algorithm that has been specifically created for ETERNUS DX Disk storage systems. The following section describes the features of each encryption function.
5.3
5.3.1
RAID Migration
RAID Migration is a function that transfers a volume to a different RAID group while guaranteeing the integrity of the data. By using RAID Migration, RAID levels and volumes can be hot switched. This allows easy redistribution of volumes among RAID groups in response to customer needs. RAID Migration can be carried out while the system is running, and may also be used to switch data to a different RAID level (e.g. changing from RAID5 to RAID1+0). Examples of RAID Migration are shown below. Example when transferring volumes from a RAID5(3+1) 300GB SAS disk configuration to a RAID5(3+1) 450GB SAS disk configuration: Figure 5.8 Example of RAID Migration 1
[Figure: a volume on a RAID5(3+1) group of four 300GB drives is migrated to an unused RAID5(3+1) group of four 450GB drives, adding capacity.]
Example when transferring volumes from a RAID5(3+1) configuration to a RAID1+0(3+3) configuration: Figure 5.9 Example of RAID Migration 2
[Figure: Volume 0 on a RAID5(3+1) group of four 300GB drives is migrated to an unused RAID1+0(3+3) group of six 300GB drives.]
5.3.2
Logical Device Expansion
[Figure 5.10 Example of Logical Device Expansion 1: a RAID group containing Volume 0 and Volume 1 is expanded to a RAID5(4+1) configuration of 300GB drives by adding a drive.]
A RAID5(3+1) configuration converted to a RAID6(4+2) configuration by the addition of two extra drives: Figure 5.11 Example of Logical Device Expansion 2
[Figure: a RAID5(3+1) group of 300GB drives containing Volume 0 and Volume 1 is converted to a RAID6(4+2) configuration by adding two unused 300GB drives.]
5.3.3
LUN Concatenation
LUN Concatenation is a function that is used to add new area to a volume for expanding the volume capacity available to the server. This function enables the reuse of free area in a RAID group and can be used to solve capacity shortages. The maximum capacity of a volume that is expanded by LUN Concatenation is 128TB. The following example shows the concatenation of an unused area of a different RAID group into Volume 2 in order to expand the capacity of Volume 2. Figure 5.12 Example of LUN Concatenation
[Figure: unused area in a second RAID5(3+1) group is concatenated to Volume 2 in the first RAID5(3+1) group to expand its capacity.]
LUN Concatenation cannot be performed when the drive type of the RAID group to which the concatenation source volume belongs does not match the drive type of the RAID group to which the concatenation destination volume belongs. When the concatenation source volume is configured with SAS disks or Nearline SAS disks, volumes that are configured with SAS disks or Nearline SAS disks can be concatenated. When the concatenation source volume is configured with SSDs, volumes that are configured with SSDs can be concatenated. When the concatenation source volume is configured with SEDs, volumes that are configured with SEDs can be concatenated.
5.3.4
Wide Striping
Wide Striping is a function that concatenates multiple RAID groups by striping and uses many drives simultaneously to improve performance. This function is effective when high Random Write performance is required. I/O accesses from the server are distributed to multiple drives by increasing the number of drives that configure a LUN, which improves the processing performance. Figure 5.13 Wide Striping
[Figure: the WSV is divided into units of the same capacity that are allocated to RAID group#0 through RAID group#3; the concatenated area is seen as a single LUN by the server.]
The ETERNUS DX Disk storage system firmware version must be V10L30 or later to use the Wide Striping function. The firmware version can be checked via ETERNUS Web GUI or ETERNUS CLI. When upgrading firmware is required, contact your sales representative.
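As a simplified illustration of the striping idea described above, the following Python sketch maps a Wide Striping Volume address round-robin across four concatenated RAID groups in fixed-size units; the unit size and mapping are assumptions for the example, not the actual internal layout.

# Hypothetical sketch: a Wide Striping Volume address is mapped round-robin
# across the concatenated RAID groups in fixed-size units.
UNIT_BLOCKS = 1024            # illustrative stripe unit, in blocks
RAID_GROUPS = 4               # RAID group#0 .. #3

def wsv_to_physical(lba):
    unit = lba // UNIT_BLOCKS
    raid_group = unit % RAID_GROUPS               # round-robin over RAID groups
    offset = (unit // RAID_GROUPS) * UNIT_BLOCKS + lba % UNIT_BLOCKS
    return raid_group, offset

for lba in (0, 1024, 2048, 3072, 4096):
    print(lba, "->", wsv_to_physical(lba))
# Consecutive units land on different RAID groups, spreading I/O over many drives.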
5.4
5.4.1
Eco-mode
Using Eco-mode allows the rotation of disks that have limited access time to be stopped for specified periods to reduce power consumption. Disk spin-up and spin-down schedules can be set for each RAID group, Thin Provisioning Pool (TPP) (*1), or Flexible Tier Pool (FTRP) (*2). These schedules can also be set to allow backup operations.
*1: A Thin Provisioning Pool is a virtual area that is created when the Thin Provisioning function is used. For details on the Thin Provisioning function, refer to "6.1.1 Thin Provisioning" (page 73).
*2: An FTRP is a layered area that is created when the Flexible Tier function is used. The pool can have up to three layers. For details on the Flexible Tier function, refer to "6.1.2 Flexible Tier (Automatic Storage Layering)" (page 74).
[Figure: example Eco-mode schedule — the disks are online during the working phase (AM 5:00 to 12:00) and the backup phase (PM 12:00 to 24:00), and are stopped from AM 0:00 to 5:00; disk spin-down and spin-up occur at the boundaries of the schedule.]
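A minimal sketch of the scheduling idea, assuming a single daily operating window per RAID group; the times below are illustrative, not a recommended schedule.

# Hypothetical sketch of an Eco-mode schedule check. The schedule keeps the
# disks of a RAID group spinning only during its configured operating hours.
from datetime import time

# Illustrative schedule: disks on from 05:00 to 24:00, stopped from 00:00 to 05:00.
schedule = [(time(5, 0), time(23, 59, 59))]

def disks_should_spin(now: time) -> bool:
    return any(start <= now <= end for start, end in schedule)

print(disks_should_spin(time(3, 0)))    # False -> motors stopped to save power
print(disks_should_spin(time(14, 0)))   # True  -> disks online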
5.4.2
[Figure: ETERNUS SF Storage Cruiser, running on a server, collects power consumption and temperature data for each storage system.]
5.5
Operation Monitoring
This section explains the functions related to operation management and device monitoring for the ETERNUS DX Disk storage system. A failed part can be promptly detected and diagnosed by operation management software. This enables the problem to be appropriately dealt with. Collecting and analyzing detailed performance data improves the performance of the system.
5.5.1
ETERNUS CLI
ETERNUS CLI supports Telnet and SSH connections. The ETERNUS DX Disk storage system can be configured and monitored using commands and command scripts. Most of the functions that can be executed from ETERNUS Web GUI can also be executed from ETERNUS CLI.
ETERNUS SF
"ETERNUS SF" Storage Foundation Software can manage an "ETERNUS series" centered storage environment. Since the complicated storage configuration designing and setting operations can be performed with an easy-to-use GUI, a storage system can be installed easily without needing to have high level skills. ETERNUS SF ensures stable operation by managing the entire storage system.
SMI-S
Storage systems can be managed collectively using the general storage management application that supports Version 1.4 of Storage Management Initiative Specification (SMI-S). SMI-S is a storage management interface standard developed and maintained by the Storage Network Industry Association (SNIA). SMI-S can monitor the device status and change configurations such as RAID groups, volumes, and Advanced Copy (EC/REC/SnapOPC+).
5.5.2
Event Notification
When an error occurs in the ETERNUS DX Disk storage system, this function notifies the administrator of the event information, so the administrator can recognize that an error has occurred without constantly monitoring the screen. Events can be notified by e-mail, SNMP Trap, syslog, remote support, and host sense. The notification methods and levels can be set as required.
E-mail: When an event occurs, an e-mail is sent to the specified e-mail address.
SNMP Trap: Using the SNMP agent function, management information is sent to the SNMP manager (monitoring server). SNMP v1/v2c/v3 is supported.
syslog: By registering the syslog destination server in the ETERNUS DX Disk storage system, various events that are detected by the ETERNUS DX Disk storage system are sent to the syslog server as event logs.
Remote support: Errors that occur in the ETERNUS DX Disk storage system are reported to the remote support center. Additional information (logs and system configuration information) for checking the error is also sent, which shortens the time needed to collect information.
Host sense: The ETERNUS DX Disk storage system returns host senses (sense codes) to notify the server of specific statuses. Detailed information such as error contents can be obtained from the sense code.
Figure 5.16 Event notification
[Figure: events are notified by e-mail to a mail server, by SNMP Trap to an SNMP manager, by syslog to a syslog server, and by host sense to the server (host).]
5.5.3
5.5.4
Assigned CMs
A controller that controls access is assigned to each RAID group and manages the load balance in the ETERNUS DX Disk storage system. The controller that controls a RAID group is called an assigned CM. Figure 5.17 Assigned CM
If auto assignment is selected for an assigned CM when RAID groups are created, the RAID group number is used to determine the assigned CM. When the RAID group number is an even number, "CM#0" is allocated as the assigned CM. For odd numbers, "CM#1" is allocated. When the load is unbalanced between the controllers, change the assigned CM.
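The auto assignment rule described above can be written directly:

# Auto assignment rule described above: even RAID group numbers go to CM#0,
# odd ones to CM#1.
def assigned_cm(raid_group_number: int) -> str:
    return "CM#0" if raid_group_number % 2 == 0 else "CM#1"

print(assigned_cm(2))   # CM#0
print(assigned_cm(5))   # CM#1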
5.5.5
Connection Operation Mode
[Figure: host response settings — a conversion pattern (command responses, response statuses, and monitoring times) is applied per connected server; for example, AIX mode or HP-UX mode can be set individually for Server A, Server B, and Server C.]
If the host response settings are not set correctly, a volume may not be recognized or the desired performance may not be achieved. Make sure to select the appropriate host response settings. The maximum number of volumes (number of LUNs that can be set) that the server can recognize varies depending on the connection operation mode of the host response settings. The settings that allow each server to access a volume can be specified along with the host response settings. For details on the function, refer to "5.2.3 Host Affinity" (page 53).
5.5.6
Device Time Synchronization
If an error occurs in a system that has a different date and time for each device, analyzing the cause of this error may be difficult. Make sure to set the date and time correctly when using Eco-mode. The stop and start process of the disk motors does not operate according to the Eco-mode schedule if the date and time in the ETERNUS DX Disk storage system are not correct. Figure 5.19 Device time synchronization
[Figure: the ETERNUS DX Disk storage system synchronizes its date and time, time zone, and daylight saving time settings with an NTP server.]
5.5.7
5.6
Data Migration
This section explains the function that migrates data from an old storage system to the ETERNUS DX Disk storage system.
5.6.1
Storage Migration
Storage Migration is a function that migrates the volume data from an old storage system to volumes in a new storage system without using a host in cases such as when replacing a storage system. The source storage system and destination ETERNUS DX Disk storage system are physically connected using cables. Data read from the target volume in the Migration Source is written to the Destination volume in the ETERNUS DX Disk storage system. Since Storage Migration is controlled by ETERNUS DX Disk storage system controllers, no additional software is required. Figure 5.21 Storage Migration
[Figure: the migration source storage system and the ETERNUS DX Disk storage system are connected via FC, and the volume data is transferred by Storage Migration.]
The ETERNUS DX Disk storage system firmware version must be V10L16 or later to use the Storage Migration function. The firmware version can be checked via ETERNUS Web GUI or ETERNUS CLI. When upgrading firmware is required, contact your sales representative.
Chapter 6 Optional Functions
This chapter explains the optional functions of the ETERNUS DX Disk storage system. "Thin Provisioning", "Flexible Tier (automatic storage layering)", and "Advanced Copy" are available as optional functions. The relevant license must be purchased to use these optional functions.
6.1
6.1.1
Thin Provisioning
The Thin Provisioning function virtualizes and allocates storage capacity. This reduces the amount of physical storage capacity that is required and makes efficient use of unused capacity. The user can start ETERNUS DX Disk storage system operation with a small disk capacity by allocating large virtual disks. Physical disks can then be added according to the required capacity without affecting the server. In order to avoid physical disk capacity shortages, thresholds are monitored and changes in physical capacity are visualized. This lets the user know when the storage capacity is becoming insufficient so that additional physical disks can be added before operations are stopped. By using the Thin Provisioning balancing function, the physically allocated capacity of a Thin Provisioning Volume (TPV) can be balanced between RAID groups, and I/O access to the TPV can be redistributed across the RAID groups in the pool.
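A simplified sketch of the allocate-on-first-write behavior described above; the chunk size, pool structure, and names are invented for illustration and do not reflect the internal implementation.

# Hypothetical sketch of thin provisioning: physical chunks are taken from a
# shared pool only when a virtual block is first written.
CHUNK = 32            # illustrative chunk size (blocks)

class ThinPool:
    def __init__(self, physical_chunks):
        self.free_chunks = list(range(physical_chunks))
        self.maps = {}                                  # (volume, chunk index) -> physical chunk

    def write(self, volume, lba):
        key = (volume, lba // CHUNK)
        if key not in self.maps:                        # allocate on first write only
            if not self.free_chunks:
                raise RuntimeError("pool capacity exhausted: add physical disks")
            self.maps[key] = self.free_chunks.pop(0)
        return self.maps[key]

pool = ThinPool(physical_chunks=4)
pool.write("TPV#0", 0)
pool.write("TPV#1", 100)
print(len(pool.free_chunks))   # 2 -> capacity is consumed only by written areas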
[Figure: Thin Provisioning — the physical disks are managed as a pool and their unused capacity is shared by the virtual volumes (Vol.1, Vol.2, Vol.3) that belong to the pool; free area on the physical disks is shared among the volumes.]
6.1.2
Flexible Tier (Automatic Storage Layering)
SAS disks, Nearline SAS disks, SSDs, and SEDs can be used for the Flexible Tier function. Note that the firmware version of the ETERNUS DX Disk storage system must be V10L30 or later to use SEDs with the Flexible Tier function. The firmware version can be checked via ETERNUS Web GUI or ETERNUS CLI. When upgrading firmware is required, contact your sales representative.
[Figure: Flexible Tier — access from the server is monitored over time and data is relocated among a high speed tier (Tier 1: SSDs, priority on access performance, for frequently accessed data), a medium speed tier (Tier 2: Online disks, priority on cost performance), and a low speed tier (Tier 3: Nearline disks, priority on long term storage, for saving infrequently accessed data); moving data to an SSD shortens the response time.]
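A rough sketch of the relocation idea, assuming data areas are ranked by access count and placed into tiers from fastest to slowest; the areas, counts, and tier capacities are invented for the example.

# Hypothetical sketch of automatic storage layering: areas are ranked by access
# count and the hottest ones are placed on the SSD tier.
access_counts = {"area0": 900, "area1": 15, "area2": 480, "area3": 3}
tier_capacity = {"Tier1-SSD": 1, "Tier2-Online": 2, "Tier3-Nearline": 1}

ranked = sorted(access_counts, key=access_counts.get, reverse=True)
placement, i = {}, 0
for tier, cap in tier_capacity.items():          # fill tiers from fastest to slowest
    for area in ranked[i:i + cap]:
        placement[area] = tier
    i += cap

print(placement)
# {'area0': 'Tier1-SSD', 'area2': 'Tier2-Online', 'area1': 'Tier2-Online', 'area3': 'Tier3-Nearline'}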
6.2
Advanced Copy
[Figure: compared with conventional volume-to-tape backup, system down time is reduced by using high-speed backup of volumes with the Advanced Copy function.]
There are two types of Advanced Copy: a local copy that is performed within a single ETERNUS DX Disk storage system and a remote copy that is performed between ETERNUS DX Disk storage systems. The methods that are available for the local copy function are "One Point Copy (OPC)", "QuickOPC", "SnapOPC", "SnapOPC+", and "Equivalent Copy (EC)". "Remote Equivalent Copy (REC)" is available for the remote copy function.
The following table shows the types of controlling software that are used for Advanced Copy functions: Table 6.1 Controlling software
Controlling software / Features:
- ETERNUS Web GUI or ETERNUS CLI: Copy functions are available without needing any optional software.
- Volume Shadow copy Service (VSS) (*1): VSS can meet various requirements, such as data backup in a live data environment, due to its ability to link with ISV backup software or ISV business applications that support the Microsoft Windows Server VSS function.
- ETERNUS SF AdvancedCopy Manager (ACM): ACM supports various OSes and ISV applications. All Advanced Copy functions are available. This also allows data backup in a live data environment via links with Oracle, SQL Server, Exchange Server, and Symfoware Server software.
- ETERNUS SF Express: ETERNUS SF Express makes it easier to manage the ETERNUS DX Disk storage system and to back up data.
*1: To use the Advanced Copy functions in a VSS environment, "ETERNUS VSS Hardware Provider" must be downloaded and installed on the server. For details on "ETERNUS VSS Hardware Provider" and how to install it, refer to the following web-site: http://www.fujitsu.com/global/services/computing/storage/eternus/tools/vsshp.html
"ETERNUS SF AdvancedCopy Manager", "ETERNUS SF Express", or a VSS environment is required to use the Advanced Copy functions with other operations. The following table shows the copy functions (copy methods) that are available by registering the license: Table 6.2 List of functions (copy methods)
[Table: for each model (DX80 S2, DX90 S2) and copy license status (no license / license registered), the table lists the number of usable sessions (8 (*1) without a license; 1024 for the DX80 S2 and 2048 for the DX90 S2 with a license) and the copy methods that can be used from each controlling software (ETERNUS Web GUI or ETERNUS CLI, VSS, ETERNUS SF AdvancedCopy Manager, and ETERNUS SF Express), from among SnapOPC, SnapOPC+, QuickOPC, OPC, EC, and REC.]
*1: When an Advanced Copy Feature License is not purchased, up to eight SnapOPC+/QuickOPC sessions can be used. This allows the Advanced Copy functions to be tried out for evaluation before purchase, or for planning their use after purchase.
Copying is performed for each LUN. Copying can also be performed for each logical disk (such as the partition and volume (the name differs depending on the OS)) when using ETERNUS SF AdvancedCopy Manager.
When a volume is copied on a per LUN basis, RAID Migration that expands the migration destination capacity cannot be performed. When RAID Migration is performed for a volume to expand the migration destination capacity, the volume cannot be copied on a per LUN basis. For volumes with a copy session, restrictions apply to volume capacity expansion, such as LUN Concatenation and Thin Provisioning volume capacity expansion. For details, refer to "ETERNUS Web GUI Users Guide".
6.2.1
Local Copy
The Advanced Copy functions offer the following copy methods: "Mirror Suspend", "Background Copy", and "Copy-on-Write". The "Equivalent Copy (EC)" function uses the "Mirror Suspend" method, the "One Point Copy (OPC)" function uses the "Background Copy" method, and the "SnapOPC" function uses the "Copy-on-Write" method. There is also a "QuickOPC" function for the OPC method, which only copies data that has been updated since the previous update. The SnapOPC+ function only copies data that is to be updated and performs generation management of the copy source volume.
QuickOPC
QuickOPC copies all data as an initial copy in the same way as OPC. After the initial copy has completed, only updated data (differential data) is copied. QuickOPC is suitable for the following usages:
- Performing a backup of data that is not updated regularly
- Performing system test data replication
- Restoration from a backup
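A minimal sketch of the differential-copy idea: after the initial full copy, updated blocks are tracked (here with a simple bitmap, an assumption for illustration) and only those blocks are copied on the next QuickOPC.

# Hypothetical sketch of QuickOPC: after an initial full copy, only blocks
# flagged in an update bitmap are copied on subsequent backups.
source = bytearray(b"AAAAAAAA")
backup = bytearray(8)
updated = [False] * len(source)          # differential bitmap

backup[:] = source                       # initial copy: everything

def host_write(lba, value):
    source[lba] = value
    updated[lba] = True                  # remember which blocks changed

def quick_opc():
    for lba, dirty in enumerate(updated):
        if dirty:
            backup[lba] = source[lba]    # copy only the differential data
            updated[lba] = False

host_write(2, ord("X"))
host_write(5, ord("Y"))
quick_opc()
print(backup)    # bytearray(b'AAXAAYAA')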
SnapOPC/SnapOPC+ (*1)
As updates occur in the source data, SnapOPC/SnapOPC+ saves the data prior to the change to the copy destination (Snap Data Volume (SDV)). Prepare a Snap Data Pool (SDP) before performing SnapOPC/SnapOPC+. When an amount of data that exceeds the SDV capacity is saved, volumes are supplied from an SDP. SnapOPC/SnapOPC+ is suitable for the following usages:
- Performing temporary backup for tape backup
- Performing a backup of data that is not updated regularly (generation management is available for SnapOPC+)
*1: The difference between SnapOPC and SnapOPC+ is that SnapOPC+ manages the history of updated data whereas SnapOPC does not. While SnapOPC manages updated data in units of sessions and saves the same data redundantly, SnapOPC+ keeps updated data as history information and can provide multiple generations of backups.
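A minimal copy-on-write sketch of the SnapOPC behavior described above; the block map and SDV representation are invented for illustration.

# Hypothetical sketch of SnapOPC (copy-on-write): before a source block is
# updated, its previous contents are saved to the Snap Data Volume (SDV).
source = {0: b"old0", 1: b"old1"}
sdv = {}                                  # holds only the pre-update data

def host_write(lba, data):
    if lba not in sdv:                    # first update since the snapshot
        sdv[lba] = source[lba]            # save the old data to the SDV
    source[lba] = data

def read_snapshot(lba):
    return sdv.get(lba, source[lba])      # unchanged blocks are read from the source

host_write(0, b"new0")
print(read_snapshot(0), read_snapshot(1))   # b'old0' b'old1'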
EC (Equivalent Copy)
EC makes a mirror copy of the copy source to the copy destination beforehand, and then suspends the copy and treats all data as independent data. When copying is resumed, only the data that has been updated in the copy source is copied to the copy destination. If the copy destination data has been changed, copy the copy source data again. EC is suitable for the following usages:
- Performing a backup
- Performing system test data replication
Prepare an encrypted SDP when an encrypted SDV is used. If the SDP capacity is insufficient, a copy cannot be performed. In order to avoid this situation, an operation that notifies the operation administrator of event information according to the remaining SDP capacity is recommended. For more details on event notification, refer to "5.5.2 Event Notification" (page 66).
6.2.2
Remote Copy
Remote copy is a function that copies data between different storage systems in remote locations, using "Remote Equivalent Copy (REC)". REC is an enhancement of the EC mirror suspend method that performs EC remotely. Mirroring, snapshots, and backup between multiple storage systems can be performed using this function. This function protects data against disaster by duplicating the database and backing up data to a remote location. Older ETERNUS Disk storage system models can also be connected.
[Figure: remote copy between a local site and a destination site — the SAN at each site is connected over a WAN, and a management server is also connected.]
There are two REC data transfer modes: the synchronous transmission mode and the asynchronous transmission mode. These modes can be selected according to the intended use of REC. Table 6.3 REC data transfer mode
- Synchronous transmission mode — I/O response: affected by transmission delay; status in the case of disaster: data is completely backed up until the point when a disaster occurs.
- Asynchronous transmission mode — I/O response: not affected by transmission delay; status in the case of disaster: data is backed up until a few seconds before a disaster occurs.
- When REC is performed over a WAN, a bandwidth of at least 50Mbit/s is required if data is not compressed on the line. When data is compressed, a bandwidth of 50Mbit/s or less is sufficient.
- When REC is performed over a WAN, the round-trip time of data transmission must be 50ms or less for the synchronous transmission mode and 100ms or less for the asynchronous transmission mode. A setup in which the round-trip time is 10ms or less is recommended for the synchronous transmission mode.
- When a concurrent firmware update is performed, copy sessions must be suspended.
- When the REC Consistency mode is used between different storage system models, an equal or larger number of controllers than the copy source storage system is recommended for the copy destination storage system. When the number of controllers in the copy source storage system is eight (*1) and the number of controllers in the copy destination storage system is two or less, the REC Consistency mode cannot be used.
- The ETERNUS DX90 S2, the ETERNUS DX410 S2/DX440 S2, the ETERNUS DX410/DX440, the ETERNUS DX8100 S2/DX8700 S2, and the ETERNUS DX8100/DX8400/DX8700 support REC disk buffers.
- When an older ETERNUS Disk storage system is used as the copy destination, REC cannot be performed between encrypted volumes and unencrypted volumes.
*1: The maximum number of controllers that can be installed in the ETERNUS DX8700/DX8700 S2 and the ETERNUS8000 models 2100 and 2200
6.2.3
Restore OPC
For OPC, QuickOPC, SnapOPC, and SnapOPC+, restoration of the copy source from the copy destination is complete immediately upon request. Figure 6.5 Restore OPC
[Figure: an OPC, QuickOPC, SnapOPC, or SnapOPC+ copy from the copy source to the copy destination, and restoration from the copy destination back to the copy source (Restore OPC).]
EC or REC Reverse
Restoration of the copy source from the copy destination is possible by switching the EC or REC copy source and destination. Figure 6.6 EC or REC Reverse
[Figure: an EC or REC session between the copy source and copy destination is reversed so that data can be restored from the copy destination to the copy source.]
Multiple copy
Multiple copy destinations can be set for a single copy source area to obtain multiple backups. Up to eight OPC, QuickOPC, SnapOPC, EC, or REC sessions can be set for a multiple copy. Figure 6.7 Multiple copy
[Figure: a single copy source on an ETERNUS DX Disk storage system is copied to multiple copy destinations (copy destination 1 through copy destination 8), including destinations on another ETERNUS DX Disk storage system.]
Up to 256 SnapOPC+ copy session generations can be set for a single copy source area when seven or fewer multiple copy sessions are already set. Figure 6.8 Multiple copy (including SnapOPC+)
[Figure: a multiple copy configuration in which the copy destinations for a single copy source include SnapOPC+ generations, spanning two ETERNUS DX Disk storage systems.]