3par - Training Day 1


Hardware Architecture

Divakar - Deployment Support Engineer


3DC
InServ Models and Specifications

Specification: F200 / F400 / T400 / T800
HP 3PAR Gen3 ASIC with Thin Built In™: Yes / Yes / Yes / Yes
Controller Nodes: 2 / 2-4 / 2-4 / 2-8
Built-In Remote Copy Ports: 2 / 2-4 / 2-4 / 2-8
Fibre Channel Host Ports: 0-12 / 0-24 / 0-64 / 0-128
iSCSI Host Ports: 0-8 / 0-16 / 0-16 / 0-32
Disk Drives: 16-192 / 16-384 / 16-640 / 16-1,280
Drive Chassis: 16 drives (max) in 3U / 16 drives (max) in 3U / 40 drives (max) in 4U / 40 drives (max) in 4U
Drive Types (mixable, all models): 50 GB SSD, 146/300/450/600 GB FC, and/or 1/2 TB SATA
Max Capacity (approx.): 128 TB / 384 TB / 400 TB / 800 TB
Cabinets: HP 3PAR 2M or third-party standard 19-inch cabinet (F200, F400); HP 3PAR 2M cabinet(s) (T400, T800)
3PAR Hardware Architecture

3PAR InSpire® F-Series Architecture (F200, F400) and 3PAR InSpire® T-Series Architecture (T400, T800): a finely, massively, and automatically load balanced cluster.

[Diagram legend: 3PAR ASIC, Data Cache, Host Connectivity, Disk Connectivity, Passive Backplane]
Architecture – Storage Server Components

Components called out in the cabinet diagram:
• Drive Chassis (4U)
• Drive Magazine
• Redundant Power Supplies (Drive Cage)
• Backplane
• Controller Node (4U)
• Redundant Power Supplies
• Redundant Batteries
• Redundant PDUs
• Service Processor
• Cabinet
Full-Mesh Controller Backplane
The 3PAR InServ backplane is a passive circuit board that contains slots for Controller Nodes. Each Controller Node slot is connected to every other Controller Node slot by a high-speed link (800 Megabytes per second in each direction, or 1.6 Gigabytes per second total), forming a full-mesh interconnect network between the Controller Nodes.
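As a quick illustration (my arithmetic, not from the original slides), the number of mesh links and the aggregate interconnect bandwidth follow directly from the per-link figure above:

# Sketch: full-mesh link count and aggregate bandwidth, assuming 800 MB/s per
# direction per link (1.6 GB/s per link total), as stated above.
LINK_MB_PER_SEC_EACH_WAY = 800

def mesh_links(nodes: int) -> int:
    # One link between every pair of controller node slots.
    return nodes * (nodes - 1) // 2

for nodes in (2, 4, 8):
    links = mesh_links(nodes)
    aggregate_gb = links * LINK_MB_PER_SEC_EACH_WAY * 2 / 1000
    print(f"{nodes} nodes: {links} links, ~{aggregate_gb:.1f} GB/s aggregate")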

There are two T-Class backplane types: a 4-node backplane (T400 model), which supports 2 to 4 Controller Nodes, and an 8-node backplane (T800 model), which supports 2 to 8 Controller Nodes.

In addition, a completely separate full-mesh network of RS-232 serial links provides a redundant, low-speed channel of communication for control information between the nodes, which can be used in the event of a failure of the main links.
InServ Controller Node
Each Controller Node contains a high-performance Application Specific Integrated Circuit (ASIC) designed by 3PAR. The 3PAR Gen3 ASIC is optimized for data movement between the three I/O buses, the three memory banks of Data Cache, and the seven high-speed links to the other Controller Nodes over the full-mesh backplane.

InServ controller nodes can use Fibre Channel, Gigabit Ethernet, and iSCSI ports to connect the storage server to your network, host computers, storage server components, and other storage servers. Inside each controller node there are slots for network adapters, control cache DIMMs, and data cache DIMMs.

The node is hot-pluggable.
InServ Controller Node
The number of controller nodes supported by each storage server model:

Storage Server Model: Number of Controller Nodes
InServ S400 and T400: 2 or 4
InServ S800 and T800: 2, 4, 6, or 8
InServ E200 and F200: 2
InServ F400: 2 or 4

Customers can start with two Controller Nodes in a small, "modular array" configuration and grow incrementally to eight Nodes in a non-disruptive manner, giving them powerful flexibility and performance.

The Controller Nodes are each powered by two (1+1 redundant) power supplies and backed up by a string of two batteries.
InServ Controller Node Numbering
The controller nodes assume the number of the bay they occupy in the storage server backplane. The bays are numbered from 0 to <n>, from left to right, and from top to bottom.

F400 Node Numbering

T800 Node Numbering
If an InServ T800 backplane contains only two controller nodes, they occupy the bottom two bays of the backplane enclosure and are numbered controller node 6 and controller node 7.
Drive Chassis
Drive Chassis, also referred to as Drive Cages, are intelligent, switched, hyper-
dense disk enclosures that serve as the capacity building block within an InServ
Storage Server.
Drive Chassis provide a common disk enclosure that can house all
supported drive types. This unique flexibility eliminates any incremental
expense associated with purchasing and managing separate drive chassis
for different drive types.

There are three models of drive cages: DC2, DC3, and DC4. The InServ S-Class
Storage Servers and T-Class Storage Servers may contain both DC2 and DC4
drive cages. The InServ E-Class Storage Servers and F-Class Storage Servers
only contain DC3 drive cages.

The DC2 drive cage is a 40 disk, 2 Gbps drive cage.


The DC3 drive cage is a 16 disk, 2 Gbps or 4 Gbps drive cage.
F-Class DC3: up to 4 Gbps
E-Class DC3: up to 2 Gbps
The DC4 drive cage is a 40 disk, 4 Gbps drive cage.
DC2 and DC4 Drive Chassis
The DC2 and DC4 drive cages house ten drive bays numbered 0
through 9. Each drive bay accommodates a single drive magazine that
holds four disks.

Drive Magazine in DC2 and DC4


An electronic circuit board mounted on
a mechanical structure that is inserted
into a drive bay in a drive cage. A drive
magazine holds up to four physical disks.
DC2 and DC4 Ports and Cabling
Daisy chaining is not supported for the DC2 or DC4 drive cages.

The DC2 and DC4 drive cages contain two FCAL modules for connecting the drive
cage to the controller nodes. The left-hand FCAL module has two ports: A0 and B0,
and the right-hand FCAL module has two ports: A1 and B1.
DC3 Drive Chassis
The DC3 drive cage contains 16 drive bays at the front, each accommodating
the appropriate plug-in drive magazine module. The 16 drive bays are
arranged in four rows of four drives.

Rear View
DC3 Ports and Cabling
Physical Disks
A physical disk is a hard drive. Disks can be Fibre Channel (FC), Nearline (NL), or SSD. Physical disks are located in the storage server on drive magazines or in drive modules, and the magazines and modules are contained in drive cages.

Drive vendors: Hitachi, Seagate, STEC

FC Drives: 300 GB, 400 GB, 450 GB, and 600 GB
NL Drives: 750 GB, 1 TB, and 2 TB
SSD Drives: 50 GB (T-Class and F-Class only)
Service Processor
A device inserted into a rack that enables 3PAR service personnel to locally and
remotely monitor and service 3PAR Storage Servers.

The data collected by the Service Processor (SP) is used to maintain, troubleshoot, and upgrade the SP and storage servers at the operating site. Depending on the SP's connection mode, the SP communicates with either the 3PAR Connex server or the 3PAR Collector server.
Battery
Battery tray
The storage server controller node
cabinet includes one or two battery
trays that hold the Battery Backup
Units (BBU).

Battery backup unit (BBU)


A unit containing two batteries. One battery per controller node is required for all storage server configurations.
Power Distribution Unit

An illuminated blue lamp indicates that power is being supplied to a power bank. When the blue lamp is not illuminated, the power bank is not receiving AC input.
Power On Procedure
The system takes approximately five minutes to become fully operational, provided it was gracefully shut down. If the system was powered off abruptly, powering on could take considerably longer.

1. Turn on AC power to the cabinet(s) by turning on all the PDU circuit breakers.
2. Verify that the blue LED on the front of the service processor is illuminated.
3. Verify that all drive chassis LEDs are solid green and all controller node status LEDs are blinking green once per second.
4. Power on the attached hosts.


Power Off Procedure
1. Shut down all the hosts.
2. SSH to the Service Processor or physically connect a maintenance PC to the serial connection.
3. Log in to the service processor by entering your login name and password.
4. If necessary, enter spmaint to get to the spmaint main menu.
5. Select option 4, InServ Product Maintenance.

Powering Off the Storage Server
6. Select option 6, Halt an InServ cluster/node.
7. Select the desired InServ Storage Server.
8. Select option a, all, and respond to the confirmation prompts.
9. Press X to return to the 3PAR Service Processor Menu.

CAUTION: Failure to wait until all controller nodes are in a halted state could cause the system to view the shutdown as uncontrolled and place the system in a checkld state upon power up. This can seriously impact host access to data.
Minimum Configuration
F200 and F400
− Minimum System
• 2 nodes
• 4 Drive Chassis
• 1 Magazine per Chassis (4 disks)

T400
− Minimum System
• 2 nodes
• 4 Drive Chassis
• 2 Magazines per Chassis

Software Architecture

Divakar - Deployment Support Engineer


3DC
3PAR Software
InForm Operating System Suite
Fine Grained Virtualization, Reservation-less Dedicate-on-Write, Persistent Cache, Autonomic Groups, Host Personas, Wide Striping, Thin Copy Reclamation, RAID MP (Multi-Parity), Scheduler, InForm Management Console, Full Copy, Rapid Provisioning, LDAP, Access Guard

InForm Additional Software
Thin Provisioning, Thin Persistence, Thin Conversion, Virtual Lock, Virtual Domains, Remote Copy, Virtual Copy, Dynamic Optimization, System Reporter, System Tuner, Adaptive Optimization

InForm Host Software
3PAR Mgr for VMware vCenter, Recovery Mgr VMware, Recovery Mgr SQL, Recovery Mgr Exchange, Recovery Mgr Oracle, Multi Path IO W2K3, Multi Path IO IBM AIX, Host Explorer


InForm OS
The InForm Software Suite is the core set of storage management software.

InForm Operating System: independent instances of the operating system running on each controller node.

InForm Command Line Interface: a command line user interface for monitoring, managing, and configuring 3PAR InServ Storage Servers.

InForm Management Console: a graphical user interface for monitoring, managing, and configuring 3PAR InServ Storage Servers.

Access Guard: provides volume security at logical and physical levels by enabling you to secure hosts and ports to specific virtual volumes.

3PAR Autonomic Groups: allow domains, hosts, and volumes to be grouped into a set that is managed as a single object. Autonomic groups also allow for easy updates when new hosts are added or new volumes are provisioned. If you add a new host to the set, volumes from the volume set are autonomically provisioned to the new host without any administrative intervention. If you add a new volume or a new domain to a set, the volume or domain inherits all the privileges of the set.
InForm OS
Persistent Cache: allows InServ Storage Servers to maintain a high level of performance and availability during node failure conditions, and during hardware and software upgrades. This feature allows the host to continue to write data and receive acknowledgments from the storage server if the backup node is unavailable. Persistent Cache automatically creates multiple backup nodes for logical disks that have the same owner.
Optional Software Features
Virtual Domains are used for access control. Virtual Domains allow you to limit the privileges of users to only subsets of volumes and hosts in an InServ Storage Server and ensure that virtual volumes associated with a specific domain are not exported to hosts outside of that domain.

Thin Provisioning allows you to allocate virtual volumes to application servers yet
provision only a fraction of the physical storage behind these volumes. By enabling a
true capacity-on-demand model, a storage administrator can use 3PAR Thin
Provisioning to create Thinly-Provisioned Virtual Volumes (TPVVs) that maximize asset
use.

Thin Conversion converts a fully-provisioned volume to a Thinly-Provisioned Virtual


Volume (TPVV). Virtual volumes with large amounts of allocated but unused space are
converted to TPVVs that are much smaller than the original volume. To use the Thin
Conversion feature you must have an InServ F-Class or T-Class Storage Server, a 3PAR Thin
Provisioning license, and a 3PAR Thin Conversion license.
Optional Software Features
Thin Persistence keeps InServ Thinly-Provisioned Virtual Volumes (TPVVs) small by
detecting pages of zeros during data transfers and not allocating space for the zeros.
This feature works in real-time and analyzes the data before it is written to the
destination TPVV. To use the Thin Persistence feature you must have an InServ F-Class
or T-Class Storage Server, a 3PAR Thin Provisioning license, a 3PAR Thin Conversion
license, and a 3PAR Thin Persistence license.

Remote Copy is a host-independent, array-based data mirroring solution that


enables affordable data distribution and disaster recovery for applications. With this
optional utility, you can copy virtual volumes from one InServ Storage Server to a second
InServ Storage Server. 3PAR Remote Copy currently requires the use of the InForm CLI.
Virtual Copy allows you to take instant virtual copy snapshots of existing volumes.
It uses copy-on-write technology so that virtual copies consume minimal capacity.
Virtual copies are presentable to any host with read and write capabilities. In
addition, virtual copies can be made from other virtual copies, providing endless
flexibility for test, backup, and business-intelligence applications.
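To illustrate the copy-on-write idea (a minimal sketch only, not the InForm OS implementation), a writable virtual copy can be modeled as a small delta map layered over the base volume, so that only changed blocks consume space:

# Sketch: copy-on-write style virtual copy. Only blocks written after the
# copy is taken consume space; everything else reads through from the base.
class BaseVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block number -> data

class VirtualCopy:
    def __init__(self, base: BaseVolume):
        self.base = base
        self.delta = {}                 # blocks written since the copy was taken

    def read(self, blk):
        return self.delta.get(blk, self.base.blocks.get(blk))

    def write(self, blk, data):
        self.delta[blk] = data          # the base volume stays untouched

base = BaseVolume({0: "A", 1: "B"})
copy = VirtualCopy(base)
copy.write(1, "B'")
print(copy.read(0), copy.read(1), base.blocks[1])   # A B' B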
Optional Software Features
Dynamic Optimization allows you to improve the performance of virtual volumes
without interrupting access. Use this feature to avoid over provisioning for peak system
usage by optimizing the layout of your virtual volumes. With 3PAR Dynamic Optimization
you can change virtual volume parameters, RAID levels, set sizes, and disk filters by
associating the virtual volume with a new CPG.

What is currently enabled?


cst_T400 cli% showlicense
License key was generated on Thu Jan 28 16:45:46 2010
License features currently enabled:
Domains
Dynamic Optimization
InForm OS – Cluster
InServ Storage Servers utilize a cluster-based approach.

The cluster of Controller Nodes presents to the Hosts a single, highly available,
high performance Storage System.

InServ Storage Server features a high-speed, full mesh, passive system backplane
that joins multiple Controller Nodes (the high-performance data movement
engines of the InSpire Architecture) to form a cache-coherent, active-active cluster.
InForm OS resides on each Controller Node's local disk drive.
Data Layout

Divakar - Deployment Support Engineer


3DC
Provisioning building blocks: Physical Disks → Logical Disks → Common Provisioning Group → Virtual Volume
showvlun output (VLUN → Virtual Volume)

Domain Lun VVname Host -Host_WWN/iSCSI_Name- Port Type


- 0 vq2ua485-OMG-0 vq2ua485 10000000C9564198 6:03:01 host
- 1 vq2ua485-OMG-1 vq2ua485 10000000C9564198 6:03:01 host
- 2 vq2ua485-OMG-2 vq2ua485 10000000C9564198 6:03:01 host
- 3 vq2ua485-OMG-3 vq2ua485 10000000C9564198 6:03:01 host
- 4 vq2ua485-OMG-4 vq2ua485 10000000C9564198 6:03:01 host
- 5 vq2ua485-OMG-5 vq2ua485 10000000C9564198 6:03:01 host
- 0 vq2ua485-OMG-0 vq2ua485 10000000C956419E 7:03:01 host
- 1 vq2ua485-OMG-1 vq2ua485 10000000C956419E 7:03:01 host
- 2 vq2ua485-OMG-2 vq2ua485 10000000C956419E 7:03:01 host
- 3 vq2ua485-OMG-3 vq2ua485 10000000C956419E 7:03:01 host

Truncated Output
showvv output

Id Name Domain Type CopyOf BsId Rd State AdmMB SnapMB userMB


0 admin - Base --- 0 RW started 0 0 10240
40 vq2ua485-OMG-4 - Base,tpvv --- 40 RW started 128 25600 102400
41 vq2ua485-OMG-5 - Base,tpvv --- 41 RW started 128 8192 25600
47 vq2ua511-APP-1 - Base,tpvv --- 47 RW started 256 102912 102400
48 vq2ua511-APP-2 - Base,tpvv --- 48 RW started 256 102912 102400
49 vq2ua511-APP-3 - Base,tpvv --- 49 RW started 256 102912 102400
50 vq2ua511-APP-4 - Base,tpvv --- 50 RW started 256 102912 102400
51 vq2ua511-APP-5 - Base,tpvv --- 51 RW started 256 102912 102400

Truncated Output
showvv -r output (Virtual Volume → CPG)

Id Name Domain Type CPGname AWrn% ALim% AdmMB RAdmMB SnapMB RSnapMB UserMB RUserMB

0 admin - Base --- - - 0 0 0 0 10240 20480

40 vq2ua485-OMG-4 - Base,tpvv CPG-R5-1 0 0 128 256 25600 30720 102400 0

41 vq2ua485-OMG-5 - Base,tpvv CPG-R5-1 0 0 128 256 8192 9830 25600 0

47 vq2ua511-APP-1 - Base,tpvv CPG-R5-1 0 0 256 512 102912 123494 102400 0


showcpg output

----- showcpg -----


------ SA ------ --------- SD ---------
Id Name Domain Warn% TPVVs CPVVs LDs TotMB UseMB LDs TotMB UseMB

0 CPG-R5-1 - - 32 0 4 16384 7296 115 1921280 1897472

1 CPG-R5-2 - - 32 0 4 16384 6144 78 1369600 1340416

2 CPG-R5-3 - - 16 0 4 16384 2816 40 680960 656384

Truncated Output

----- showspace -cpg -----

CPG ---EstFree_MB---- --SA_MB--- -----SD_MB-----

Name RawFree LDFree Total Used Total Used

CPG-R5-1 15447552 12872960 16384 7296 1921280 1897472

CPG-R5-2 15447552 12872960 16384 6144 1369600 1340416

CPG-R5-3 15447552 12872960 16384 2816 680960 656384

Truncated Output
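One way to read the showspace -cpg figures above (an illustrative check, assuming the CPG builds RAID 5 LDs with a set size of 6, i.e. 5 data + 1 parity chunklets, consistent with SetSz=6 in the showld -d output later): LDFree is simply RawFree minus the RAID overhead.

# Sketch: relate RawFree to LDFree for CPG-R5-1 from the sample output.
# Assumes RAID 5 LDs with a set size of 6 (5 data + 1 parity chunklets).
raw_free_mb = 15447552          # RawFree column for CPG-R5-1
set_size, parity = 6, 1

ld_free_mb = raw_free_mb * (set_size - parity) // set_size
print(ld_free_mb)               # 12872960, matching the LDFree column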
showvvmap output (Virtual Volume → LDs)

showvvmap vq2ua485-OMG-4

Space Start(MB) Length(MB) LdId LdName LdOff(MB)


sa 0 0 64 5 tp-0-sa-0.0 1024
1 64 64 6 tp-0-sa-0.1 1024
sd 0 0 256 39 tp-0-sd-12.0 1280
1 256 256 72 tp-0-sd-26.1 5120
2 512 256 39 tp-0-sd-12.0 1792
3 768 256 72 tp-0-sd-26.1 5632
4 1024 256 497 tp-0-sd-77.0 4608
5 1280 256 498 tp-0-sd-77.1 4864
6 1536 256 497 tp-0-sd-77.0 5120
7 1792 256 498 tp-0-sd-77.1 5376
8 2048 256 497 tp-0-sd-77.0 5376

Truncated Output
showld output

Id Name Domain RAID State O Own SizeMB UsedMB Use Lgct LgId WThru MapV
0 log6.0 - 1 normal 6/- 20480 0 log 0 --- Y N
1 log7.0 - 1 normal 7/- 20480 0 log 0 --- Y N
2 pdsld0.0 - 1 normal 6/7 16384 0 P,F 0 --- Y N
3 admin.usr.0 - 1 normal 6/7 5120 5120 V 0 --- N Y
4 admin.usr.1 - 1 normal 7/6 5120 5120 V 0 --- N Y
5 tp-0-sa-0.0 - 1 normal 6/7 4096 3648 C,SA 0 --- N Y
6 tp-0-sa-0.1 - 1 normal 7/6 4096 3648 C,SA 0 --- N Y
7 tp-0-sd-0.0 - 5 normal 6/7 16640 16640 C,SD 0 --- N Y
8 tp-0-sd-0.1 - 5 normal 7/6 16640 16640 C,SD 0 --- N Y
9 tp-0-sa-1.0 - 1 normal 6/7 4096 0 C,SA 0 --- N N
10 tp-0-sa-1.1 - 1 normal 7/6 4096 0 C,SA 0 --- N N

Truncated Output
showld -d output

Id Name Domain CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail
0 log6.0 - --- 1 6/- 20480 40960 1 256 2 0 cage port
1 log7.0 - --- 1 7/- 20480 40960 1 256 2 0 cage port
2 pdsld0.0 - --- 1 6/7 16384 32768 32 256 2 0 cage port
3 admin.usr.0 - --- 1 6/7 5120 10240 20 256 2 0 cage port
4 admin.usr.1 - --- 1 7/6 5120 10240 20 256 2 0 cage port
5 tp-0-sa-0.0 - CPG-R5-1 1 6/7 4096 8192 16 256 2 0 cage port
6 tp-0-sa-0.1 - CPG-R5-1 1 7/6 4096 8192 16 256 2 0 cage port
7 tp-0-sd-0.0 - CPG-R5-1 5 6/7 16640 19968 13 128 6 0 cage port
8 tp-0-sd-0.1 - CPG-R5-1 5 7/6 16640 19968 13 128 6 0 cage port
9 tp-0-sa-1.0 - CPG-R5-1 1 6/7 4096 8192 16 256 2 0 cage port
10 tp-0-sa-1.1 - CPG-R5-1 1 7/6 4096 8192 16 256 2 0 cage port

Truncated Output
showvvpd output (Virtual Volume → PDs)

showvvpd vq2ua485-OMG-4
Id Cage_Pos SA SD usr total
0 0:00:00 0 4 0 4
1 0:00:01 0 3 0 3
2 0:00:02 1 4 0 5
3 0:00:03 1 4 0 5
4 0:01:00 1 4 0 5
5 0:01:01 0 4 0 4
6 0:01:02 0 4 0 4
7 0:01:03 1 4 0 5
8 0:02:00 1 4 0 5
9 0:02:01 0 4 0 4
10 0:02:02 0 4 0 4

Truncated Output
showpd -c and showpd -i output
----- Normal Chunklets ------ ---- Spare Chunklets ----
- Used -- - ---- Unused ---- - Used - -- ---- Unused ----
PD-ID Total OK Fail Free Uninit Fail OK Fail Free Uninit Fail
0 1110 725 0 351 0 0 0 0 34 0 0
1 1110 706 0 370 0 0 0 0 34 0 0
2 1110 731 0 346 0 0 0 0 33 0 0
3 1110 705 0 372 0 0 0 0 33 0 0
4 1110 735 0 342 0 0 0 0 33 0 0
5 1110 704 0 373 0 0 0 0 33 0 0
Truncated Output
1 Chunklet = 256 MB; 1110 chunklets x 256 MB = 284,160 MB of raw space per physical disk.
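An illustrative conversion of the chunklet counts above back into capacity (assumed arithmetic, using PD 0 from the sample output):

# Sketch: convert showpd -c chunklet counts into megabytes (1 chunklet = 256 MB).
CHUNKLET_MB = 256
pd0 = {"total": 1110, "used_ok": 725, "free": 351, "spare_free": 34}

raw_mb  = pd0["total"] * CHUNKLET_MB                      # 1110 * 256 = 284160 MB per disk
used_mb = pd0["used_ok"] * CHUNKLET_MB
free_mb = (pd0["free"] + pd0["spare_free"]) * CHUNKLET_MB
print(f"raw {raw_mb} MB, in use {used_mb} MB, free (incl. spare) {free_mb} MB")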

ID CagePos Device_id Vendor FW_Revision Serial FW_status Type K_RPM


0 0:00:00 ST3300007FC SEAGATE XR36 3KR4CPHG current FC 10
1 0:00:01 ST3300007FC SEAGATE XR36 3KR4CAVQ current FC 10
2 0:00:02 ST3300007FC SEAGATE XR36 3KR4CPQ7 current FC 10
3 0:00:03 ST3300007FC SEAGATE XR36 3KR4CNPL current FC 10
4 0:01:00 ST3300007FC SEAGATE XR36 3KR4F3FX current FC 10
5 0:01:01 ST3300007FC SEAGATE XR36 3KR4FA1R current FC 10
Truncated Output
3PAR InServ Data Layout

Drive Chassis are point-to-point connected to controller nodes to provide "cage level" availability: the ability to withstand the loss of an entire drive enclosure without losing access to your data.

F-Series systems can optionally daisy chain two chassis to the same node port pair.

3PAR InServ Data Layout

For T-Class, Drive Magazines hold four of the same drive type (FC 15K, SATA, or SSD) in one magazine.

Drive magazines can be mixed inside the same drive chassis, but the same drive magazines must be installed in the same slot of each drive chassis (i.e. slot 0 all FC, slot 1 all SATA).

3PAR InServ Data Layout

Each Physical Drive (PD) is cut up into "chunklets", each 256 MB in size.

Each VV created is automatically widely striped across chunklets on all disk spindles of the same type (Fibre Channel, SATA, etc.), creating a massively parallel system.

3PAR InServ Data Layout

RAID sets (a 1+1 RAID 1 set, for example) stripe the members of the RAID set into separate chassis, massively striping data and enabling high availability.


3PAR InServ Data Layout

RAID sets are bound together to form logical disks (LDs).


3PAR InServ Data Layout

Logical Disks are bound to, serviced by, and load balanced across all Nodes.


3PAR InServ Data Layout

All LDs are bound together to form a virtual volume (VV). The VV is presented to the host as a VLUN.

The host is usually zoned to at least one port of a node pair (0-1, 2-3, 4-5, 6-7). Host access to the VLUN is via an active/active configuration, allowing for high speed access.
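To summarize the layering above in one place, here is a minimal illustrative model (class and field names are mine, not InForm OS internals): chunklets build LDs, LDs build a VV, and the VV is exported to a host as a VLUN.

# Minimal sketch of the provisioning hierarchy described above.
# Class and field names are illustrative only, not InForm OS structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chunklet:
    pd_id: int          # physical disk the 256 MB chunklet lives on
    offset_mb: int

@dataclass
class LogicalDisk:
    raid: str           # e.g. "RAID5 (3D+1P)"
    chunklets: List[Chunklet] = field(default_factory=list)

@dataclass
class VirtualVolume:
    name: str
    lds: List[LogicalDisk] = field(default_factory=list)

    def export_as_vlun(self, host: str, lun: int) -> str:
        # A VLUN is simply this VV presented to a host under a LUN number.
        return f"{host} sees {self.name} as LUN {lun}"

vv = VirtualVolume("vq2ua485-OMG-4",
                   [LogicalDisk("RAID5", [Chunklet(pd, 0) for pd in range(4)])])
print(vv.export_as_vlun("vq2ua485", 4))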


Supported RAID Levels
The storage system supports the following RAID types:
RAID 0
RAID 10 (RAID 1)
RAID 50 (RAID 5) (3D+1P)
RAID Multi-Parity (MP) (6D+2P)

The RAID property is implemented at the LD level.
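As an illustrative aside (not from the slides), the usable share of raw space for each listed layout can be computed directly from its data/parity geometry:

# Sketch: usable fraction of raw space for the RAID layouts listed above.
def usable_fraction(data: int, redundancy: int) -> float:
    return data / (data + redundancy)

layouts = {
    "RAID 0":            (1, 0),   # no redundancy
    "RAID 10 (1+1)":     (1, 1),   # mirror
    "RAID 50 (3D+1P)":   (3, 1),
    "RAID MP (6D+2P)":   (6, 2),
}
for name, (d, r) in layouts.items():
    print(f"{name}: {usable_fraction(d, r):.0%} of raw space usable")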

What are Row Size, Set Size, and Step Size?

Step Size: the amount of contiguous data written to one chunklet before moving to the next member of the set (the StepKB column in showld -d).
Set Size: the number of chunklets that make up one RAID set (the SetSz column).
Row Size: the number of sets in a row (the RowSz column).
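To make these terms concrete, the showld -d sample earlier lists LD tp-0-sd-0.0 with RowSz 13, SetSz 6, RAID 5, RSizeMB 19968 and SizeMB 16640. The sketch below assumes one 256 MB chunklet per set member per row (my reading, not stated on the slide), which reproduces both size columns exactly:

# Sketch: reconstruct an LD's raw and usable size from its row size and set size.
# Figures come from the showld -d sample (LD tp-0-sd-0.0); the one-chunklet-per-member
# assumption is mine, but it reproduces the RSizeMB and SizeMB columns exactly.
CHUNKLET_MB = 256
row_size, set_size, parity_per_set = 13, 6, 1   # RAID 5: each set holds 5 data + 1 parity chunklets

raw_mb    = row_size * set_size * CHUNKLET_MB                     # 13 * 6 * 256 = 19968 (RSizeMB)
usable_mb = row_size * (set_size - parity_per_set) * CHUNKLET_MB  # 13 * 5 * 256 = 16640 (SizeMB)
print(raw_mb, usable_mb)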
System Internals
The raw storage ends up in 4 places and 4 places only:
1. Free chunklets
2. Spare assignment: some chunklets are identified as spares when the storage server is first set up at installation.
3. Failed chunklets
4. Logical disks

Where does the logical disk space go?

1. System LDs -> preserved LDs, the admin volume, and logging LDs
2. CPG LDs -> SA/SD space -> TPVVs and CPVVs
3. User base LDs -> user volumes, including manually administered SA/SD space as well as unmapped LDs
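As a small sanity check of the accounting above (using PD 0 from the earlier showpd -c sample, and reading its spare column as the spare assignment), every raw chunklet should fall into exactly one of the four buckets:

# Sketch: the four (and only four) places raw chunklets can end up.
# Counts are taken from PD 0 in the showpd -c sample: 725 in logical disks,
# 351 free, 34 assigned as spares, 0 failed, out of 1110 total.
buckets = {"free": 351, "spare": 34, "failed": 0, "logical_disks": 725}

assert sum(buckets.values()) == 1110   # all raw chunklets are accounted for
for place, count in buckets.items():
    print(f"{place}: {count} chunklets ({count * 256} MB)")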
