3PAR - Training Day 1
[Figure: InServ hardware overview (T400). Legend: drive chassis (16 drives max in 3U; 40 drives max in 4U), 3PAR ASIC, host connectivity, redundant power supplies, drive magazines, backplane, redundant PDUs, service processor, cabinet.]
Full-Mesh Controller Backplane
The 3PAR InServ backplane is a passive circuit board that contains
slots for Controller Nodes. Each Controller Node slot is connected
to every other Controller Node slot by a high-speed link (800
Megabytes per second in each direction, or 1.6 Gigabytes per
second total), forming a full-mesh interconnect network between
the Controller Nodes.
Customers can start with two Controller Nodes in a small, “modular array”
configuration and grow incrementally to eight Nodes in a non-disruptive
manner, giving them powerful flexibility and performance.
The Controller Nodes are each powered by two (1+1 redundant) power
supplies and backed up by a string of two batteries.
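The link count and aggregate backplane bandwidth implied by the full mesh can be sketched as follows (a minimal illustration; only the 1.6 GB/s per-link figure comes from the text above, the aggregate numbers are simple arithmetic on it):

```python
# Full-mesh interconnect: every pair of controller node slots is joined
# by a dedicated link running at 1.6 GB/s bidirectional (800 MB/s each way).
from math import comb

LINK_GBPS_TOTAL = 1.6  # bidirectional throughput of one backplane link

def mesh_links(nodes: int) -> int:
    """Number of point-to-point links in a full mesh of `nodes` nodes."""
    return comb(nodes, 2)

def aggregate_bandwidth_gbps(nodes: int) -> float:
    """Total backplane bandwidth across all links, in GB/s."""
    return mesh_links(nodes) * LINK_GBPS_TOTAL

for n in (2, 4, 8):
    print(n, "nodes:", mesh_links(n), "links,",
          aggregate_bandwidth_gbps(n), "GB/s aggregate")
```

With the maximum of eight nodes this gives 28 links, which is why the passive mesh scales non-disruptively: adding a node only activates links that already exist on the backplane.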
InServ Controller Node Numbering
The controller nodes assume the number of the bay they occupy in the storage server
backplane. The bays are numbered from 0 to <n>, from left to right and from top to bottom.
There are three models of drive cages: DC2, DC3, and DC4. The InServ S-Class
Storage Servers and T-Class Storage Servers may contain both DC2 and DC4
drive cages. The InServ E-Class Storage Servers and F-Class Storage Servers
only contain DC3 drive cages.
The DC2 and DC4 drive cages contain two FCAL modules for connecting the drive
cage to the controller nodes. The left-hand FCAL module has two ports, A0 and B0,
and the right-hand FCAL module has two ports, A1 and B1.
DC3 Drive Chassis
The DC3 drive cage contains 16 drive bays at the front, each accommodating
the appropriate plug-in drive magazine module. The 16 drive bays are
arranged in four rows of four drives.
Rear View
DC3 Ports and Cabling
Physical Disks
A physical disk is a hard drive. Disks can be Fibre Channel (FC), Nearline (NL), or SSD.
Physical disks are mounted on drive magazines or in drive modules,
and the magazines and modules are contained in drive cages.
Verify that the blue LED on the front of the service processor is illuminated.
Verify that all drive chassis LEDs are solid green and all controller node status LEDs
are blinking green once per second.
CAUTION: Failure to wait until all controller nodes are in a halted state could cause
the system to view the shutdown as uncontrolled and place the system in a
checkld state upon power-up. This can seriously impact host access to data.
Minimum Configuration
F200 and F400
− Minimum System
• 2 nodes
• 4 Drive Chassis
• 1 Magazine per Chassis (4 disks)
T400
− Minimum System
• 2 nodes
• 4 Drive Chassis
• 2 Magazines per Chassis
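The minimum disk counts these configurations imply can be checked with a quick sketch (assuming 4 disks per magazine on both model families, as stated for the F-class magazine above):

```python
# Minimum-configuration disk counts implied by the lists above.
DISKS_PER_MAGAZINE = 4  # assumption: both families ship 4-disk magazines

def min_disks(chassis: int, magazines_per_chassis: int) -> int:
    """Smallest number of physical disks in a minimum system."""
    return chassis * magazines_per_chassis * DISKS_PER_MAGAZINE

print("F200/F400 minimum:", min_disks(4, 1), "disks")  # 16
print("T400 minimum:", min_disks(4, 2), "disks")       # 32
```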
Hardware Architecture
Software Architecture
InForm Management Console
Software features include: Wide Striping, Thin Copy Reclamation, RAID MP (Multi-Parity), Scheduler, Thin Conversion, Virtual Lock, Recovery Manager for SQL, Recovery Manager for Exchange, Virtual Copy, Dynamic Optimization, System Reporter, and Multipath IO for IBM AIX.
3PAR Autonomic Groups allow domains, hosts, and volumes to be grouped into a set
that is managed as a single object. Autonomic groups also allow for easy updates
when new hosts are added or new volumes are provisioned. If you add a new host to
the set, volumes from the volume set are autonomically provisioned to the new host
without any administrative intervention. If you add a new volume or a new domain to
a set, the volume or domain inherits all the privileges of the set.
InForm OS
Persistent Cache allows InServ Storage Servers to maintain a high level of
performance and availability during node failure conditions and during hardware
and software upgrades. This feature allows the host to continue to write data and
receive acknowledgments from the storage server even if the backup node is unavailable.
Persistent Cache automatically creates multiple backup nodes for logical disks that
have the same owner.
Optional Software Features
Virtual Domains are used for access control. Virtual Domains allow you to limit the
privileges of users to subsets of the volumes and hosts in an InServ Storage Server
and ensure that virtual volumes associated with a specific domain are not exported
to hosts outside of that domain.
Thin Provisioning allows you to allocate virtual volumes to application servers yet
provision only a fraction of the physical storage behind these volumes. By enabling a
true capacity-on-demand model, a storage administrator can use 3PAR Thin
Provisioning to create Thinly-Provisioned Virtual Volumes (TPVVs) that maximize asset
use.
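The capacity-on-demand behaviour of a TPVV can be sketched as follows. This is a hypothetical illustration, not the InForm implementation; the class and the 256 MB allocation unit are assumptions chosen to match the chunklet size used elsewhere in this material:

```python
# Hypothetical sketch of thin-provisioning accounting: the host sees the
# full exported size, but physical space is drawn from the pool only as
# writes land, in fixed allocation units.
ALLOC_UNIT_MB = 256  # assumed allocation granularity (chunklet-sized)

class ThinVolume:
    def __init__(self, exported_mb: int):
        self.exported_mb = exported_mb   # size the host sees
        self.allocated_mb = 0            # physical space actually consumed

    def write(self, offset_mb: int, length_mb: int) -> None:
        # Grow the allocation only when a write lands past what is backed.
        needed = min(offset_mb + length_mb, self.exported_mb)
        if needed > self.allocated_mb:
            units = -(-needed // ALLOC_UNIT_MB)  # ceiling division
            self.allocated_mb = min(units * ALLOC_UNIT_MB, self.exported_mb)

vv = ThinVolume(exported_mb=100 * 1024)  # host sees 100 GB
vv.write(0, 300)                         # 300 MB written -> 512 MB backed
print(vv.exported_mb, vv.allocated_mb)
```

The gap between `exported_mb` and `allocated_mb` is exactly the "fraction of the physical storage" the text describes: the administrator provisions the difference only if and when the hosts actually write it.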
The cluster of Controller Nodes presents a single, highly available,
high-performance storage system to the hosts.
The InServ Storage Server features a high-speed, full-mesh, passive system backplane
that joins multiple Controller Nodes (the high-performance data movement
engines of the InSpire Architecture) to form a cache-coherent, active-active cluster.
The InForm OS resides on each Controller Node's local disk drive.
Data Layout
Truncated Output
showvv output
Truncated Output
showvv -r output
Virtual Volume CPG
Id Name Domain Type CPGname AWrn% ALim% AdmMB RAdmMB SnapMB RSnapMB UserMB RUserMB
Truncated Output
showvvmap
Virtual Volume LDs
showvvmap vq2ua485-OMG-4
Truncated Output
showld output
Id Name Domain RAID State O Own SizeMB UsedMB Use Lgct LgId WThru MapV
0 log6.0 - 1 normal 6/- 20480 0 log 0 --- Y N
1 log7.0 - 1 normal 7/- 20480 0 log 0 --- Y N
2 pdsld0.0 - 1 normal 6/7 16384 0 P,F 0 --- Y N
3 admin.usr.0 - 1 normal 6/7 5120 5120 V 0 --- N Y
4 admin.usr.1 - 1 normal 7/6 5120 5120 V 0 --- N Y
5 tp-0-sa-0.0 - 1 normal 6/7 4096 3648 C,SA 0 --- N Y
6 tp-0-sa-0.1 - 1 normal 7/6 4096 3648 C,SA 0 --- N Y
7 tp-0-sd-0.0 - 5 normal 6/7 16640 16640 C,SD 0 --- N Y
8 tp-0-sd-0.1 - 5 normal 7/6 16640 16640 C,SD 0 --- N Y
9 tp-0-sa-1.0 - 1 normal 6/7 4096 0 C,SA 0 --- N N
10 tp-0-sa-1.1 - 1 normal 7/6 4096 0 C,SA 0 --- N N
Truncated Output
showld -d output
Id Name Domain CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail
0 log6.0 - --- 1 6/- 20480 40960 1 256 2 0 cage port
1 log7.0 - --- 1 7/- 20480 40960 1 256 2 0 cage port
2 pdsld0.0 - --- 1 6/7 16384 32768 32 256 2 0 cage port
3 admin.usr.0 - --- 1 6/7 5120 10240 20 256 2 0 cage port
4 admin.usr.1 - --- 1 7/6 5120 10240 20 256 2 0 cage port
5 tp-0-sa-0.0 - CPG-R5-1 1 6/7 4096 8192 16 256 2 0 cage port
6 tp-0-sa-0.1 - CPG-R5-1 1 7/6 4096 8192 16 256 2 0 cage port
7 tp-0-sd-0.0 - CPG-R5-1 5 6/7 16640 19968 13 128 6 0 cage port
8 tp-0-sd-0.1 - CPG-R5-1 5 7/6 16640 19968 13 128 6 0 cage port
9 tp-0-sa-1.0 - CPG-R5-1 1 6/7 4096 8192 16 256 2 0 cage port
10 tp-0-sa-1.1 - CPG-R5-1 1 7/6 4096 8192 16 256 2 0 cage port
Truncated Output
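The RSizeMB column in the showld -d listing can be reproduced from SizeMB, the RAID level, and the set size, which is a useful sanity check when reading these tables. The helper below is an illustration written for this purpose, not a 3PAR tool:

```python
# Check raw size (RSizeMB) against user size for the rows shown above:
# RAID 1 with set size 2 mirrors every MB (raw = 2x the user size), while
# RAID 5 with set size 6 (5 data + 1 parity) adds one parity column
# (raw = size * 6/5).
def raw_size_mb(size_mb: int, raid: int, set_size: int) -> int:
    if raid == 1:
        return size_mb * set_size          # full mirror copies
    if raid == 5:
        data_cols = set_size - 1           # one parity chunklet per set
        return size_mb * set_size // data_cols
    raise ValueError("unhandled RAID level")

# (SizeMB, RAID, SetSz, RSizeMB) taken from the showld -d listing
rows = [(20480, 1, 2, 40960), (16640, 5, 6, 19968), (4096, 1, 2, 8192)]
for size, raid, setsz, expected in rows:
    assert raw_size_mb(size, raid, setsz) == expected
print("raw sizes consistent with the listing")
```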
showvvpd
Virtual Volume PDs
showvvpd vq2ua485-OMG-4
Id Cage_Pos SA SD usr total
0 0:00:00 0 4 0 4
1 0:00:01 0 3 0 3
2 0:00:02 1 4 0 5
3 0:00:03 1 4 0 5
4 0:01:00 1 4 0 5
5 0:01:01 0 4 0 4
6 0:01:02 0 4 0 4
7 0:01:03 1 4 0 5
8 0:02:00 1 4 0 5
9 0:02:01 0 4 0 4
10 0:02:02 0 4 0 4
Truncated Output
showpd -c and showpd -i output
            -------- Normal Chunklets --------   -------- Spare Chunklets -------
            -- Used --   ----- Unused -----      -- Used --   ----- Unused -----
PD-ID Total  OK   Fail   Free  Uninit  Fail       OK   Fail   Free  Uninit  Fail
0 1110 725 0 351 0 0 0 0 34 0 0
1 1110 706 0 370 0 0 0 0 34 0 0
2 1110 731 0 346 0 0 0 0 33 0 0
3 1110 705 0 372 0 0 0 0 33 0 0
4 1110 735 0 342 0 0 0 0 33 0 0
5 1110 704 0 373 0 0 0 0 33 0 0
Truncated Output
1 Chunklet = 256 MB: 1110 x 256 = 284,160 MB of raw capacity per disk
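The chunklet accounting in the showpd rows above can be verified directly: used, free, and spare chunklets should sum to the total on every disk, and the total times 256 MB gives the per-disk raw capacity:

```python
# Each physical disk is divided into 256 MB chunklets. For the showpd
# rows above, normal used + normal free + spare free should equal the
# chunklet total, and 1110 chunklets x 256 MB is the per-disk capacity.
CHUNKLET_MB = 256

# (Total, normal Used OK, normal Free, spare Free) from the listing
rows = [(1110, 725, 351, 34), (1110, 706, 370, 34), (1110, 731, 346, 33)]
for total, used, free, spare in rows:
    assert used + free + spare == total, "chunklet accounting mismatch"

print(1110 * CHUNKLET_MB, "MB of raw capacity per disk")  # 284160
```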
[Figure: 3PAR InServ Data Layout. Logical disks (LDs) owned by the controller nodes are built from 256 MB chunklets that are widely striped across all disk spindles of the same system, so every drive magazine contributes to every volume and access is balanced across the disks.]
What is?
Row Size
Set Size
Step Size
3. User base LDs -> user volumes, including manually administered SA/SD space as
well as unmapped LDs
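How row size, set size, and step size interact can be sketched with a toy address mapping. This is an illustrative model only, not the actual InForm layout algorithm; the constants are assumptions chosen to match the RAID 1 rows in the showld -d listing (StepKB 256, SetSz 2):

```python
# Hedged sketch: map a logical-disk offset onto (row, set, step) given
# the three layout parameters. Step size is the contiguous data written
# to one RAID set before moving on; set size is the chunklets per RAID
# set; row size is the number of sets striped side by side in one row.
STEP_KB = 256   # contiguous KB per step (StepKB column)
SET_SIZE = 2    # chunklets per RAID set, e.g. a RAID 1 mirror pair (SetSz)
ROW_SIZE = 4    # RAID sets per row (RowSz, truncated here for clarity)

def locate(offset_kb: int):
    """Return (row, set_in_row, overall_step) for a logical offset."""
    step = offset_kb // STEP_KB       # which step of the LD this falls in
    set_in_row = step % ROW_SIZE      # stripe round-robin across the sets
    row = step // ROW_SIZE            # advance to the next row when full
    return row, set_in_row, step

print(locate(0), locate(256), locate(1024))
```

Consecutive 256 KB steps land on consecutive RAID sets, which is the mechanism behind the wide striping shown in the data-layout figure: sequential I/O fans out across many spindles instead of queuing on one.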