Introduction To Mainframe Hardware
Agenda
Definition of Mainframe
What is a Mainframe?
- A style of operation, applications, and operating system facilities.
- Computers that can support thousands of applications and input/output devices to simultaneously serve thousands of users.
- A mainframe is the central data repository, or hub, in a corporation's data processing center, linked to users through less powerful devices such as workstations, terminals, or media devices.
- Can run multiple, isolated operating systems concurrently.
- Centralized control of resources.
- Optimized for I/O in all business-related data processing applications, supporting high-speed networking and terabytes of disk storage.
WHO Uses Mainframes? And Why?
[Customer logo chart: banks and other financial institutions (Banco Itau, Bank of NZ, Bank of Montreal, Bank of China, Central Bank of Russia, Philippines Savings Bank, Industrial Bank of Korea, and banks in Slovenia, Peru, Brazil, India, Kazakhstan, Belarus, and the USA), plus retailers, healthcare, utilities/energy, governments, transportation (including Belarusian Railways), hosting providers, gaming, manufacturing, and research institutes across many countries.]
Typical workloads by industry:
- Banking: core banking, wholesale banking, payments
- Insurance: Internet rate quotes, claims processing
- Retail: on-line catalog, supply chain management, customer analysis
- Healthcare: patient care systems, online claims submission and payments, customer care and insight
- Public sector: electronic tax processing, web-based social security, fee payments
What is a workload?
The set of related applications and systems, spanning several business functions, that together satisfy one or more business processes.
Academic Initiative
1,067 schools enrolled in the IBM System z Academic Initiative program, reaching students in 67 countries (more than half outside the US).
Mainframe contests continue to draw high school, college, and university student
participation around the world with 43,825 students from 32 countries.
3,246 students from 40 countries have taken the IBM System z Mastery Test. The test is offered to students globally in 2011 at no cost.
http://www-05.ibm.com/tr/industries/education/mainframe-index.html
IBM Academic Initiative System z resources (system access, course materials, education)
are available globally for universities and colleges.
Systemzjobs.com (Job board) is a no-fee service that connects students and experienced
professionals seeking System z job opportunities with IBM System z clients, partners, and
businesses.
WHY the Mainframe Delivers
Mainframe Software
Customer Information Control System (CICS) - a transaction processing environment that allows applications to share information and satisfy customer requirements for information.
Information Management System Transaction Manager (IMS TM) - a transaction processing environment and an application server.
Information Management System Database Server (IMS DB) - a database system capable of extremely high performance.
DB2 - a full-function relational database that leverages all of the features of the Mainframe and z/OS to provide high availability and high performance.
WebSphere Application Server (WAS) - an application server that runs on z/OS and provides true application portability between the Mainframe and distributed systems.
z HIGH SCALABILITY - supports high business growth.
z GREEN SAVINGS - a green strategy for data centers running out of energy and space: the mainframe lowers power consumption for each unit of work, and consolidating many underutilized distributed servers (each hosting a few applications) onto one machine yields customer savings.
Flexibility to respond
Mainframe: hardware support for virtualization (10% of circuits are used for virtualization); the PR/SM hypervisor runs in firmware.
Distributed platforms: limited per-core virtual server scalability; physical server sprawl is needed to scale.
[Diagram: the mainframe concentrates cache, processor speed, and RAS around many workloads; distributed servers replicate cache, processor speed, and RAS per workload. Note: system representations are not to scale; proportions may vary based on generation of chip and model.]
[Chart: Mainframe EFFICIENCY - a green strategy when running out of energy and space; savings figures of up to 70%, 80%, and 90% shown. EXTREME VIRTUALIZATION - flexibility to respond.]
Mainframe Security
Reduce business risk. Helping businesses:
- Protect from INTRUSION: z/OS and z/VM Integrity Statement.
- Protect DATA: built-in encryption accelerators in every server; FIPS 140-2 Level 4 certified encryption coprocessors for highly secure encryption.
- Ensure PRIVACY: access to all resources is controlled by an integrated central security manager.
- Protect VIRTUAL SERVERS: the only servers with EAL5 Common Criteria certification for partitioning.
- Respond to COMPLIANCE REGULATIONS: up to 70% savings in security audits.
Mainframe AVAILABILITY and RELIABILITY - continuous business operations.
[Diagram: multiple mainframes spanning Site 1 and Site 2 - designed for application availability of 99.999%, and an industry-leading solution for disaster recovery.]
ARCHITECTURE & Hardware Platform
History
April 7, 1964, Poughkeepsie, NY: announcement of the System/360 family of general-purpose computers:
A new generation of electronic computing equipment was introduced today
by International Business Machines Corporation. IBM Board Chairman
Thomas J. Watson Jr. called the event the most important product
announcement in the company's history.
The new equipment is known as the IBM System/360.
"System/360 represents a sharp departure from concepts of the past in
designing and building computers. It is the product of an international
effort in IBM's laboratories and plants and is the first time IBM has
redesigned the basic internal architecture of its computers in a decade.
The result will be more computer productivity at lower cost than ever
before. This is the beginning of a new generation - - not only of computers
- - but of their application in business, science and government ." *
* from the April 1964 announcement press release
Timeline: System/370 (1970), System/390 (1990), z900 (2000), z990 (2003), z9 EC (2005), z9 BC (2006), z10 EC (2008), z10 BC (2009), zEnterprise 196 (2010), zEnterprise 114 (2011).
Conceptual diagram
OLD S/360: the central processor box contains the processors, memory, control circuits, and interfaces for channels.
RECENT: current CPC designs are considerably more complex than the early S/360 design. This complexity includes many areas: I/O connectivity and configuration, I/O operation, and partitioning of the system.
Balanced System: CPU, nWay, Memory, I/O Bandwidth*

System   Processors   Memory     I/O bandwidth*   PCI for 1-way
z900     16-way       64 GB      24 GB/sec        300
z990     32-way       256 GB     96 GB/sec        450
z9 EC    54-way       512 GB     172.8 GB/sec     600
z10 EC   64-way       1.5 TB**   172.8 GB/sec     920
z196     80-way       3 TB**     288 GB/sec       1202

* Servers exploit a subset of their designed I/O capability
** Up to 1 TB per LPAR
PCI - Processor Capacity Index
LPAR concept
Provides an opportunity to consolidate distributed environments to a centralized location.
HiperDispatch
PR/SM and z/OS work in tandem to more efficiently use processor resources. HiperDispatch
is a function that combines the dispatcher actions and the knowledge that PR/SM has about
the topology of the server.
Performance can be optimized by redispatching units of work to the same processor group, keeping processes running near their cached instructions and data and minimizing transfers of data ownership among processors and books.
The nested topology is returned to z/OS by the Store System Information (STSI) 15.1.3
instruction, and HiperDispatch utilizes the information to concentrate logical processors
around shared caches (L3 at PU chip level, and L4 at book level), and dynamically optimizes
assignment of logical processors and units of work.
The z/OS dispatcher manages multiple queues, called affinity queues, with a target of four processors per queue, which fits neatly onto a single PU chip. These queues are used to assign work to as few logical processors as are needed for a given logical partition workload. So even if the logical partition is defined with a large number of logical processors, HiperDispatch concentrates the work on a number of processors nearest to the required capacity, kept within a book boundary where possible. The sketch below illustrates the affinity-queue idea.
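A minimal conceptual sketch in C, assuming nothing about PR/SM internals: each work unit remembers a home affinity queue, and each queue feeds only a small group of logical processors that share a chip-level cache. All names and numbers are illustrative, not IBM's implementation.

#include <stdio.h>

#define CPUS_PER_QUEUE 4   /* target: four logical processors per queue (one PU chip) */
#define NUM_QUEUES     2   /* illustrative: 8 logical processors -> 2 affinity queues */

/* A unit of work remembers the affinity queue it was first assigned to. */
struct work_unit {
    int id;
    int home_queue;        /* -1 until first dispatch, then reused */
};

/* Keep a work unit on its home queue so it is redispatched on a
 * processor that shares cache with the one that last ran it. */
static int choose_queue(struct work_unit *w) {
    if (w->home_queue < 0)
        w->home_queue = w->id % NUM_QUEUES;   /* naive initial placement */
    return w->home_queue;
}

int main(void) {
    struct work_unit tasks[6];
    for (int i = 0; i < 6; i++) {
        tasks[i].id = i;
        tasks[i].home_queue = -1;
        int q = choose_queue(&tasks[i]);
        printf("work unit %d -> queue %d (logical CPs %d..%d)\n",
               i, q, q * CPUS_PER_QUEUE, (q + 1) * CPUS_PER_QUEUE - 1);
    }
    return 0;
}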
LPAR Characteristics Summary
[Summary table]
Terminology Overlap
[Comparison table]
[Diagram: the I/O hierarchy - partitions use subchannels, which map onto channels, which connect through control units (CU) to devices (disk, tape, printers).]
[Diagram: up to four logical channel subsystems (LCSS 0-3), each with its HSA, cache, MBAs, and SAP, sharing the physical channel subsystem, which attaches FICON and ESCON switches, control units, and devices.]
The I/O subsystem layer exists between the operating system and the CHPIDs. The I/O control layer uses a control file, the IOCDS, that translates physical I/O addresses into device numbers used by z/OS. Device numbers are assigned by the system programmer when creating the IODF and IOCDS, and are arbitrary (but not random!).
[Diagram: device numbers may be statically or dynamically assigned.]
Sample IOCDS statements (built from an IODF):
TOK=('CECDCEA',00800007C74E2094220520410111046F00000000,*
00000000,'11-02-15','22:05:20','ESRAU','IODF99')
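*  RESOURCE: names the logical partitions defined in each channel subsystem (CSS)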
RESOURCE PARTITION=((CSS(0),(PCF2GAR2,5),(PRDA,1),(PRDC,2),(PR*
DE,3),(PRDF,4),(*,6),(*,7),(*,8),(*,9),(*,A),(*,B),(*,C)*
,(*,D),(*,E),(*,F)),(CSS(1),(*,1),(*,2),(*,3),(*,4),(*,5*
),(*,6),(*,7),(*,8),(*,9),(*,A),(*,B),(*,C),(*,D),(*,E),*
(*,F)),(CSS(2),(*,1),(*,2),(*,3),(*,4),(*,5),(*,6),(*,7)*
,(*,8),(*,9),(*,A),(*,B),(*,C),(*,D),(*,E),(*,F)),(CSS(3*
),(*,1),(*,2),(*,3),(*,4),(*,5),(*,6),(*,7),(*,8),(*,9),*
(*,A),(*,B),(*,C),(*,D),(*,E),(*,F)))
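*  CHPID: channel path 00 in CSS(0), a coupling link (TYPE=CIB) shared by PRDA and PRDC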
CHPID PATH=(CSS(0),00),SHARED,PARTITION=((PRDA,PRDC),(=)),
CPATH=(CSS(0),00),CSYSTEM=CECDD6A,AID=0C,PORT=1,TYPE=CIB
...
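*  CNTLUNIT: control unit FFFD (coupling facility peer) reached over eight channel paths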
CNTLUNIT CUNUMBR=FFFD,PATH=((CSS(0),08,09,0A,0B,0C,0D,0E,0F)),*
UNIT=CFP
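*  IODEVICE: assigns device numbers FF90-FF96 and FF97-FF9D to control unit FFFD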
IODEVICE ADDRESS=(FF90,007),CUNUMBR=(FFFD),UNIT=CFP
IODEVICE ADDRESS=(FF97,007),CUNUMBR=(FFFD),UNIT=CFP
Parallel Access Volumes and Multiple Allegiance
[Diagram: without PAV, one base UCB (UCB 100) allows only one I/O to a volume at a time, so concurrent requests from applications A, B, and C wait on UCB busy. With PAV, an alias (UCB 1FF, alias to UCB 100) lets I/Os from the same system run to the volume in parallel. Multiple Allegiance lets I/Os from different systems to the same volume run concurrently instead of waiting on device busy.]
PAV Performance
[Chart: response time (ms) broken into queue, pend, connect, and disconnect components versus I/O rate, with and without PAV - PAV sustains higher I/O rates at lower response times.]
Hyper PAV
- An alias is taken from and returned to a pool for the LCU, rather than being statically bound to one base device.
- The maximum number of aliases per LCU is defined in HCD.
- An I/O request is queued only when all aliases in the pool for that LCU are exhausted.
- Aliases are kept in the pool for use as needed, as the sketch below illustrates.
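A hedged C sketch of the pool concept only (device numbers and pool size are invented for illustration; this is not z/OS code): aliases are borrowed per I/O and returned afterwards, and a request waits only when the pool is empty.

#include <stdio.h>

#define POOL_SIZE 4

/* Conceptual HyperPAV alias pool for one LCU: alias device numbers
 * are borrowed for the duration of a single I/O to any base device
 * in the LCU, then returned. */
static int alias_pool[POOL_SIZE] = {0x1FC, 0x1FD, 0x1FE, 0x1FF};
static int pool_free = POOL_SIZE;      /* number of free aliases */

/* Borrow an alias for an I/O to 'base'; -1 means the I/O must queue. */
static int borrow_alias(int base) {
    (void)base;                        /* any base in the LCU may borrow */
    if (pool_free == 0)
        return -1;                     /* pool exhausted: request is queued */
    return alias_pool[--pool_free];
}

static void return_alias(int alias) {
    alias_pool[pool_free++] = alias;   /* back into the pool for reuse */
}

int main(void) {
    int a = borrow_alias(0x100);       /* I/O #1 to base device 100 */
    int b = borrow_alias(0x100);       /* I/O #2 to the same base, in parallel */
    printf("I/O 1 uses alias %03X, I/O 2 uses alias %03X\n", a, b);
    return_alias(a);
    return_alias(b);
    return 0;
}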
ESCON Connectivity
ESCON (Enterprise Systems Connection) is a data connection created by IBM, commonly used to connect its mainframe computers to peripheral devices. ESCON replaced the older, slower parallel Bus & Tag channel technology. ESCON channels use a director to support dynamic switching.
ESCON Director
[Diagram: two ESCON Directors (ESCD) switching channels between hosts and control units.]
FICON Connectivity
ESCON vs FICON
ESCON:
- 20 MB/second
- Lots of dead time; one active request at a time
- One target control unit
FICON:
- 800 MB/second
- Uses the FCP standard
- Fiber optic cable (less space under the floor)
- Currently up to 64 simultaneous I/O operations at a time, with up to 64 different control units
- Supports cascading switches
HIGHLIGHTS of z196
z196 Overview
Machine type: 2817
Models: 5 (M15, M32, M49, M66, M80)
Processor units (PUs):
- 20 PU cores per book (24 for the M80)
- Up to 14 SAPs per system, standard
- 2 spares designated per system
- Depending on the hardware model, up to 15, 32, 49, 66, or 80 PU cores are available for characterization: Central Processors (CPs), Integrated Facilities for Linux (IFLs), Internal Coupling Facilities (ICFs), Mainframe Application Assist Processors (zAAPs), Mainframe Integrated Information Processors (zIIPs), and, optionally, additional System Assist Processors (SAPs)
Memory:
- System minimum of 32 GB; up to 768 GB per book
- Up to 3 TB per system and up to 1 TB per LPAR
- Fixed HSA, standard
- 32/64/96/112/128/256 GB increments
I/O:
- Up to 48 I/O interconnects per system at 6 GBps each
- Up to 4 logical channel subsystems (LCSSs)
STP optional (no ETR)
[Photos: z196 and z114 frames.]
Design highlights
- Offer a flexible infrastructure
- Offer state-of-the-art integration capability
- Offer high performance
- Offer high capacity and scalability
- Offer the capability of concurrent upgrades for processors, memory, and I/O connectivity
- Implement a system with high availability and reliability
- Have broad internal and external connectivity offerings
- Provide the highest level of security: every two PUs share a CP Assist for Cryptographic Function (CPACF)
- Be self-managing and self-optimizing (IRD, HiperDispatch)
- Have a balanced system design
Processor Design
CPU (core): cycle time; pipeline and execution order; branch prediction; hardware vs. millicode.
Memory subsystem (nest): high-speed buffers (caches), on chip and on book, private and shared; buses (number, bandwidth); latency (distance; speed of light).
[Diagram: CPUs, each with a private cache, sharing a common cache in front of memory.]
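The cache hierarchy exists because latency grows with distance from the core. A rough, hedged C demonstration of that effect (illustrative on any cached machine, not z196-specific; timings vary): sequential access stays in the private caches and prefetchers, while a large stride defeats them.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* 64 MB: larger than the caches */

int main(void) {
    char *buf = calloc(N, 1);
    if (!buf) return 1;
    long sum = 0;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++)          /* sequential: cache friendly */
        sum += buf[i];
    clock_t t1 = clock();

    for (int s = 0; s < 4096; s++)       /* strided: same work, cache hostile */
        for (int i = s; i < N; i += 4096)
            sum += buf[i];
    clock_t t2 = clock();

    printf("sequential %.2fs, strided %.2fs (sum %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(buf);
    return 0;
}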
[Diagram: the full stack - CPUs with private L1 caches sharing L2 caches and memory; PR/SM dispatches the logical processors (LP1...LPn, LP1...LPm) of multiple z/OS images, and each z/OS dispatches its tasks (TCBa, TCBb, ... TCBx) on those logical processors.]
z196 under the covers, front view
[Photo: internal batteries (optional); power supplies; 2 support elements; service processor; processor books with memory, MBA and HCA cards; InfiniBand I/O interconnects; I/O cage; I/O drawers; 2 cooling units; optional Fiber Quick Connect (FQC) feature for FICON and ESCON.]
z196 water-cooled under the covers (Model M66 or M80), front view
[Photo: internal batteries (optional); power supplies; support elements; I/O cage; I/O drawers; service processor (FSP) cage controller cards; processor books with memory, MBA and HCA cards; InfiniBand I/O interconnects; 2 water cooling units.]
Concurrent Upgrade
[Diagram: upgrade paths from z10 EC into z196 models M15, M32, M49, M66, and M80.]
- z196 to a higher z196 hardware model: upgrade of models M15, M32, M49, and M66 to the M80 is disruptive
- When upgrading to z196, all the books are replaced
- Upgrade from air-cooled to water-cooled is not available
z196 Architecture
Continues the line of upward-compatible Mainframe processors.
z196 Book Concept
Book design
A z196 system has up to four books in a fully connected topology, with up to 80 processor units that can be characterized and up to 3 TB of memory capacity. Memory has up to 12 memory controllers, using 5-channel redundant array of independent memory (RAIM) protection with DIMM bus cyclic redundancy check (CRC) error retry. The 4-level cache hierarchy is implemented with embedded DRAM (eDRAM) caches. Until recently, eDRAM was considered too slow for this kind of use, but a breakthrough in IBM technology has negated that; in addition, eDRAM offers higher density, lower power utilization, fewer soft errors, and better performance. Concurrent maintenance allows dynamic book add and repair.
The z196 server uses 45 nm chip technology, with an advanced low-latency pipeline design leveraging high-speed yet power-efficient circuit designs. The multichip module (MCM) has dense packaging, allowing cooling by modular refrigeration units (MRUs) or, as an option, water cooling. The water cooling option is recommended, as it can lower the total power consumption of the server.
Book Layout
[Diagram: backup air plenum; MCM at 1800 W, refrigeration-cooled or water-cooled, with cooling lines from/to the MRU; 16 DIMMs (100 mm high) and 14 DIMMs (100 mm high) of memory; fanout cards at the front; 3 DCAs at the rear.]
[Photos: book front view showing the fanouts and the MCM; 96 MB SC chip.]
[Diagram: MCM layout - six CP chips (CP0-CP5) and two SC chips (SC0, SC1) interconnected by fabric book connectors (FBCs).]
[Diagram: each PU core has its private L1 caches; each of the six PU chips carries a 24 MB eDRAM inclusive L3 cache; the two SC chips implement a 192 MB eDRAM inclusive L4 cache for the book. Traffic between the levels consists of LRU cast-outs, CP stores, and data fetch returns.]
z196 Processor Unit Design
Capacity growth (minimum to maximum PCI):

System   Engines   Max n-way   Minimum z/OS
z900     20        16-way      z/OS 1.6
z990     48        32-way      z/OS 1.6
z9 EC    64        54-way      z/OS 1.6
z10 EC   77        64-way      z/OS 1.8
z196     96        80-way      z/OS 1.11
Fastest Microprocessor In The Industry (clock speed by generation):

G4 (1997): 300 MHz
G5 (1998): 420 MHz
G6 (1999): 550 MHz
z900 (2000): 770 MHz
z990 (2003): 1.2 GHz
z9 EC (2005): 1.7 GHz
z10 EC (2008): 4.4 GHz
z196 (2010): 5.2 GHz
[Diagram: instructions 1-5 on a timeline - execution periods interrupted by storage-access stalls whenever an L1 miss occurs.]
Superscalar processor
A scalar processor is a processor that is based on a single-issue architecture, which means that only
a single instruction is executed at a time. A superscalar processor allows concurrent execution of
instructions by adding additional resources onto the microprocessor to achieve more parallelism
by creating multiple pipelines, each working on its own set of instructions.
A superscalar processor is based on a multi-issue architecture. In such a processor, where
multiple instructions can be executed at each cycle, a higher level of complexity is reached,
because an operation in one pipeline stage might depend on data in another pipeline stage.
Therefore, a superscalar design demands careful consideration of which instruction sequences
can successfully operate in a long pipeline environment.
On the z196, up to three instructions can be decoded per cycle and up to five
instructions/operations can be executed per cycle. Execution can occur out of (program) order.
If the branch prediction logic of the microprocessor makes a wrong prediction, all instructions in the parallel pipelines may have to be removed. A wrong prediction is therefore especially costly in a high-frequency processor design, which is why a variety of history-based branch prediction mechanisms are used.
The branch target buffer (BTB) runs ahead of instruction cache pre-fetches to prevent branch
misses in an early stage. Furthermore, a branch history table (BHT) in combination with a pattern
history table (PHT) and the use of tagged multi-target prediction technology branch prediction offer
an extremely high branch prediction success rate.
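A hedged illustration of why prediction matters, in C and runnable on any modern CPU (not z196-specific; exact timings depend on the machine and compiler): the same loop runs much faster when its branch is predictable.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* The branch "if (v[i] >= 128)" mispredicts about half the time on
 * random data; after sorting, it almost always goes the same way in
 * long runs, so the predictor nearly always gets it right. */

static int cmp(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

int main(void) {
    enum { N = 1 << 24 };
    unsigned char *v = malloc(N);
    if (!v) return 1;
    for (int i = 0; i < N; i++) v[i] = (unsigned char)(rand() & 0xFF);

    long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)
        if (v[i] >= 128) sum += v[i];   /* random outcome: many mispredicts */
    clock_t t1 = clock();

    qsort(v, N, 1, cmp);                /* make the branch predictable */
    for (int i = 0; i < N; i++)
        if (v[i] >= 128) sum += v[i];   /* same work, far fewer misses */
    clock_t t2 = clock();

    printf("random: %.2fs, sorted: %.2fs (sum %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(v);
    return 0;
}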
Branch prediction
Because of the ultra-high frequency of the PUs, the penalty for a wrongly predicted branch is high. For that reason, a multi-pronged branch prediction strategy, based on gathered branch history combined with several other prediction mechanisms, is implemented on each microprocessor. The branch history table (BHT) implementation on processors has a large
performance improvement effect. Originally introduced on the IBM ES/9000 9021 in 1990,
the BHT has been continuously improved.
The BHT offers significant branch performance benefits. The BHT allows each PU to take
instruction branches based on a stored BHT, which improves processing times for
calculation routines. Besides the BHT, the z196 uses a variety of techniques to improve the
prediction of the correct branch to be executed. The techniques include:
Branch history table (BHT)
Branch target buffer (BTB)
Pattern history table (PHT)
BTB data compression
The success rate of branch prediction contributes significantly to the superscalar aspects of
the z196. This is because the architecture rules prescribe that, for successful parallel
execution of an instruction stream, the correctly predicted result of the branch is essential.
Wild branch
When a bad pointer is used, or when code overlays a data area containing a pointer to code, a random branch is the result, causing a 0C1 or 0C4 abend. Random branches are very hard to diagnose because clues about how the system got there are not evident. With the wild branch hardware facility, the last address from which a successful branch instruction was executed is kept. z/OS uses this information in conjunction with debugging aids, such as the SLIP command, to determine where a wild branch came from, and it might collect data from that storage location. This greatly reduces the debugging steps needed to find where the branch came from. A contrived illustration follows.
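A deliberately broken C sketch of how a wild branch arises (the struct, names, and overlay are invented for illustration; expect this program to crash, which is the point - on z/OS the equivalent surfaces as an 0C1 or 0C4 abend):

#include <stdio.h>
#include <string.h>

/* A data buffer sits next to a function pointer; an overlay of the
 * data area clobbers the pointer, and the next call branches to a
 * garbage address. The wild branch facility records the address of
 * the last successful branch, which is the key diagnostic clue. */

struct handler {
    char name[8];
    void (*fn)(void);          /* pointer to code, adjacent to data */
};

static void expected(void) { puts("expected path"); }

int main(void) {
    struct handler h = { "demo", expected };
    memset(&h, 0x7F, sizeof h); /* deliberate overlay clobbers h.fn */
    h.fn();                     /* wild branch: abends/crashes here */
    return 0;
}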
[Diagram: the co-processor shared by core 0 and core 1 - per-core input/output buffers (IB/OB), TLBs, and 16K caches feeding compression/expansion (Cmpr/Exp) engines, plus crypto cipher and crypto hash engines, attached to the second-level cache.]
[Diagram: MCM with PU chips 0-5 and SC chips 0 and 1. SC chip: 24.427 mm x 19.604 mm, 1.5 billion transistors, 96 MB of L4 cache per SC chip (192 MB per book), with L4 access to/from the other MCMs.]
SC chip (eDRAM shared L4 cache):
- Four 24 MB L4 cache quadrants: 96 MB per SC chip, 192 MB per book
- 24.4 mm x 19.6 mm; chip area 478.8 mm^2; 13 layers of metal
- 1.5 billion transistors; 1 billion eDRAM cells
- 7100 power C4s, 1819 signal C4s
- 6 CP chip interfaces, 3 fabric interfaces, 2 clock domains
[Die photo: L4 controller and bit stacks in the center, flanked by the L4 cache quadrants, fabric I/Os, ETR/TOD and clock circuits, and pervasive logic.]
PU characterization by model:

Model  Books/PUs  CPs    IFLs / uIFLs   zAAPs  zIIPs  ICFs   Std SAPs  Opt SAPs  Std spares
M15    1/20       0-15   0-15 / 0-14    0-7    0-7    0-15   3         0-4       2
M32    2/40       0-32   0-32 / 0-31    0-16   0-16   0-16   6         0-10      2
M49    3/60       0-49   0-49 / 0-48    0-24   0-24   0-16   9         0-15      2
M66    4/80       0-66   0-66 / 0-65    0-33   0-33   0-16   12        0-20      2
M80    4/96       0-80   0-80 / 0-79    0-40   0-40   0-16   14        0-18      2

z196 Models M15 to M66 use books each with a 20-core MCM (two 4-core and four 3-core PU chips). Concurrent Book Add is available to upgrade from model to model (except to the M80). z196 Model M80 has four books, each with a 24-core MCM (six 4-core PU chips). Disruptive upgrade to z196 Model M80 is done by book replacement.
Notes: 1. At least one CP, IFL, or ICF must be purchased in every machine. 2. One zAAP and one zIIP may be purchased for each CP purchased, even if CP capacity is banked. 3. uIFL stands for Unassigned IFL.
z196 Memory
Memory design
The z196 memory design also provides flexibility and high availability.
Memory assignment or allocation is done at
power-on reset (POR) when the system is
initialized. PR/SM is responsible for the
memory assignments.
PR/SM has knowledge of the amount of
purchased memory and how it relates to
the available physical memory in each of
the installed books. PR/SM has control
over all physical memory and therefore is
able to make physical memory available to
the configuration when a book is
nondisruptively added. PR/SM also
controls the reassignment of the content of
a specific physical memory array in one
book to a memory array in another book.
Standard and flexible memory, by model:

Model  Standard Memory (GB)  Flexible Memory (GB)
M15    32 - 704              NA
M32    32 - 1520             32 - 704
M49    32 - 2288             32 - 1520
M66    32 - 3056             32 - 2288
M80    32 - 3056             32 - 2288
z196 Connectivity
[Diagram: books 1-3, each with PUs, memory, and FBC/L4 cache, fully interconnected. HCA2-C fanouts drive first-level copper cables at 6 GBps to IFB-MP cards (paired through the redundant RII interconnect), which feed second-level embedded links at 2 GBps and 1 GBps mSTI to the I/O cards - for example, four-port ESCON cards whose FPGAs attach at 2 GBps mSTI.]
[Diagram: channels, coupling links, and ports - FICON Express8 (2/4/8 Gbps), ISC-3 coupling links, ESCON channels, and OSA-Express3 10 GbE.]
Abbreviation  Full name
AID           Adapter identification
CIB           Coupling using InfiniBand (CHPID type)
HCA           Host Channel Adapter
MBA           Memory Bus Adapter
PSIFB         Parallel Sysplex using InfiniBand
IFB-MP        InfiniBand Multiplexer
STI-MP        Self-Timed Interconnect Multiplexer
Crypto: Crypto Express3, configurable as coprocessor or accelerator.

OSA CHPID types:

CHPID  Purpose / Traffic   Servers                     Operating Systems
OSD    All OSA features    z196, z10, z9, zSeries      z/OS, z/VM, z/VSE, z/TPF, Linux on Mainframe
OSE    1000BASE-T          z196, z10, z9, zSeries      z/OS, z/VM, z/VSE
OSC    1000BASE-T          z196, z10, z9, z990, z890   z/OS, z/VM, z/VSE
OSM    1000BASE-T          z196 exclusive              z/OS, z/VM, Linux on Mainframe*
OSN    GbE, 1000BASE-T     z196, z10, z9 exclusive     z/OS, z/VM, z/VSE, z/TPF, Linux on Mainframe
OSX    10 GbE              z196 exclusive              z/OS, z/VM, Linux on Mainframe*

*IBM is working with its Linux distribution partners to include support in future Linux on Mainframe distribution releases.
[Diagram: FICON Express8 card - channels each with a 2/4/8 Gbps SFP+ optic (LX or SX), an HBA ASIC, an IBM ASIC, and flash, attached through a PCIe switch.]
HiperSockets
[Diagram: HiperSockets internal LANs connect LPARs - z/OS (LP1), Linux (LP2), z/VSE (LP3), and z/VM (LP4 at layer 3, LP5 at layer 2) - with IPv6 support.]
Coupling link options:

Type         Description               Use                     Link data rate        Distance                       z196 max links
IC (ICP)     Internal Coupling         Internal communication  Internal speeds       NA                             32
             Channel
PSIFB (CIB)  12x InfiniBand            z196, z10               6 GBps or 3 GBps      150 meters (492 feet)          32
PSIFB (CIB)  1x InfiniBand             z196, z10               5 Gbps or 2.5 Gbps    10 km unrepeated (6.2 miles),  32
                                                                                     100 km repeated
ISC-3 (CFP)  InterSystem Channel-3     z196, z10, z9           2 Gbps                10 km unrepeated (6.2 miles),  48
                                                                                     100 km repeated

Maximum of 128 coupling CHPIDs per z196.
Coupling fanouts:

Type            Speed          Distance    Fanout                 Cabling
12x InfiniBand  6 or 3 GBps    150 meters  HCA2-O, HCA3-O         50 micron MM (OM3) fiber
1x InfiniBand   5 or 2.5 Gbps  10 km       HCA2-O LR, HCA3-O LR   9 micron SM fiber
[Timeline: cryptographic hardware evolution, 2002-2010 - from OS/390 ICSF through features on the z990/z890, z9 EC/BC, and z10 EC/BC, to Crypto Express3.]
[Diagram: Crypto Express3 card - two coprocessors on PCIe interfaces.]
Common Criteria certifications (z/OS, z/VM, and Linux on Mainframe):
- z/OS 1.10: Common Criteria EAL4+, with cryptography
- Linux on Mainframe: Novell SUSE SLES9 and SLES10 certified at EAL4+ with CAPP; RHEL5 at EAL4+ with CAPP and LSPP
- z196: EAL5+ (logical partitioning)
See: www.ibm.com/security/standards/st_evaluations.shtml
z196 RAS
Sources of Outages (hours/year/system)
[Chart: unscheduled outages, scheduled outages, planned outages, and preplanning requirements, decreasing from pre-z9 servers through z9 EC and z10 EC to z196, together with the impact of each outage.]
Power
- 2x power supplies
- 2x power feeds
- Internal Battery Feature (optional internal battery in case of loss of external power)
Cooling: air- and water-cooling options
Dynamic oscillator switchover
Processors
- Multiprocessors
- Spare PUs
Memory
- RAIM
- Chip sparing
- Error correction and checking
- Enhanced book availability
Improving Reliability
Power & Thermal Optimization and Management
Static Power Save Mode under z/OS control
Smart blower management by sensing altitude and humidity
Enhanced evaporator design to minimize temperature variations
MCM cooling with N+1 design feature
MCM backup cooling with Heat Exchanger
Available air to water heat exchanger for customer water cooling
[Diagram: redundant pairs with failover at every layer - I/O hubs, I/O multiplexers, I/O switches, network switches, control units, end users, ISC links, crypto accelerators, network adapters, and I/O adapters.]
Alternate Path
SAP / CP sparing
SAP Reassignment
I/O Reset & Failover
I/O Mux Reset / Failover
Redundant I/O Adapter
Redundant I/O interconnect
Redundant Network Adapters
Redundant ISC links
Redundant Crypto processors
I/O Switched Fabric
Network Switched/Router Fabric
High Availability Plugging Rules
I/O and coupling fanout
rebalancing on CBA
Channel Initiated Retry
High Data Integrity Infrastructure
I/O Alternate Path
Network Alternate Path
Virtualization Technology
[Diagram: LPAR1-LPARn on two systems reach shared DASD control units through their CSS/CHPIDs and a director (switch).]
BACKUP Slides
[Analogy slides: Maserati MC12 vs. Peterbilt semi; Is this better? Peterbilt semi vs. 10 Ford F-450 one-ton pickups.]
References
Introduction to the New Mainframe: Large-Scale Commercial Computing, SG24-7175-00
IBM Mainframe Strengths and Values, SG24-7333-01
Introduction to the New Mainframe: z/OS Basics, SG24-6366-01
ABCs of z/OS System Programming Volume 1, SG24-6981-01
ABCs of z/OS System Programming Volume 2, SG24-6982-02
ABCs of z/OS System Programming Volume 3, SG24-6983-03
ABCs of z/OS System Programming Volume 4, SG24-6984-00
ABCs of z/OS System Programming Volume 5, SG24-6985-01
ABCs of z/OS System Programming Volume 6, SG24-6986-00
ABCs of z/OS System Programming Volume 7, SG24-6987-01
ABCs of z/OS System Programming Volume 8, SG24-6988-00
ABCs of z/OS System Programming Volume 9, SG24-6989-05
ABCs of z/OS System Programming Volume 10, SG24-6990-03
ABCs of z/OS System Programming Volume 11, SG24-6327-01
ABCs of z/OS System Programming Volume 12, SG24-7621-00
ABCs of z/OS System Programming Volume 13, SG24-7717-00
IBM zEnterprise 196 Technical Guide, SG24-7833-00
IBM zEnterprise System Technical Introduction, SG24-7832-00
IBM zEnterprise 114 Technical Guide, SG24-7954-00
IBM System z Connectivity Handbook, SG24-5444-12
z/Architecture Principles of Operation, SA22-7832-08
z/Architecture Reference Summary, SA22-7871-06