IBM Power S1014, S1022s, S1022, and S1024 Technical Overview and Introduction
Giuliano Anselmi
Young Hoon Cho
Andrew Laidlaw
Armin Röll
Tsvetomir Spasov
Redpaper
IBM Redbooks
August 2022
REDP-5675-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to the IBM Power S1014 (9105-41B), IBM Power S1022s (9105-22B), IBM Power S1022
(9105-22A), and IBM Power S1024 (9105-42A) servers.
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks
The following terms are trademarks or registered trademarks of International Business Machines Corporation, and might also be trademarks or registered trademarks in other countries:
AIX®, C3®, Db2®, DS8000®, Enterprise Storage Server®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Elastic Storage®, IBM FlashSystem®, IBM Security®, IBM Spectrum®, IBM Watson®, Instana®, Interconnect®, Micro-Partitioning®, PIN®, POWER®, Power Architecture®, POWER8®, POWER9™, PowerHA®, PowerPC®, PowerVM®, QRadar®, Redbooks®, Redbooks (logo)®, Storwize®, Turbonomic®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S.
and other countries.
Ansible, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
VMware and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
The goal of this paper is to provide a hardware architecture analysis and highlight the
changes, new technologies, and major features that are introduced in these systems, such as
the following examples:
The latest IBM Power10 processor design, including the Dual Chip Module (DCM) and
Entry Single Chip Module (eSCM) packaging, which is available in various configurations
of 4 - 24 cores per socket.
Native Peripheral Component Interconnect® Express (PCIe) 5th generation (Gen5)
connectivity from the processor socket to deliver higher performance and bandwidth for
connected adapters.
Open Memory Interface (OMI) connected differential DIMM (DDIMM) memory cards that
deliver increased performance, resilience, and security over industry-standard memory
technologies, including the implementation of transparent memory encryption.
Enhanced internal storage performance with the use of native PCIe connected
Non-Volatile Memory Express (NVMe) devices in up to 16 internal storage slots to deliver
up to 102.4 TB of high-performance, low-latency storage in a single, two-socket system.
Consumption-based pricing in the Power Private Cloud with Shared Utility Capacity
commercial model that allows customers to use resources more flexibly and efficiently,
including IBM AIX, IBM i, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and
Red Hat OpenShift Container Platform workloads.
This publication is intended for the following professionals who want to acquire a better
understanding of IBM Power server products:
IBM Power customers
Sales and marketing professionals
Technical support professionals
IBM Business Partners
Independent software vendors (ISVs)
This paper expands the set of IBM Power documentation by providing a desktop reference
that offers a detailed technical description of the Power10 processor-based Scale Out server
models.
Giuliano Anselmi is an IBM® Power Digital Sales Technical Advisor in IBM Digital Sales Dublin. He joined IBM with a focus on Power processor-based technology and has covered several technical roles for almost 20 years. He is an important resource for the mission of his group and serves as a reference for Business Partners and customers.
Young Hoon Cho is an IBM Power Top Gun with the post-sales Technical Support Team for IBM
in Korea. He has over 10 years of experience working on IBM RS/6000, IBM System p, and
Power products. He provides second-line technical support to Field Engineers who are
working on IBM Power and system management.
Andrew Laidlaw is a Senior Power Technical Seller in the United Kingdom. He has 9 years of
experience in the IBM IT Infrastructure team, during which time he worked with the latest
technologies and developments. His areas of expertise include open source technologies, such as Linux and Kubernetes, open source databases, and artificial intelligence frameworks and tooling. His current focus is on the Hybrid Cloud tools and capabilities
that support IBM customers in delivering modernization across their Power estate. He has
presented extensively on all of these topics across the world, including at the IBM Systems
Technical University conferences. He has been an author of many other IBM Redbooks®
publications.
Tsvetomir Spasov is an IBM Power SME at IBM Bulgaria. His main area of expertise is FSP,
eBMC, HMC, POWERLC, and GTMS. He has been with IBM since 2016, providing reactive
break-fix, proactive, preventative, and cognitive support. He has conducted several technical
trainings and workshops.
The project that produced this publication was managed by Scott Vetter, PMP, IBM Poughkeepsie, US.
Thanks to the following people for their contributions to this project:
Ryan Achilles, Brian Allison, Ron Arroyo, Joanna Bartz, Bart Blaner, Gareth Coates,
Arnold Flores, Austin Fowler, George Gaylord, Douglas Gibbs, Nigel Griffiths,
Daniel Henderson, Markesha L Hill, Stephanie Jensen, Kyle Keaty,
Rajaram B Krishnamurthy, Charles Marino, Michael Mueller, Vincent Mulligan,
Hariganesh Muralidharan, Kaveh Naderi, Mark Nellen, Brandon Pederson,
Michael Quaranta, Hassan Rahimi, Ian Robinson, Todd Rosedahl, Bruno Spruth,
Nicole Schwartz, Bill Starke, Brian W. Thompto, Madhavi Valluri, Jacobo Vargas,
Madeline Vega, Russ Young
IBM
A special thanks to John Banchy for his relentless support of IBM Redbooks and his
contributions and corrections to them over the years.
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an IBM Redbooks residency project and help write a book
in your area of expertise, while honing your experience using leading-edge technologies. Your
efforts will help to increase product acceptance and customer satisfaction, as you expand
your network of technical contacts and relationships. Residencies run from two to six weeks
in length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
The inclusion of PCIe Gen5 interconnects allows for high data transfer rates to provide higher
I/O performance or consolidation of the I/O demands of the system to fewer adapters running
at higher rates. This situation can result in greater system performance at a lower cost,
particularly when I/O demands are high.
The Power S1022s and S1022 servers deliver the performance of the Power10 processor
technology in a dense 2U (EIA units), rack-optimized form factor that is ideal for consolidating
multiple workloads with security and reliability. These systems are ready for hybrid cloud
deployment, with enterprise-grade virtualization capabilities built into the system firmware
with the PowerVM hypervisor.
Figure 1-1 shows the Power S1022 server. The S1022s chassis is physically the same as the
S1022 server.
The Power S1014 server provides a powerful single-socket server that can be delivered in a
4U (EIA units) rack-mount form factor or as a desk-side tower model. It is ideally suited to the
modernization of IBM i, AIX, and Linux workloads to allow them to benefit from the
performance, security, and efficiency of the Power10 processor technology. This server easily
integrates into an organization’s cloud and cognitive strategy and delivers industry-leading
price and performance for your mission-critical workloads.
The Power S1024 server is a powerful one- or two-socket server that includes up to 48
Power10 processor cores in a 4U (EIA units) rack-optimized form factor that is ideal for
consolidating multiple workloads with security and reliability. With the inclusion of PCIe Gen5
connectivity and PCIe attached NVMe storage, this server maximizes the throughput of data
across multiple workloads to meet the requirements of modern hybrid cloud environments.
Figure 1-2 shows the Power S1024 server.
This system is available in a rack-mount (4U EIA units) form factor, or as a desk-side tower
configuration, which offers flexibility in deployment models.
The Power S1014 server includes eight Differential DIMM (DDIMM) memory slots, each of
which can be populated with a DDIMM that is connected by using the new Open Memory
Interface (OMI). These DDIMMs incorporate DDR4 memory chips while delivering increased
memory bandwidth of up to 204 GBps peak transfer rates.
They also support transparent memory encryption to provide increased data security with no
management setup and no performance impact. The system supports up to 1 TB memory
capacity with the 8-core processor installed, with a minimum requirement of 32 GB memory
installed. The maximum memory capacity with the 4-core processor installed is 64 GB.
Note: The 128 GB DDIMMs will be made available on 18 November 2022; until that date,
the maximum memory capacity of an 8-core S1014 server is 512 GB.
The Power S1014 server includes five usable PCIe adapter slots, four of which support PCIe
Gen5 adapters, while the fifth is a PCIe Gen4 adapter slot. These slots can be populated with
a range of adapters that cover LAN, Fibre Channel, SAS, USB, and cryptographic
accelerators. At least one network adapter must be included in each system. The 8-core
model can deliver more PCIe adapter slots through the addition of a PCIe Expansion drawer
(#EMX0) for a maximum of 10 PCIe adapter slots.
Note: The 4-core Power S1014 model does not support the connection of PCIe expansion
or storage expansion drawers.
The Power S1014 server includes PowerVM Enterprise Edition to deliver virtualized
environments and to support a frictionless hybrid cloud experience. Workloads can run the
AIX, IBM i, and Linux operating systems, including Red Hat OpenShift Container Platform.
This system is a rack-mount (2U EIA units) form factor with an increased depth over previous
2U Power servers. A rack extension is recommended when installing the Power S1022s
server into an IBM Enterprise S42 rack.
The Power S1022s server includes 16 DDIMM memory slots, of which eight are usable when
only one processor socket is populated. Each of the memory slots can be populated with a
DDIMM that is connected by using the new OMI.
These DDIMMs incorporate DDR4 memory chips while delivering increased memory
bandwidth of up to 409 GBps peak transfer rates per socket. They also support transparent
memory encryption to provide increased data security with no management setup and no
performance impact.
The system supports up to 2 TB memory capacity with both sockets populated, with a
minimum requirement of 32 GB installed per socket.
Note: The 128 GB DDIMMs will be made available on 18 November 2022; until that date,
the maximum memory capacity of an S1022s server is 1 TB.
The Power S1022s server includes 10 usable PCIe adapter slots, of which five are usable
when only one processor socket is populated. Eight of the PCIe adapter slots support PCIe
Gen5 adapters, while the remaining two (one per socket) are PCIe Gen4 adapter slots. These
slots can be populated with a range of adapters covering LAN, Fibre Channel, SAS, USB, and
cryptographic accelerators. At least one network adapter must be included in each system.
A system with one socket that is populated can deliver more PCIe adapter slots through the
addition of a PCIe expansion drawer (#EMX0) for a maximum of 10 PCIe adapter slots. A
system with two sockets that are populated can support up to 30 PCIe adapters with the
addition of PCIe expansion drawers.
Internal storage for the Power S1022s is exclusively NVMe-based, which connects directly
into the system PCIe lanes to deliver high performance and efficiency. A maximum of eight
U.2 form-factor NVMe devices can be installed, which offers a maximum storage capacity of
51.2 TB in a single server. More HDD or SSD storage can be connected to the 8-core system
by way of SAS expansion drawers (the EXP24SX) or Fibre Channel connectivity to an
external storage array.
The Power S1022s server includes PowerVM Enterprise Edition to deliver virtualized
environments and to support a frictionless hybrid cloud experience. Workloads can run the
AIX, IBM i, and Linux operating systems, including Red Hat OpenShift Container Platform.
Multiple IBM i partitions are supported to run on the Power S1022s server with the 8-core
processor feature, but each partition is limited to a maximum of four cores. These partitions
must use virtual I/O connections, and at least one VIOS partition is required. These partitions
can be run on systems that also run workloads that are based on the AIX and Linux operating
systems.
Note: The IBM i operating system is not supported on the Power S1022s model with
four-core processor option.
All processor cores can run up to eight simultaneous threads to deliver greater throughput.
When two sockets are populated, both must be the same processor model.
The Power S1022 supports Capacity Upgrade on Demand, where processor activations can
be purchased when they are required by workloads. A minimum of 50% of the installed
processor cores must be activated and available for use, with activations for the other
installed processor cores available to purchase as part of the initial order or as a future
upgrade. Static activations are linked only to the system for which they are purchased.
The Power S1022 server also can be purchased as part of a Power Private Cloud with
Shared Utility Capacity pool. In this case, the system can be purchased with one or more
base processor activations, which are shared within the pool of systems. More base
processor activations can be added to the pool in the future. A system with static activations
can be converted to become part of a Power Private Cloud with Shared Utility Capacity pool.
This system is a rack-mount (2U EIA units) form factor with an increased depth over previous
2U Power servers. A rack extension is recommended when installing the Power S1022 server
into an IBM Enterprise S42 rack.
The Power S1022 server includes 32 DDIMM memory slots, of which 16 are usable when
only one processor socket is populated.
The system supports up to 4 TB memory capacity with both sockets populated, with a
minimum requirement of 32 GB installed per socket.
Note: The 128 GB DDIMMs will be made available on 18 November 2022; until that date,
the maximum memory capacity of an S1022 server is 2 TB.
The Power S1022 server includes 10 usable PCIe adapter slots, of which five are usable
when only one processor socket is populated. Eight of the PCIe adapter slots support PCIe
Gen5 adapters, while the remaining two (one per socket) are PCIe Gen4 adapter slots. These
slots can be populated with a range of adapters that covers LAN, Fibre Channel, SAS, USB,
and cryptographic accelerators. At least one network adapter must be included in each
system.
A system with one socket that is populated can deliver more PCIe adapter slots through the
addition of a PCIe expansion drawer (#EMX0) for a maximum of 10 PCIe adapter slots. A
system with two sockets that are populated can deliver up to 30 PCIe adapters with the
addition of PCIe expansion drawers.
Internal storage for the Power S1022 is exclusively NVMe based, which connects directly into
the system PCIe lanes to deliver high performance and efficiency. A maximum of eight U.2
form-factor NVMe devices can be installed, which offers a maximum storage capacity of
51.2 TB in a single server. More HDD or SSD storage can be connected to the system by
using SAS expansion drawers (the EXP24SX) or Fibre Channel connectivity to an external
storage array.
The Power S1022 server includes PowerVM Enterprise Edition to deliver virtualized
environments and to support a frictionless hybrid cloud experience. Workloads can run the
AIX, IBM i, and Linux operating systems, including Red Hat OpenShift Container Platform.
Multiple IBM i partitions are supported to run on the Power S1022 server, but each partition is
limited to a maximum of four cores. These partitions must use virtual I/O connections, and at
least one VIOS partition is required. These partitions can be run on systems that also run
workloads that are based on the AIX and Linux operating systems.
All processor cores can run up to eight simultaneous threads to deliver greater throughput.
When two sockets are populated, both must be the same processor model. A maximum of 48
Power10 cores are supported in a single system, which delivers up to 384 simultaneous
workload threads.
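As an illustration of this simultaneous multithreading capability (a sketch that is not part of the original configuration steps), the SMT mode of an AIX partition can be displayed and changed by using the smtctl command:
smtctl              # display the current SMT mode and the logical processors per core
smtctl -t 8 -w now  # switch the partition to SMT8 dynamically (use -w boot to apply at the next restart)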
The Power S1024 supports Capacity Upgrade on Demand, where processor activations can
be purchased when they are required by workloads. A minimum of 50% of the installed
processor cores must be activated and available for use, with activations for the other
installed processor cores available to purchase as part of the initial order or as a future
upgrade. These static activations are linked only to the system for which they are purchased.
The Power S1024 server also can be purchased as part of a Power Private Cloud with
Shared Utility Capacity pool. In this case, the system can be purchased with one or more
base processor activations that are shared within the pool of systems. More base processor
activations can be added to the pool in the future. It is possible to convert a system with static
activations to become part of a Power Private Cloud with Shared Utility Capacity pool.
The Power S1024 server includes 32 DDIMM memory slots, of which 16 are usable when
only one processor socket is populated. Each of the memory slots can be populated with a
DDIMM that is connected by using the new OMI. These DDIMMs incorporate DDR4 memory
chips while delivering increased memory bandwidth of up to 409 GBps peak transfer rates
per socket.
They also support transparent memory encryption to provide increased data security with no
management setup and no performance impact. The system supports up to 8 TB memory
capacity with both sockets populated, with a minimum requirement of 32 GB installed per
socket.
Note: The 128 GB and 256 GB DDIMMs will be made available in November 2022; until that date, the maximum memory capacity of an S1024 server is 2 TB.
The Power S1024 server includes 10 usable PCIe adapter slots, of which five are usable
when only one processor socket is populated. Eight of the PCIe adapter slots support PCIe
Gen5 adapters, while the remaining two (one per socket) are PCIe Gen4 adapter slots. These
slots can be populated with a range of adapters that covers LAN, Fibre Channel, SAS, USB,
and cryptographic accelerators. At least one network adapter must be included in each
system.
A system with one socket that is populated can deliver more PCIe adapter slots through the
addition of a PCIe expansion drawer (#EMX0) for a maximum of 10 PCIe adapter slots. A
system with two sockets that are populated can support up to 30 PCIe adapters with the
addition of PCIe expansion drawers.
Internal storage for the Power S1024 is exclusively NVMe-based, which connects directly into
the system PCIe lanes to deliver high performance and efficiency. A maximum of eight U.2
form-factor NVMe devices can be installed, which offers a maximum storage capacity of
102.4 TB in a single server. HDD or SSD storage can be connected to the system by using
SAS expansion drawers (the EXP24SX) or Fibre Channel connectivity to an external storage
array.
The Power S1024 server includes PowerVM Enterprise Edition to deliver virtualized
environments and support a frictionless hybrid cloud experience. Workloads can run the AIX,
IBM i, and Linux operating systems, including Red Hat OpenShift Container Platform.
Table 1-1 lists the electrical characteristics of the Power S1014, S1022s, S1022, and S1024
servers.
Table 1-1 Electrical characteristics for Power S1014, S1022s, S1022, and S1024 servers
Power S1014 server:
– Operating voltage: 1200 W power supply, 100 - 127 V AC or 200 - 240 V AC; or 1600 W power supply, 200 - 240 V AC
– Thermal output: 3668 Btu/hour (maximum)
– Power consumption: 1075 watts (maximum)
– Power-source loading: 1.105 kVA (maximum configuration)
Power S1022s server:
– Operating voltage: 1000 W power supply, 100 - 127 V AC or 200 - 240 V AC
– Thermal output: 7643 Btu/hour (maximum)
– Power consumption: 2240 watts (maximum)
– Power-source loading: 2.31 kVA (maximum configuration)
Power S1022 server:
– Operating voltage: 2000 W power supply, 200 - 240 V AC
– Thermal output: 7643 Btu/hour (maximum)
– Power consumption: 2240 watts (maximum)
– Power-source loading: 2.31 kVA (maximum configuration)
Power S1024 server:
– Operating voltage: 1600 W power supply, 200 - 240 V AC
– Thermal output: 9383 Btu/hour (maximum)
– Power consumption: 2750 watts (maximum)
– Power-source loading: 2.835 kVA (maximum configuration)
Note: The maximum measured value is the worst-case power consumption that is
expected from a fully populated server under an intensive workload. The maximum
measured value also accounts for component tolerance and nonideal operating conditions.
Power consumption and heat load vary greatly by server configuration and utilization. The
IBM Systems Energy Estimator can be used to obtain a heat output estimate that is based
on a specific configuration.
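As a cross-check of the values that are listed in Table 1-1, the maximum thermal output is the maximum power consumption converted from watts to Btu per hour (1 watt is approximately 3.412 Btu/hour). For example, for the Power S1024 server:
2750 watts x 3.412 Btu/hour per watt ≈ 9383 Btu/hour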
Table 1-2 lists the environment requirements for the Power10 processor-based Scale Out
servers.
Table 1-2 Environment requirements (recommended operating, allowable operating, and nonoperating) for Power S1014, S1022s, S1022, and S1024
Table 1-3 Noise emissions (declared A-weighted sound power level, LWAd, in B, and declared A-weighted sound pressure level, LpAm, in dB) for Power S1014, S1022s, S1022, and S1024
Important: NVMe PCIe adapters (EC5G, EC5B, EC5C, EC5D, EC5E, EC5F, EC6U, EC6V,
EC6W, EC6X, EC6Y, and EC6Z) require higher fan speeds to compensate for their higher
thermal output. This issue might affect the acoustic output of the server, depending on the
configuration. The e-config tool can be used to ensure a suitable configuration (login required).
NVMe U.2 drives (ES1G, EC5V, ES1H, and EC5W) also require more cooling to
compensate for their higher thermal output, which might increase the noise emissions of
the servers.
Figure 1-3 shows the front view of the Power S1022 server.
Table 1-6 lists the physical dimensions of the rack-mounted Power S1014 and Power S1024
chassis. Each chassis takes 4U of rack space; the Power S1014 is also available as a desk-side tower model.
Table 1-6 Physical dimensions of the rack-mounted Power S1014 and Power S1024 chassis
Dimension Power S1014 server (9105-41B) Power S1024 server (9105-42A)
Figure 1-4 shows the front view of the Power S1024 server.
PCIe slots with two processor sockets populated:
– Two x16 Gen4 or x8 Gen5 half-height, half-length slots (CAPI)
– Two x16 Gen4 or x8 Gen5 half-height, half-length slots
– Two x8 Gen5 half-height, half-length slots (with x16 connectors) (CAPI)
– Two x8 Gen5 half-height, half-length slots (with x16 connectors)
– Two x8 Gen4 half-height, half-length slots (with x16 connectors) (CAPI)
Up to two storage backplanes each with four NVMe U.2 drive slots: Up to eight NVMe U.2 cards (800 GB, 1.6 TB, 3.2 TB, and 6.4 TB)
Integrated:
– Baseboard management/service processor
– EnergyScale technology
– Hot-swap and redundant cooling
– Redundant hot-swap AC power supplies
– One front and two rear USB 3.0 ports
– Two 1 GbE RJ45 ports for HMC connection
– One system port with RJ45 connector
– 19-inch rack-mounting hardware (2U)
Optional PCIe I/O expansion drawer with PCIe slots on eight-core model only:
– Up to two PCIe Gen3 I/O Expansion Drawers
– Each I/O drawer holds up to two 6-slot PCIe fan-out modules
– Each fanout module attaches to the system node through a PCIe optical cable adapter
Active Memory Mirroring for Hypervisor is available as an option to enhance resilience by mirroring critical memory that is used by the PowerVM hypervisor.
PCIe slots with a single processor socket populated:
– One x16 Gen4 or x8 Gen5 full-height, half-length slot (CAPI)
– Two x8 Gen5 full-height, half-length slots (with x16 connector) (CAPI)
– One x8 Gen5 full-height, half-length slot (with x16 connector)
– One x8 Gen4 full-height, half-length slot (with x16 connector) (CAPI)
PCIe slots with two processor sockets populated:
– Two x16 Gen4 or x8 Gen5 full-height, half-length slots (CAPI)
– Two x16 Gen4 or x8 Gen5 full-height, half-length slots
– Two x8 Gen5 full-height, half-length slots (with x16 connectors) (CAPI)
– Two x8 Gen5 full-height, half-length slots (with x16 connectors)
– Two x8 Gen4 full-height, half-length slots (with x16 connectors) (CAPI)
Up to two storage backplanes each with eight NVMe U.2 drive slots:
– Up to 16 NVMe U.2 cards (800 GB, 1.6 TB, 3.2 TB, and 6.4 TB)
– Optional internal RDX drive
Integrated:
– Baseboard management/service processor
– EnergyScale technology
– Hot-swap and redundant cooling
– Redundant hot-swap AC power supplies
– One front and two rear USB 3.0 ports
– Two 1 GbE RJ45 ports for HMC connection
– One system port with RJ45 connector
– 19-inch rack-mounting hardware (2U)
Optional PCIe I/O expansion drawer with PCIe slots:
– Up to two PCIe Gen3 I/O Expansion Drawers
– Each I/O drawer holds up to two 6-slot PCIe fan-out modules
– Each fanout module attaches to the system node through a PCIe optical cable adapter
The minimum initial order also must include one of the following memory options and one of
the following power supply options:
Memory options:
– One processor module: Minimum of two DDIMMs (one memory feature)
– Two processor modules: Minimum of four DDIMMs (two memory features)
Note: The 128 GB and 256 GB DDIMMs will be made available on 18 November 2022.
These internal PCIe adapter slots support a range of different adapters. For more information about the available adapters, see 3.4, “Peripheral Component Interconnect adapters” on page 119.
The adapter slots are a mix of PCIe Gen5 and PCIe Gen4 slots, with some running at x8 speed and others at x16. Some of the PCIe adapter slots also support OpenCAPI functions when used with OpenCAPI-enabled adapter cards. All PCIe adapter slots support hot-plug capability when used with Hardware Management Console (HMC) or eBMC-based maintenance procedures.
Two other slots are available in the rear of each server. One of these slots is dedicated to the
eBMC management controller for the system, and the other is a dedicated slot for OpenCAPI
connected devices. These slots cannot be used for any other PCIe adapter type.
Each system requires at least one LAN adapter to support connection to local networks. This
requirement allows for initial system testing and configuration, and the preinstallation of any
operating systems, if required. By default, this adapter is the #5899 in the S1014 server, the
#EC2T in the S1022s or S1022 servers, or the #EC2U in the S1024 server. Alternative LAN
adapters can be installed instead. This required network adapter is installed by default in slot
C10.
Table 1-7 lists the adapter slots that are available in the Power10 processor-based Scale Out
servers in various configurations.
Table 1-7 PCIe slot details for Power S1014, S1022s, S1022, and S1024 servers
Adapter slot Type Sockets populated OpenCAPI enabled
Figure 1-5 PCIe adapter slot locations on the Power S1014 and S1024 server models
The Power S1022s and S1022 servers are 2U (EIA units), and support the installation of
low-profile PCIe adapters. Figure 1-6 shows the PCIe adapter slot locations for the
Power S1022s and S1022 server models.
Figure 1-6 PCIe adapter slot locations on the Power S1022s and S1022 server models
The total number of PCIe adapter slots available can be increased by adding PCIe Gen3 I/O
expansion drawers. With one processor socket populated (except the S1014 four core
option), one I/O expansion drawer that is installed with one fan-out module is supported.
When two processor sockets are populated, up to two I/O expansion drawers with up to four
fan-out modules are supported. The connection of each fan-out module in a PCIe Gen3 expansion drawer requires the installation of a PCIe optical cable adapter in one of the internal PCIe x16 adapter slots (C0, C3, C4, or C10).
For more information about the connectivity of the internal I/O bus and the PCIe adapter slots,
see 2.4, “Internal I/O subsystem” on page 92.
1.8 Operating system support
The Power10 processor-based Scale Out servers support the following families of operating
systems:
AIX
IBM i
Linux
In addition, the Virtual I/O Server (VIOS) can be installed in special partitions that provide
virtualization of I/O capabilities, such as network and storage connectivity. Multiple VIOS
partitions can be installed to provide support and services to other partitions running AIX,
IBM i, or Linux, such as virtualized devices and Live Partition Mobility capabilities.
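As an illustrative sketch (the commands are standard VIOS commands, but the mappings that are shown depend entirely on the configuration), the virtual devices that a VIOS partition provides to its client partitions can be listed from the VIOS command line:
lsmap -all        # list virtual SCSI server adapters and their backing devices
lsmap -all -npiv  # list virtual Fibre Channel (NPIV) mappings
lsmap -all -net   # list Shared Ethernet Adapter mappings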
For more information about the Operating System and other software that is available on
Power, see this IBM Infrastructure web page.
The minimum supported levels of IBM AIX, IBM i, and Linux at the time of announcement are
described in the following sections. For more information about hardware features and
Operating System level support, see this IBM Support web page.
This tool helps to plan a successful system upgrade by providing the prerequisite information
for features that are in use or planned to be added to a system. A machine type and model
can be selected and the prerequisites, supported operating system levels and other
information can be determined.
The machine types and models for the Power10 processor-based Scale Out systems are listed in Table 1-8.
Table 1-8 Machine types and models of S1014, S1022s, S1022, and S1024 server models
Server name Machine type and model
S1014 9105-41B
S1022s 9105-22B
S1022 9105-22A
S1024 9105-42A
1.8.1 AIX operating system
At announcement, the Power S1014, S1022s, S1022, and S1024 servers support the following minimum levels of the AIX operating system when installed by using direct I/O connectivity (a command to verify the installed level is shown after the list):
AIX Version 7.3 with Technology Level 7300-00 and Service Pack 7300-00-02-2220
AIX Version 7.2 with Technology Level 7200-05 and Service Pack 7200-05-04-2220
AIX Version 7.2 with Technology Level 7200-04 and Service Pack 7200-04-06-2220
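For example, the installed AIX level can be compared against these minimum levels by running the following command on a partition (the output uses the same Technology Level and service pack format, such as 7200-05-04-2220):
oslevel -s   # display the highest service pack that is fully installed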
IBM periodically releases maintenance packages (service packs or technology levels) for the
AIX operating system. For more information about these packages, downloading, and
obtaining the installation packages, see this IBM Support Fix Central web page.
For more information about hardware features compatibility and the corresponding AIX
Technology Levels, see this IBM Support web page.
The Service Update Management Assistant (SUMA), which can help you automate the task of checking for and downloading operating system updates, is part of the base operating system. For more information about the suma command, see this IBM Documentation web page.
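For example, the following suma invocations are a minimal sketch of previewing and then downloading the latest available fixes; check the command documentation for the options that apply to your environment:
suma -x -a RqType=Latest -a Action=Preview    # list the latest available updates without downloading them
suma -x -a RqType=Latest -a Action=Download   # download the latest available updates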
The AIX Operating System can be licensed by using different methods, including the following
examples:
Stand-alone as AIX Standard Edition
With other software tools as part of AIX Enterprise Edition
As part of the IBM Power Private Cloud Edition software bundle
Customers are licensed to run the product through the expiration date of the 1- or 3-year subscription term. Then, they can renew at the end of the subscription term to continue to use the product. This model provides flexible and predictable pricing over a specific term, with lower up-front costs of acquisition.
Another benefit of this model is that the licenses are customer-number entitled, which means
they are not tied to a specific hardware serial number as with a standard license grant.
Therefore, the licenses can be moved between on-premises and cloud if needed, something
that is becoming more of a requirement with hybrid workloads.
The subscription licenses are orderable through IBM configurator. The standard AIX license
grant and monthly term licenses for standard edition are still available.
1.8.2 IBM i operating system
At announcement, the Power S1014, S1022s, S1022, and S1024 servers support the
following minimum levels of IBM i:
IBM i 7.5
IBM i 7.4 Technology Release 6 or later
IBM i 7.3 Technology Release 12 or later
Some limitations exist when running the IBM i operating system on the Power S1022s or
Power S1022 servers. Virtual I/O by way of VIOS is required, and partitions must be set to
“restricted I/O” mode.
The maximum size of the partition also is limited. Up to four cores (real or virtual) per IBM i
partition are supported. Multiple IBM i partitions can be created and run concurrently, and
individual partitions can have up to four cores.
Note: The IBM i operating system is not supported on the Power S1022s model with the
four-core processor option.
IBM periodically releases maintenance packages (service packs or technology releases) for
the IBM i. For more information about these packages, downloading, and obtaining the
installation packages, see this IBM Support Fix Central web page.
For more information about hardware feature compatibility and the corresponding IBM i
Technology Releases, see this IBM Support web page.
IBM i license terms and conditions require that IBM i operating system license entitlements
remain with the machine for which they were originally purchased. Under qualifying
conditions, IBM allows the transfer of IBM i processor and user entitlements from one
machine to another. This capability helps facilitate machine replacement, server
consolidation, and load rebalancing while protecting a customer’s investment in IBM i
software.
When requirements are met, IBM i license transfer can be configured by using IBM
configurator tools.
Having the IBM i entitlement, keys, and support entitlement on a virtual serial number (VSN) provides the flexibility to move the partition to a different Power machine without transferring the entitlement.
Note: VSNs can be ordered in specific countries. For more information, see the local
announcement letters.
With VSNs, each partition can have its own serial number that is not tied to the hardware
serial number. If VSNs are not used, an IBM i partition still defaults to the use of the physical
host serial number.
In the first phase of VSN deployment, only one partition can use a single VSN at any time;
therefore, multiple IBM i LPARs cannot use the same VSN. In the first phase, VSNs are not
supported within Power Private Cloud (Power Enterprise Pools 2.0) environments.
VSNs are supported for partitions that are running any version of IBM i that is supported on
the Power10 processor-based Scale Out servers, although some other PTFs might be
required.
IBM i software tiers
The IBM i operating system is licensed per processor core that is used by the operating
system, and by the number of users that are interacting with the system. Different licensing
requirements depend on the capability of the server model and processor performance.
These systems are designated with a software tier that determines the licensing that is
required for workloads running on each server, as listed in Table 1-9.
Table 1-9 IBM i software tiers for the Power S1014, S1022s, S1022, and S1024
Server model Processor IBM i software tier
1.8.3 Linux distributions
The Linux distributions that are described next are supported on the Power S1014, S1022s, S1022, and S1024 server models. Other distributions, including open source releases, can run on these servers, but do not include any formal enterprise-grade support.
At announcement, the Power S1014, S1022s, S1022, and S1024 servers support the
following minimum levels of the Red Hat Enterprise Linux operating system:
Red Hat Enterprise Linux 8.4 for Power LE, or later
Red Hat Enterprise Linux for SAP with Red Hat Enterprise Linux 8.4 for Power LE, or later
Red Hat Enterprise Linux 9.0 for Power LE, or later
Red Hat Enterprise Linux is sold on a subscription basis, with initial subscriptions and support
available for one, three, or five years. Support is available directly from Red Hat or IBM
Technical Support Services.
When you order RHEL from IBM, a subscription activation code is automatically published in the IBM Entitled Systems Support (ESS) system. After retrieving this code from ESS, you use it to establish proof of entitlement and download the software from Red Hat.
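To confirm that an installed Linux partition meets these minimum levels, the release and architecture can be checked from the running system; the following distribution-neutral commands are an illustrative sketch:
cat /etc/os-release   # shows the distribution name and version
uname -m              # shows the architecture; ppc64le is expected for the Power LE distributions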
At announcement, the Power S1014, S1022s, S1022, and S1024 servers support the
following minimum levels of the SUSE Linux Enterprise Server operating system:
SUSE Linux Enterprise Server 15 Service Pack 3, or later
SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 15 Service
Pack 3, or later
SUSE Linux Enterprise Server is sold on a subscription basis, with initial subscriptions and
support available for one, three, or five years. Support is available directly from SUSE or from
IBM Technical Support Services.
SUSE Linux Enterprise Server 15 subscriptions cover up to one socket or one LPAR, and can
be stacked to cover a larger number of sockets or LPARs.
When you order SLES from IBM, a subscription activation code is automatically published in the IBM Entitled Systems Support (ESS) system. You use this code to establish proof of entitlement and download the software from SUSE.
One specific benefit of Power10 technology is a 10x to 20x advantage over Power9
processor-based technology for AI inferencing workloads because of increased memory
bandwidth and new instructions. One example is the new purpose-built matrix math accelerator (MMA), which is tailored for the demands of machine learning and deep learning inference and supports many AI data types.
Network virtualization is an area with significant evolution and improvements, which benefit
virtual and containerized environments. The following recent improvements were made for
Linux networking features on Power10 processor-based servers:
SR-IOV allows virtualization of network adapters at the controller level without the need to
create virtual Shared Ethernet Adapters in the VIOS partition. It is enhanced with virtual
Network Interface Controller (vNIC), which allows data to be transferred directly from the
partitions to or from the SR-IOV physical adapter without transiting through a VIOS
partition.
Hybrid Network Virtualization (HNV) allows Linux partitions to use the efficiency and
performance benefits of SR-IOV logical ports and participate in mobility operations, such
as active and inactive Live Partition Mobility (LPM) and Simplified Remote Restart (SRR).
HNV is enabled by selecting a new migratable option when an SR-IOV logical port is
configured.
Security
Security is a top priority for IBM and our distribution partners. Linux security on IBM Power is
a vast topic; however, improvements in the areas of hardening, integrity protection, performance, platform security, and certifications are introduced in this section.
Hardening and integrity protection deal with protecting the Linux kernel from unauthorized tampering while still allowing the kernel to be upgraded and serviced. These topics become even more important when a containerized environment is run with an immutable operating system, such as RHEL CoreOS, as the underlying operating system for the Red Hat OpenShift Container Platform.
A Red Hat OpenShift Container Platform cluster consists of several nodes, which can run on
physical or virtual machines. A minimum of three control plane nodes are required to support
the cluster management functions. At least two compute nodes are required to provide the
capacity to run workloads. During installation, another bootstrap node is required to host the
files that are required for installation and initial setup.
The bootstrap and control plane nodes are all based on the Red Hat Enterprise Linux CoreOS operating system, which is a minimal, immutable container host version of the Red Hat Enterprise Linux distribution and inherits the associated hardware support statements. The compute nodes can run on Red Hat Enterprise Linux or RHEL CoreOS.
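As an illustrative check after a cluster is installed (not a step from this paper), the node roles and versions can be listed with the OpenShift CLI:
oc get nodes -o wide   # lists the control plane and compute nodes, their status, roles, and OS image
oc version             # shows the client and cluster versions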
The Red Hat OpenShift Container Platform is available on a subscription basis, with initial
subscriptions and support available for one, three, or five years. Support is available directly
from Red Hat or from IBM Technical Support Services. Red Hat OpenShift Container Platform
subscriptions cover two processor cores each, and can be stacked to cover many cores. Only
the compute nodes require subscription coverage.
At announcement, the Power S1014, S1022s, S1022, and S1024 servers support the
following minimum levels of the operating systems that are supported for Red Hat OpenShift
Container Platform:
Red Hat Enterprise Linux CoreOS 4.9 for Power LE, or later
Red Hat Enterprise Linux 8.4 for Power LE, or later
Red Hat OpenShift Container Platform 4.9 for IBM Power is the minimum level of the Red Hat
OpenShift Container Platform on the Power10 processor-based Scale Out servers.
When you order Red Hat OpenShift Container Platform from IBM, a subscription activation code is automatically published in the IBM Entitled Systems Support (ESS) system. You use this code to establish proof of entitlement and download the software from Red Hat.
IBM regularly updates the VIOS code. For more information, see this IBM Fix Central web
page.
When system firmware updates are applied to the system, the update access key (UAK) and its expiration date are checked. System firmware updates include a release date. If the release date for the firmware updates is past the expiration date of the UAK when you attempt to apply system firmware updates, the updates are not processed.
As update access keys expire, they must be replaced by using the Hardware Management
Console (HMC) or the ASMI on the eBMC.
By default, newly delivered systems include a UAK that expires after three years. Thereafter, the UAK can be extended every six months, but only if a current hardware maintenance contract exists for that server. The contract can be verified on the IBM Entitled Systems Support (ESS) web page.
Checking the validity and expiration date of the current UAK can be done through the HMC or
eBMC graphical interfaces or command-line interfaces. However, the expiration date also can
be displayed by using the suitable AIX or IBM i command.
UAK expiration date by using AIX 7.1
In the case of AIX 7.1, use the following command:
lscfg -vpl sysplanar0 | grep -p "System Firmware"
The output is similar to the output that is shown in Example 1-1 (the Microcode Entitlement
Date represents the UAK expiration date).
Example 1-1 Output of the command to check UAK expiration date by way of AIX 7.1
$ lscfg -vpl sysplanar0 | grep -p "System Firmware"
System Firmware:
...
Microcode Image.............NL1020_035 NL1020_033 NL1020_035
Microcode Level.............FW1020.00 FW1020.00 FW1020.00
Microcode Build Date........20220527 20220527 20220527
Microcode Entitlement Date..20220515
Hardware Location Code......U9105.42A.XXXXXXX-Y1
Physical Location: U9105.42A.XXXXXXX-Y1
UAK expiration date by using AIX 7.2 and AIX 7.3
In the case of AIX 7.2 and AIX 7.3, the same command is used. The output is similar to the output that is shown in Example 1-2 (the Update Access Key Exp Date represents the UAK expiration date).
Example 1-2 Output of the command to check UAK expiration date by way of AIX 7.2 and 7.3
$ lscfg -vpl sysplanar0 |grep -p "System Firmware"
System Firmware:
...
Microcode Image.............NL1020_035 NL1020_033 NL1020_035
Microcode Level.............FW1020.00 FW1020.00 FW1020.00
Microcode Build Date........20220527 20220527 20220527
Update Access Key Exp Date..20220515
Hardware Location Code......U9105.42A.XXXXXXX-Y1
Physical Location: U9105.42A.XXXXXXX-Y1
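UAK expiration date by using IBM i
On IBM i, the server firmware status can be displayed with the following CL command; on current releases, the output typically includes the update access key expiration date (shown here as a sketch because the exact fields depend on the IBM i release and PTF level):
DSPFMWSTS   # Display Firmware Status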
Provides appliance management capabilities for configuring network, users on the HMC,
and updating and upgrading the HMC.
Note: The recovery media for V10R1 is the same for 7063-CR2 and 7063-CR1.
The 7063-CR2 is compatible with flat panel console kits 7316-TF3, TF4, and TF5.
Any customer with a valid contract can download the HMC from the IBM Entitled Systems Support (ESS) web page, or it can be included within an initial Power S1014, S1022s, S1022, or S1024 order.
The following minimum requirements must be met to install the virtual HMC:
16 GB of Memory
4 virtual processors
2 network interfaces (maximum 4 allowed)
1 disk drive (500 GB available disk drive)
For an initial Power S1014, S1022s, S1022, or S1024 order with the IBM configurator (e-config), the HMC virtual appliance can be found by selecting Add software → Other System Offerings (as product selections) and then one of the following options:
5765-VHP for IBM HMC Virtual Appliance for Power V10
5765-VHX for IBM HMC Virtual Appliance x86 V10
For more information and an overview of the Virtual HMC, see this web page.
For more information about how to install the virtual HMC appliance and all requirements, see
this IBM Documentation web page.
The 7063-CR2 provides two network interfaces (eth0 and eth1) for configuring network connectivity for the BMC on the appliance.
Each interface maps to a different physical port on the system. Different management tools
name the interfaces differently. The HMC task Console Management → Console
Settings → Change BMC/IPMI Network Settings modifies only the Dedicated interface.
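As an illustrative sketch (the address, credentials, and channel number are hypothetical and depend on your environment), the LAN configuration of a BMC that accepts IPMI requests can be displayed with ipmitool from a workstation that has network access to the BMC:
ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> lan print 1   # display the LAN settings of channel 1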
The BMC ports are listed in Table 1-10.
The main difference is that the shared and dedicated interfaces to the BMC can coexist. Each has its own LAN number and physical port. Ideally, the customer configures one port, but both can be configured. The rules for connecting Power Systems to the HMC remain the same as for previous versions.
Scheduled operation function: In the Electronic Service Agent, a new feature that allows
customers to receive message alerts only if scheduled operations fail (see Figure 1-10).
The IBM Power processor-based architecture has always ranked highly in terms of end-to-end security, which is why it remains a platform of choice for mission-critical enterprise workloads.
A key aspect of maintaining a secure Power environment is ensuring that the HMC (or virtual
HMC) is current and fully supported (including hardware, software, and Power firmware
updates).
Outdated or unsupported HMCs represent a technology risk that can quickly and easily be
mitigated by upgrading to a current release.
The IBM Power Private Cloud Edition V1.8 is a complete package that adds flexible licensing
models in the cloud era. It helps you to rapidly deploy multi-cloud infrastructures with a
compelling set of cloud-enabled capabilities. The Power Private Cloud Edition primarily
provides value for clients that use AIX and Linux on Power, with simplified licensing models
and advanced features.
You can use IBM PowerSC MFA with various applications, such as Remote Shell (RSH),
Telnet, and Secure Shell (SSH).
IBM PowerSC Multi-Factor Authentication raises the level of assurance of your mission-critical
systems with a flexible and tightly integrated MFA solution for IBM AIX and Linux on Power
virtual workloads that are running on Power servers.
IBM PowerSC MFA is part of the PowerSC 2.1 software offering; therefore, it also is included
in the IBM Power Private Cloud Edition software bundle.
For more information, see this Announcement Letter.
With PowerVC for Private Cloud, you can perform several operations, depending on your role
within a project.
Users can perform the following tasks on resources to which they are authorized. Some
actions might require administrator approval. When a user tries to perform a task for which
approval is required, the task moves to the request queue before it is performed (or rejected):
Perform life-cycle operations on virtual machines, such as capture, start, stop, delete,
resume, and resize
Deploy an image from a deploy template
View and withdraw outstanding requests
Request virtual machine expiration extension
View their own usage data
IBM Power Virtualization Center Version 2.0.0 features a new UI and many new features and enhancements. IBM listens to client requirements and implements them, along with its own innovation strategies, to take PowerVC to the next level with every release.
IBM Cloud PowerVC for Private Cloud includes all the functions of the PowerVC Standard
Edition plus the following features:
A self-service portal that allows the provisioning of new VMs without direct system
administrator intervention. An option is for policy approvals for the requests that are
received from the self-service portal.
Templates can be deployed that simplify cloud deployments.
Cloud management policies are available that simplify managing cloud deployments.
Metering data is available that can be used for chargeback.
1.10.5 IBM Cloud Management Console
IBM Cloud Management Console for Power (CMC) runs as a hosted service in the IBM
Cloud. It provides a view of the entire IBM Power estate that is managed by a customer,
covering traditional and private cloud deployments of workloads.
The CMC interface collates and presents information about the IBM Power hardware
environment and the virtual machines that are deployed across that infrastructure. The CMC
provides access to tools to:
Monitor the status of your IBM Power inventory
Access insights from consolidated logging across all workloads
Monitor the performance and see use trends across the estate
Perform patch planning for hardware, operating systems, and other software
Manage the use and credits for a Power Private Cloud environment
Data is collected from on-premises HMC devices by using a secure cloud connector
component. This configuration ensures that the CMC provides accurate and current
information about your IBM Power environment.
For more information, see IBM Power Systems Private Cloud with Shared Utility Capacity:
Featuring Power Enterprise Pools 2.0, SG24-8478.
Power clients who often relied on an on-premises-only infrastructure can now quickly and economically extend their Power IT resources into the cloud. The use of IBM Power Virtual Server on IBM Cloud is an alternative to the large capital expense or added risk of replatforming and moving your essential workloads to another public cloud.
PowerVS on IBM Cloud integrates your IBM AIX and IBM i capabilities into the IBM Cloud
experience, which means you get fast, self-service provisioning, flexible management
on-premises and off, and access to a stack of enterprise IBM Cloud services all with
pay-as-you-use billing that lets you easily scale up and out.
You can quickly deploy an IBM Power Virtual Server on IBM Cloud instance to meet your
specific business needs. With IBM Power Virtual Server on IBM Cloud, you can create a
hybrid cloud environment that allows you to easily control workload demands.
Red Hat OpenShift Container Platform brings developers and IT operations together with a
common platform. It provides applications, platforms, and services for creating and delivering
cloud-native applications and management so IT can ensure that the environment is secure
and available.
Red Hat OpenShift Container Platform for Power provides enterprises the same functions as
the Red Hat OpenShift Container Platform offering on other platforms. Key features include:
A self-service environment for application and development teams.
Pluggable architecture that supports a choice of container run times, networking, storage,
Continuous Integration/Continuous Deployment (CI-CD), and more.
Ability to automate routine tasks for application teams.
Red Hat OpenShift Container Platform subscriptions are offered in two-core increments that are designed to run in a virtual guest.
For more information, see 5639-RLE Red Hat Enterprise Linux for Power, little endian V7.0.
Managing these environments can be a daunting task. Organizations need the right tools to
tackle the challenges that are posed by these heterogeneous environments to accomplish
their objectives.
Collectively, the capabilities that are listed in this section work together to create a consistent management platform across client data centers, public cloud providers, and multiple hardware platforms (fully inclusive of IBM Power), and provide all of the necessary elements for a comprehensive hybrid cloud platform.
The Power10 processor-based scale-out servers introduce two new Power10 processor module packages. System planar sockets of a scale-out server are populated with dual-chip modules (DCMs) or entry single-chip modules (eSCMs).
The DCM module type combines two Power10 processor chips in a tightly integrated unit and
each chip contributes core, memory interface, and PCIe interface resources.
The eSCM also consists of two Power10 chips, but differs from the DCM in that core and memory resources are provided by only one of the two chips (chip-0). The other processor chip (chip-1) on the eSCM only facilitates access to more PCIe interfaces, essentially acting as a switch.
Figure 2-1 on page 41 shows a logical diagram of Power S1022 and Power S1024 in a
2-socket DCM configuration.
Figure 2-1 Logical diagram of Power S1022 or Power S1024 servers in 2-socket configurations
The relevant busses and links are labeled with their respective speeds.
The logical diagram of Power S1022 and Power S1024 1-socket configurations can be
deduced by conceptually omitting the second socket (DCM-1). The number of memory slots
and PCIe slots is reduced by half if only one socket (DCM-0) is populated.
Figure 2-2 Logical diagram of the Power S1022s server in a 2-socket configuration
Unlike in the Power S1022 and Power S1024 servers, the sockets do not host DCM modules; instead, they are occupied by eSCM modules. This configuration implies that the number of active memory interfaces decreases from 16 to 8, the number of available memory slots decreases from 32 to 16, and all memory DDIMMs are connected to the first Power10 chip (chip-0) of each eSCM.
Also, the eSCM-based systems do not support OpenCAPI ports. However, the PCIe infrastructure of the Power S1022s is identical to the PCIe layout of the DCM-based Power S1022 and Power S1024 servers, and the number and specification of the PCIe slots are the same.
By design, the Power S1014 is a 1-socket server that is based on an eSCM module. As shown in Figure 2-3, only four memory interfaces with the associated eight DDIMM slots are present to provide main memory access and memory capacity. Also, the number of available PCIe slots is reduced to five, which is half of the PCIe slots that are offered by Power10 scale-out servers in 2-socket configurations.
Figure 2-3 Logical diagram of the Power S1014 server in a 1-socket configuration
Note: The bandwidths that are provided throughout the chapter are theoretical maximums
that are used for reference. The speeds that are shown are at an individual component
level. Multiple components and application implementation are key to achieving the best
performance. Always conduct the performance sizing at the application workload
environment level and evaluate performance by using real-world performance
measurements and production workloads.
The remainder of this section provides more specific information about the Power10 processor technology as it is used in the Power S1014, S1022s, S1022, and S1024 servers.
The IBM Power10 Processor session material as presented at the 32nd HOT CHIPS
conference is available at this web page.
Each core has private access to 2 MB L2 cache and local access to 8 MB of L3 cache
capacity. The local L3 cache region of a specific core also is accessible from all other cores
on the processor chip. The cores of one Power10 processor share up to 120 MB of latency
optimized nonuniform cache access (NUCA) L3 cache.
1 https://hotchips.org/
The processor supports the following three distinct functional interfaces that all can run with a
signaling rate of up to 32 Gigatransfers per second (GTps):
Open memory interface
The Power10 processor has eight memory controller unit (MCU) channels that support
one open memory interface (OMI) port with two OMI links each2. One OMI link aggregates
eight lanes that are running at 32 GTps and connects to one memory buffer-based
differential DIMM (DDIMM) slot to access main memory.
Physically, the OMI interface is implemented in two separate die areas of eight OMI links
each. The maximum theoretical full-duplex bandwidth aggregated over all 128 OMI lanes
is 1 TBps.
SMP fabric interconnect (PowerAXON)
A total of 144 lanes are available in the Power10 processor to facilitate the connectivity to
other processors in a symmetric multiprocessing (SMP) architecture configuration. Each
SMP connection requires 18 lanes, eight data lanes plus one spare lane per direction
(2 x(8+1)). In this way, the processor can support a maximum of eight SMP connections
with a total of 128 data lanes per processor. This configuration yields a maximum
theoretical full-duplex bandwidth aggregated over all SMP connections of 1 TBps.
The generic nature of the interface implementation also allows the use of 128 data lanes
to potentially connect accelerator or memory devices through the OpenCAPI protocols.
Also, it can support memory cluster and memory interception architectures.
Because of the versatile characteristic of the technology, it is also referred to as
PowerAXON interface (Power A-bus/X-bus/OpenCAPI/Networking3). The OpenCAPI and
the memory clustering and memory interception use cases can be pursued in the future
and as of this writing are not used by available technology products.
PCIe Version 5.0 interface
To support external I/O connectivity and access to internal storage devices, the Power10
processor provides differential Peripheral Component Interconnect Express version 5.0
interface busses (PCIe Gen 5) with a total of 32 lanes.
The lanes are grouped in two sets of 16 lanes that can be used in one of the following
configurations:
– 1 x16 PCIe Gen 4
– 2 x8 PCIe Gen 4
– 1 x8, 2 x4 PCIe Gen 4
– 1 x8 PCIe Gen 5, 1 x8 PCIe Gen 4
– 1 x8 PCIe Gen 5, 2 x4 PCIe Gen 4
2 The OMI links are also referred to as OMI subchannels.
3 A-busses (between CEC drawers) and X-busses (within CEC drawers) provide SMP fabric ports.
Important Power10 processor characteristics are listed in Table 2-1.
Table 2-1 Summary of the Power10 processor chip and processor core technology
Technology | Power10 processor
Processor compatibility modes | Support for Power ISA (b) of Power8 and Power9
a. Complementary metal-oxide-semiconductor (CMOS)
b. Power instruction set architecture (Power ISA)
2.2.2 Processor modules for S1014, S1022s, S1022, and S1024 servers
For the Power10 processor-based scale-out servers, the Power10 processor is packaged as
a DCM or as an eSCM:
The DCM contains two directly coupled Power10 processor chips (chip-0 and chip-1) plus
more logic that is needed to facilitate power supply and external connectivity to the
module.
The eSCM is a special derivative of the DCM where all active compute cores run on the
first chip (chip-0) and the second chip (chip-1) contributes only extra PCIe connectivity,
essentially a switch:
– Power S1022 and the Power S1024 servers use DCM modules
– Power S1014 and the Power S1022s servers are based on eSCM technology
(Figure: physical layout of the Power10 dual-chip module, 74.5 mm x 85.75 mm, showing Power10 chip-0 and chip-1, the OP interface ports, 64 OMI lanes per chip routed to the bottom of the module, and 64 PCIe Gen5 lanes routed to the bottom of the module.)
A total of 36 X-bus lanes are used for two chip-to-chip, module internal connections. Each
connection runs at 32 GTps (32 Gbps) speed and bundles 18 lanes, eight data lanes plus one
spare lane per direction (2 x(8+1)).
In this way, the DCM’s internal total aggregated full-duplex bandwidth between chip-0 and
chip-1 culminates at 256 GBps.
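The 256 GBps figure is simply the two module-internal chip-to-chip connections taken together, each of which provides the 128 GBps per-connection bandwidth that is quoted later in this chapter:

2 \times 128\ \mathrm{GBps} = 256\ \mathrm{GBps}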
The DCM internal connections are implemented by using the interface ports OP2 and OP6 on
chip-0 and OP1 and OP4 on chip-1:
2 × 9 OP2 lanes of chip-0 connect to 2 x 9 OP1 lanes of chip-1
2 × 9 OP6 lanes of chip-0 connect to 2 × 9 OP4 lanes of chip-1
In addition to the interface ports OP2 and OP6 on chip-0 and OP1 and OP4 on chip-1, the
DCM offers 216 A-bus/X-bus/OpenCAPI lanes that are grouped in 12 other interface ports:
OP0, OP1, OP3, OP4, OP5, OP7 on chip-0
OP0, OP2, OP3, OP5, OP6, OP7 on chip-1
In 2-socket configurations of the Power S1022 or Power S1024 server, the interface ports
OP4 and OP7 on chip-0 and OP6 and OP7 on chip-1 are used to implement direct
chip-to-chip SMP connections across the two DCM modules.
The interface port OP3 on chip-0 and OP0 on chip-1 implement OpenCAPI interfaces that are
accessible through connectors that are on the mainboard of Power S1022 and Power S1024
servers.
Note: Although the OpenCAPI interfaces likely can be used in the future, they are not used
by available technology products as of this writing.
The interface ports OP0, OP1, and OP5 on chip-0 and OP2, OP3, and OP5 on chip-1 are
physically present, but not used by DCMs in Power S1022 and Power S1024 servers. This
status is indicated by the dashed lines that are shown in Figure 2-1 on page 41.
In addition to the chip-to-chip DCM internal connections, the cross DCM SMP links, and the
OpenCAPI interfaces, the DCM facilitates eight open memory interface ports (OMI0 - OMI7)
with two OMI links each to provide access to the buffered main memory differential DIMMs
(DDIMMs):
OMI0 - OMI3 of chip-0
OMI4 - OMI7 of chip-1
Note: The OMI interfaces are driven by eight on-chip memory controller units (MCUs) and
are implemented in two separate physical building blocks that lie in opposite areas at the
outer edge of the Power10 processor chip. One MCU directly controls one OMI port.
Therefore, a total of 16 OMI ports (OMI0 - OMI7 on chip-0 and OMI0 - OMI7 on chip-1) are
physically present on a Power10 DCM. However, because the chips on the DCM are tightly
integrated and the aggregated memory bandwidth of eight OMI ports culminates at
1 TBps, only half of the OMI ports are active. OMI4 to OMI7 on chip-0 and OMI0 to OMI3 of
chip-1 are disabled.
Finally, the DCM also offers differential Peripheral Component Interconnect Express version
5.0 interface busses (PCIe Gen 5) with a total of 64 lanes. Every chip of the DCM contributes
32 PCIe Gen5 lanes, which are grouped in two PCIe host bridges (E0, E1) with 16 PCIe Gen5
lanes each:
E0, E1 on chip-0
E0, E1 on chip-1
(Figure: logical view of the Power10 dual-chip module. For each chip, the PCIe host bridges E0 and E1 can each be configured as 1 x16 Gen4, 2 x8 Gen4, 1 x8 plus 2 x4 Gen4, 1 x8 Gen5 plus 1 x8 Gen4, or 1 x8 Gen5 plus 2 x4 Gen4. The figure also shows the OMI ports, the 2 x9 SMP links at 32 Gbps, and the 2 x8 OpenCAPI links at 32 Gbps on the OP interface ports.)
To conserve energy, the unused OMI on the lower side of chip-0 and on the upper side of
chip-1 are grounded on the DCM package. For the same reason, the interface ports OP0 and
OP1 in the upper right corner of chip-0 and OP2 and OP3 in the lower right corner of chip-1
are grounded on the system planar.
The main differences between the eSCM and the DCM structure include the following
examples:
All active cores are on chip-0 and no active cores are on chip-1.
Chip-1 works with chip-0 as a switch to facilitate more I/O connections.
All active OMI interfaces are on chip-0 and no active OMI interfaces are on chip-1.
No OpenCAPI connectors are supported through any of the interface ports.
The eSCM internal chip-to-chip connectivity, the SMP links across the eSCM in 2-socket
configurations, and the PCIe Gen5 bus structure are identical to the Power10 DCM
implementation.
As with the Power10 DCM, 36 X-bus lanes are used for two chip-to-chip connections. These
eSCM internal connections are implemented by the interface ports OP2 and OP6 on chip-0
and OP1 and OP4 on chip-1:
2 × 9 OP2 lanes of chip-0 connect to 2 x 9 OP1 lanes of chip-1
2 × 9 OP6 lanes of chip-0 connect to 2 × 9 OP4 lanes of chip-1
The eSCM module internal chip-to-chip links exhibit the theoretical maximum full-duplex
bandwidth of 256 GBps.
(Figure: physical layout of the Power10 entry single-chip module, 74.5 mm x 85.75 mm, showing Power10 chip-0 with its 64 OMI lanes routed to the bottom of the module, Power10 chip-1 acting as a switch, the OP interface ports, and 64 PCIe Gen5 lanes routed to the bottom of the module.)
The Power S1014 server is available only in a 1-socket configuration, and no interface ports other than OP2 and OP6 on chip-0 and OP1 and OP4 on chip-1 are operational. The same interface port constellation applies to 1-socket configurations of the Power S1022s server.
Figure 2-3 on page 43 shows the logical system diagram of the Power S1014 1-socket server
based on a single eSCM.
However, in 2-socket eSCM configurations of the Power S1022s server, the interface ports
OP4 and OP7 on chip-0 and OP6 and OP7 on chip-1 of the processor module are active and
used to implement direct chip-to-chip SMP connections between the two eSCM modules.
Figure 2-2 on page 42 shows the logical system diagram of the Power S1022s 2-socket server
that is based on two eSCM modules. (The 1-socket constellation can easily be deduced from
Figure 2-2 on page 42 if eSCM-1 is conceptually omitted.)
As with the DCM, the eSCM offers differential PCIe Gen 5 with a total of 64 lanes. Every chip
of the eSCM contributes 32 PCIe Gen5 lanes, which are grouped in two PCIe host bridges
(E0, E1) with 16 PCIe Gen5 lanes each:
E0, E1 on chip-0
E0, E1 on chip-1
(Figure: logical view of the Power10 entry single-chip module. For each chip, the PCIe host bridges E0 and E1 can each be configured as 1 x16 Gen4, 2 x8 Gen4, 1 x8 plus 2 x4 Gen4, 1 x8 Gen5 plus 1 x8 Gen4, or 1 x8 Gen5 plus 2 x4 Gen4. Chip-0 provides the OMI ports; chip-1 operates as a switch. The 2 x9 SMP links at 32 Gbps are shown on the OP interface ports.)
The peak computational throughput is markedly improved by new execution capabilities and
optimized cache bandwidth characteristics. Extra matrix math acceleration engines can
deliver significant performance gains for machine learning, particularly for AI inferencing
workloads.
The SMT8 core includes two execution resource domains. Each domain provides the
functional units to service up to four hardware threads.
Figure 2-9 shows the functional units of an SMT8 core where all eight threads are active. The
two execution resource domains are highlighted with colored backgrounds in two different
shades of blue.
Each of the two execution resource domains supports 1 - 4 threads and includes four vector
scalar units (VSU) of 128-bit width, two matrix math accelerator (MMA) units, and one
quad-precision floating-point (QP) and decimal floating-point (DF) unit.
One VSU and the directly associated logic are called an execution slice. Two neighboring
slices also can be used as a combined execution resource, which is then named super-slice.
When operating in SMT8 mode, eight SMT threads are subdivided in pairs that collectively
run on two adjacent slices, as indicated by colored backgrounds in different shades of green
in Figure 2-9.
In SMT4 or lower thread modes, one to two threads each share a four-slice resource domain.
Figure 2-9 also shows other essential resources that are shared among the SMT threads,
such as instruction cache, instruction buffer, and L1 data cache.
The SMT8 core supports automatic workload balancing to change the operational SMT
thread level. Depending on the workload characteristics, the number of threads that run on one chiplet can be reduced from four to two, and even further to only one active thread. An individual thread can benefit in terms of performance if fewer threads run against the core's execution resources.
Enhancements in the area of computation resources, working set size, and data access
latency are described next. The change in relation to the Power9 processor core
implementation is provided in square brackets.
Micro-architectural innovations that complement physical and logic design techniques and
specifically address energy efficiency include the following examples:
Improved clock-gating
Reduced flush rates with improved branch prediction accuracy
Fusion and gather operation merging
Reduced number of ports and reduced access to selected structures
Effective address (EA)-tagged L1 data and instruction cache yield ERAT access on a
cache miss only
In addition to significant improvements in performance and energy efficiency, security
represents a major architectural focus area. The Power10 processor core supports the
following security features:
Enhanced hardware support that provides improved performance while mitigating for
speculation-based attacks
Dynamic Execution Control Register (DEXCR) support
Return oriented programming (ROP) protection
If more than one hardware thread is active, the processor runs in SMT mode. In addition to
the ST mode, the Power10 processor core supports the following SMT modes:
SMT2: Two hardware threads active
SMT4: Four hardware threads active
SMT8: Eight hardware threads active
SMT enables a single physical processor core to simultaneously dispatch instructions from
more than one hardware thread context. Computational workloads can use the processor
core’s execution units with a higher degree of parallelism. This ability significantly enhances
the throughput and scalability of multi-threaded applications and optimizes the compute
density for single-threaded workloads.
Table 2-2 SMT levels that are supported by IBM POWER® processors
Technology | Maximum cores per system | Supported hardware threading modes | Maximum hardware threads per partition
IBM Power4 | 32 | ST | 32
All Power10 processor-based scale-out servers support the ST, SMT2, SMT4, and SMT8
hardware threading modes. Table 2-3 lists the maximum hardware threads per partition for
each scale-out server model.
Table 2-3 Maximum hardware threads supported by Power10 processor-based scale-out servers
Server | Maximum cores per system | Maximum hardware threads per partition
Power S1014 | 8 | 64
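The values in this table follow directly from the SMT8 capability of the Power10 core: the maximum number of hardware threads per partition is the maximum number of cores multiplied by eight. For the Power S1014, for example:

8\ \mathrm{cores} \times 8\ \mathrm{threads\ per\ core\ (SMT8)} = 64\ \mathrm{hardware\ threads\ per\ partition}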
To efficiently accelerate MMA operations, the Power10 processor core implements a dense
math engine (DME) microarchitecture that effectively provides an accelerator for cognitive
computing, machine learning, and AI inferencing workloads.
The DME encapsulates compute efficient pipelines, a physical register file, and associated
data-flow that keeps resulting accumulator data local to the compute units. Each MMA
pipeline performs outer-product matrix operations, reading from and writing back a 512-bit
accumulator register.
Power10 implements the MMA accumulator architecture without adding an architected state. Each architected 512-bit accumulator register is backed by four 128-bit Vector Scalar eXtension (VSX) registers.
Code that uses the MMA instructions is included in the OpenBLAS and Eigen libraries. These libraries can be built by using the most recent versions of the GNU Compiler Collection (GCC). The latest version of OpenBLAS is available at this web page.
OpenBLAS is used by Python-NumPy library, PyTorch, and other frameworks, which makes it
easy to use the performance benefit of the Power10 MMA accelerator for AI workloads.
The Power10 MMA accelerator technology also is used by the IBM Engineering and Scientific
Subroutine Library for AIX on POWER 7.1 (program number 5765-EAP).
Program code that is written in C/C++ or Fortran can benefit from the potential performance
gains by using the MMA facility if compiled by the following IBM compiler products:
IBM Open XL C/C++ for AIX 17.1 (program numbers 5765-J18, 5765-J16, and 5725-C72)
IBM Open XL Fortran for AIX 17.1 (program numbers 5765-J19, 5765-J17, and 5725-C74)
For more information about the implementation of the Power10 processor’s high throughput
math engine, see the white paper A matrix math facility for Power ISA processors.
For more information about fundamental MMA architecture principles with detailed instruction
set usage, register file management concepts, and various supporting facilities, see
Matrix-Multiply Assist Best Practices Guide, REDP-5612.
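To make the MMA facility more concrete, the following minimal C sketch (not taken from the referenced documents) shows a single 4 x 4 single-precision outer product that is expressed with the GCC MMA built-in functions; the function name and the compile flags in the comment are illustrative assumptions. In practice, most applications obtain the benefit by linking against an MMA-enabled library, such as OpenBLAS or ESSL, rather than by coding the built-ins directly.

/* Illustrative sketch only: one 4x4 single-precision outer product with
 * Power10 MMA built-ins. Compile with, for example: gcc -O2 -mcpu=power10 */
#include <altivec.h>

void outer_product_4x4(float result[4][4], const float a[4], const float b[4])
{
    vector float va = vec_xl(0, a);                  /* load four floats of a */
    vector float vb = vec_xl(0, b);                  /* load four floats of b */

    __vector_quad acc;                               /* 512-bit MMA accumulator */
    __builtin_mma_xvf32ger(&acc,                     /* rank-1 update: outer product of a and b */
                           (vector unsigned char)va,
                           (vector unsigned char)vb);

    __builtin_mma_disassemble_acc(result, &acc);     /* copy the accumulator rows to memory */
}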
Depending on the specific settings of the PCR, the Power10 core runs in a compatibility mode
that pertains to Power9 (Power ISA version 3.0) or Power8 (Power ISA version 2.07)
processors. The support for processor compatibility modes also enables older operating
systems versions of AIX, IBM i, Linux, or Virtual I/O server environments to run on Power10
processor-based systems.
The Power10 processor-based scale-out servers support the Power8, Power9 Base, Power9,
and Power10 compatibility modes.
DCM modules cannot be configured for Power S1014 or Power S1022s servers. eSCM
modules are not supported in Power S1022 or Power S1024 systems.
Depending on the scale-out server model and the number of populated sockets, the following
core densities are available for the supported processor module types:
Power S1014 and Power S1022s server are offered with four or eight functional cores per
eSCM. The Power S1014 is available only as 1-socket server. The Power S1022s
supports the 4-core eSCM only in a 1-socket configuration, and the 8-core eSCM in 1- and
2-socket configurations.
The supported processor activation types and use models vary with the Power10
processor-based scale-out server model type:
Static processor activations
Only the eSCM models with 4-core or 8-core processor density in Power S1014 and
Power S1022s servers support the classical static processor activation model. All
functional cores of the configured eSCM modules are delivered with processor activation
features at initial order. This use model provides static and permanent processor
activations and is the default for the named eSCM-based servers.
Capacity Upgrade on Demand (CUoD) processor activations
The Power S1022 and Power S1024 servers, which are based on DCM modules, support
the Capacity Upgrade on Demand (CUoD) technology option. For these servers, a
minimum of 50% of the configured total processor capacity must be activated through the
related CUoD processor activation features at the time of initial order.
Later, more CUoD processor activations can be purchased through a miscellaneous
equipment specification (MES) upgrade order. The CUoD is the default use model of
Power S1022 and Power S1024 servers. It offers static and permanent processor
activations with the added flexibility to adjust the processor capacity between half of the
physically present cores and the maximum of the configured processor module capacity
as required by the workload demand.
Power Private Cloud with Shared Utility Capacity use model
The Power S1022 and Power S1024 servers also support the IBM Power Private Cloud
with Shared Utility Capacity solution (Power Enterprise Pools 2.0), which is an
infrastructure offering model that enables cloud agility and cost optimization with
pay-for-use pricing.
This use model requires the configuration of the Power Enterprise Pools 2.0 Enablement
feature (#EP20) for the specific server and a minimum of one Base Processor Activation
for Pools 2.0 feature is needed. The base processor activations are permanent and shared
within a pool of servers. More processor resources that are needed beyond the capacity
that is provided by the base processor activations are metered by the minute and paid
through capacity credits.
To assist with the optimization of software licensing, the factory deconfiguration feature (#2319) is available at initial order for all scale-out server models to permanently reduce the number of active cores to below the minimum processor core activation requirement.
Factory deconfigurations are permanent and they are available only in the context of the static
processor activation use model and the CUoD processor activation use model.
Note: The static activation usage model, the CUoD technology usage model, and the
Power Private Cloud Shared Utility Capacity (Power Enterprise Pools 2.0) offering models
are all mutually exclusive in respect to each other.
Table 2-4 lists the processor module options that are available for Power10 processor-based
scale-out servers. The list is sorted by increasing order of the processor module capacity.
Table 2-4 Processor module options for Power10 processor-based scale-out servers
Module capacity | Module type | CUoD support | Pools 2.0 option | Typical frequency range [GHz] | Minimum quantity per server | Power S1014 | Power S1022s | Power S1022 | Power S1024
3.4 - 4.0 1 — — — X
3.1 - 4.0 2 — — — X
For each processor module option the module type (eSCM / DCM), the support for CUoD, the
availability of the Pools 2.0 option, and the minimum number of sockets that must be
populated are indicated.
Depending on the different physical characteristics of the Power S1022 and Power S1024
servers, two distinct, model-specific frequency ranges are available for processor modules
with 12- and 16-core density.
The last four columns of Table 2-4 list the availability matrix between a specific processor
module capacity and frequency specification on one side and the Power10 processor-based
scale-out server models on the other side. (Available combinations are labeled with “X” and
unavailable combinations are indicated by a “—” hyphen.)
Each L3 region serves as a victim cache for its associated L2 cache and can provide
aggregate storage for the on-chip cache footprint.
Intelligent L3 cache management enables the Power10 processor to optimize the access to
L3 cache lines and minimize cache latencies. The L3 includes a replacement algorithm with
data type and reuse awareness.
It also supports an array of prefetch requests from the core, including instruction and data,
and works cooperatively with the core, memory controller, and SMP interconnection fabric to
manage prefetch traffic, which optimizes system throughput and data latency.
One Power10 processor chip supports the following functional elements to access main
memory:
Eight MCUs
Eight OMI ports that are controlled one-to-one through a dedicated MCU
Two OMI links per OMI port for a total of 16 OMI links
Eight lanes per OMI link for a total of 128 lanes, all running at 32 Gbps speed
However, because the chips on the DCM are tightly integrated and the aggregated memory
bandwidth of eight OMI ports culminates at a maximum theoretical full-duplex bandwidth of
1 TBps, only half of the OMI ports are active.
Each chip of the DCM contributes four OMI ports and eight OMI links to facilitate main
memory access. For more information about the OMI port designation and the physical
location of the active OMI units of a DCM, see Figure 2-5 on page 48 and Figure 2-6 on
page 50.
In summary, one DCM supports the following functional elements to access main memory:
Four active MCUs per chip for a total of eight MCUs per module
Each MCU maps one-to-one to an OMI port
Four OMI ports per chip for a total of eight OMI ports per module
Two OMI links per OMI port for a total of eight OMI links per chip and 16 OMI links per
module
Eight lanes per OMI link for a total of 128 lanes per module, all running at 32 Gbps
The second Power10 chip (chip-1) of the eSCM is dedicated to driving PCIe Gen5 and Gen4 interfaces exclusively. For more information about the OMI port designation and the physical location of the active OMI units of an eSCM, see Figure 2-7 on page 51 and Figure 2-8 on page 52.
In summary, one eSCM supports the following elements to access main memory:
Four active MCUs per module
Each MCU maps one-to-one to an OMI port
Four OMI ports per module
Two OMI links per OMI port for a total of eight OMI links per module
Eight lanes per OMI link for a total of 64 lanes, all running at 32 Gbps speed
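As a rough cross-check of these lane counts against the bandwidth figures quoted earlier (assuming 32 Gbps per lane in each direction), the aggregate OMI bandwidth works out as follows. The eSCM value is an inference from the same arithmetic and is not explicitly stated in the text:

\mathrm{DCM:}\quad 128 \times 32\ \mathrm{Gbps} = 512\ \mathrm{GBps\ per\ direction} \approx 1\ \mathrm{TBps\ full\ duplex}
\mathrm{eSCM:}\quad 64 \times 32\ \mathrm{Gbps} = 256\ \mathrm{GBps\ per\ direction} \approx 512\ \mathrm{GBps\ full\ duplex}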
With the Power10 processor-based scale-out servers, OMI initially supports one main-tier, low-latency, enterprise-grade Double Data Rate 4 (DDR4) DDIMM per OMI link. This architecture yields a total memory module capacity of:
8 DDIMMs per socket for the eSCM-based Power S1014 and Power S1022s servers
16 DDIMMs per socket for the DCM-based Power S1022 and Power S1024 servers
Table 2-5 lists the maximum memory bandwidth for Power S1014, Power S1022s,
Power S1022, and Power S1024 servers under the assumption that the maximum number of
supported sockets are configured and all available slots are populated with DDIMMs of the
named density and speed.
Table 2-5 Maximum theoretical memory bandwidth for Power10 processor-based scale-out servers
Server model | DDIMM density (GB) | DDIMM frequency (MHz) | Maximum memory capacity (GB) | Maximum theoretical memory bandwidth (GBps)
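Because the values in Table 2-5 scale with the configured DDIMM speed and slot count, a rough estimate (an assumption-based calculation, not an official specification) multiplies the DRAM data rate by the 8-byte data path of a DDIMM and by the number of populated slots. For example, for DDR4-3200 DDIMMs in a fully populated 2-socket Power S1022 or Power S1024 server:

3200\ \mathrm{MT/s} \times 8\ \mathrm{B} = 25.6\ \mathrm{GBps\ per\ DDIMM} \qquad 32 \times 25.6\ \mathrm{GBps} \approx 819\ \mathrm{GBps}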
AES CTR mode
The Counter (CTR) mode of operation designates a low-latency AES block cipher mode.
Although the level of encryption is not as strong as with the XTS mode, the low-latency
characteristics make it the preferred mode for memory encryption for volatile memory.
AES CTR makes it more difficult to physically gain access to data through the memory
card interfaces. The goal is to protect against physical attacks, which becomes
increasingly important in the context of cloud deployments.
The Power10 processor-based scale-out servers support the AES CTR mode for
pervasive memory encryption. Each Power10 processor holds a 128-bit encryption key
that is used by the processor’s MCU to encrypt the data of the differential DIMMs that are
attached to the OMI links.
The MCU crypto engine is transparently integrated into the data path, which ensures that
the data fetch and store bandwidth are not compromised by the AES CTR encryption
mode. Because the encryption has no noticeable performance effect and because of the
obvious security benefit, the pervasive memory encryption is enabled by default and
cannot be switched off through any administrative interface.
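The following minimal C sketch uses OpenSSL and is included here only to illustrate what the AES CTR mode does in software; it is not how the pervasive memory encryption is accessed, because the MCU performs the encryption transparently in hardware with a key that is never exposed to software. The key, counter, and buffer values are arbitrary demo placeholders.

/* Illustrative sketch only: AES-128 in CTR mode with OpenSSL (link with -lcrypto).
 * The Power10 MCU applies this cipher mode transparently to memory traffic;
 * this example merely demonstrates the mode itself in software. */
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[16] = {0};                 /* 128-bit key (demo value) */
    unsigned char counter[16] = {0};             /* initial counter block (demo value) */
    unsigned char plain[] = "cache line contents to protect";
    unsigned char cipher[sizeof(plain)];
    int outlen = 0, finlen = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, counter);
    EVP_EncryptUpdate(ctx, cipher, &outlen, plain, sizeof(plain));
    EVP_EncryptFinal_ex(ctx, cipher + outlen, &finlen);
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes with AES-128 in CTR mode\n", outlen + finlen);
    return 0;
}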
To facilitate LPM data compression and encryption, the hypervisor on the source system
presents the LPM buffer to the on-chip nest accelerator (NX) unit as part of process in
Step b. The reverse decryption and decompress operation is applied on the target server
as part of process in Step d.
The pervasive memory encryption logic of the MCU decrypts the memory data before it is
compressed and encrypted by the NX unit on the source server. It also encrypts the data
before it is written to memory, but after it is decrypted and decompressed by the NX unit of
the target server.
Each of the AES/SHA engines, the data compression unit, and the Gzip unit consists of a coprocessor type, so the NX unit features three coprocessor types. The NX unit also includes more support hardware to support coprocessor invocation by user code, use of effective addresses, high-bandwidth storage access, and interrupt notification of job completion.
The direct memory access (DMA) controller of the NX unit helps to start the coprocessors and move data on behalf of the coprocessors. The SMP interconnect unit (SIU) provides the interface between the Power10 SMP interconnect and the DMA controller.
The NX coprocessors can be started transparently through library or operating system kernel
calls to speed up operations that are related to:
Data compression
Live partition mobility migration
IPsec
JFS2 encrypted file systems
PKCS11 encryption
Random number generation
The most recently announced logical volume encryption
In effect, this on-chip NX unit on Power10 systems implements a high throughput engine that
can perform the equivalent work of multiple cores. The system performance can benefit by
off-loading these expensive operations to on-chip accelerators, which in turn can greatly
reduce the CPU usage and improve the performance of applications.
The accelerators are shared among the logical partitions (LPARs) under the control of the
PowerVM hypervisor and accessed by way of a hypervisor call. The operating system, along
with the PowerVM hypervisor, provides a send address space that is unique per process that
is requesting the coprocessor access. This configuration allows the user process to directly
post entries to the first in-first out (FIFO) queues that are associated with the NX accelerators.
Each NX coprocessor type features a unique receive address space that corresponds to a
unique FIFO for each of the accelerators.
For more information about the use of the xgzip tool that uses the Gzip accelerator engine,
see the following resources:
IBM support article: Using the POWER9 NX (gzip) accelerator in AIX
IBM Power community article: Power9 GZIP Data Acceleration with IBM AIX
AIX community article: Performance improvement in openssh with on-chip data
compression accelerator in power9
IBM Documentation: nxstat Command
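Because the acceleration is transparent to callers, application code does not change. The following minimal C sketch (not from the referenced articles) uses an ordinary zlib call; the assumption, which depends on the system configuration, is that a zlib-compatible NX library (for example, zlibNX on AIX or libnxz on Linux on Power) services such calls on the on-chip accelerator. The buffer names and sizes are demo placeholders.

/* Illustrative sketch only: a standard zlib compression call (link with -lz).
 * Where a zlib-compatible NX library is configured, a call like this one can
 * be serviced by the on-chip NX gzip accelerator without source changes. */
#include <zlib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *src = "Data to be compressed with zlib.";
    unsigned char dst[256];
    uLongf dst_len = sizeof(dst);

    int rc = compress2(dst, &dst_len, (const unsigned char *)src,
                       (uLong)(strlen(src) + 1), Z_DEFAULT_COMPRESSION);
    if (rc != Z_OK) {
        fprintf(stderr, "compress2 failed: %d\n", rc);
        return 1;
    }
    printf("compressed %zu bytes to %lu bytes\n",
           strlen(src) + 1, (unsigned long)dst_len);
    return 0;
}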
Note: The OpenCAPI interface and the memory clustering interconnect are Power10
technology options for future use.
Because of the versatile nature of signaling technology, the 32 Gbps interface also is referred
to as Power/A-bus/X-bus/OpenCAPI/Networking (PowerAXON) interface. The IBM
proprietary X-bus links connect two processors on a board with a common reference clock.
The IBM proprietary A-bus links connect two processors in different drawers on different
reference clocks by using a cable.
The PowerAXON interface is implemented on dedicated areas that are at each corner of the
Power10 processor die.
The DCM internal (see Figure 2-5 on page 48) and eSCM internal (see Figure 2-7 on page 51) chip-to-chip connections are implemented by using the interface ports OP2 and OP6 on chip-0 and OP1 and OP4 on chip-1:
2 × 9 OP2 lanes of chip-0 connect to 2 x 9 OP1 lanes of chip-1
2 × 9 OP6 lanes of chip-0 connect to 2 × 9 OP4 lanes of chip-1
The processor module internal chip-to-chip connections feature the following common
properties:
Two (2 x 9)-bit buses implement two independent connections between the module chips
Eight data lanes, plus one spare lane in each direction per chip-to-chip connection
32 Gbps signaling rate that provides 128 GBps per chip-to-chip connection bandwidth,
which yields a maximum theoretical full-duplex bandwidth between the two processor
module chips of 256 GBps
In addition to the interface ports OP2 and OP6 on chip-0 and OP1 and OP4 on chip-1, the
DCM offers 216 A-bus/X-bus/OpenCAPI lanes that are grouped in 12 other interface ports:
OP0, OP1, OP3, OP4, OP5, OP7 on chip-0
OP0, OP2, OP3, OP5, OP6, OP7 on chip-1
Each OP1 and OP2 interface port runs as a 2 × 9 SMP bus at 32 Gbps whereas the OP0,
OP3, OP4, OP5, OP6, and OP7 interface ports can run in one of the following two modes:
2 × 9 SMP at 32 Gbps
2 × 8 OpenCAPI at 32 Gbps
SMP topology and accelerator interfaces for DCM-based servers
Figure 2-11 shows the flat, one-hop SMP topology and the associated interface ports for
Power S1022 and Power S1024 servers in 2-socket configurations (all interfaces that do not
contribute to the SMP fabric were omitted for clarity).
Figure 2-11 SMP connectivity for Power S1022 or Power S1024 servers in 2-socket configurations
The interface ports OP4, OP6, and OP7 are used to implement direct SMP connections between the first DCM (DCM-0) and the second DCM (DCM-1), as shown in the following list:
2 x 9 OP4 lanes of chip-0 on DCM-0 connect to 2 x 9 OP7 lanes of chip-0 on DCM-1
2 x 9 OP7 lanes of chip-0 on DCM-0 connect to 2 x 9 OP6 lanes of chip-1 on DCM-1
2 x 9 OP7 lanes of chip-1 on DCM-0 connect to 2 x 9 OP4 lanes of chip-0 on DCM-1
2 x 9 OP6 lanes of chip-1 on DCM-0 connect to 2 x 9 OP7 lanes of chip-1 on DCM-1
Each inter-DCM chip-to-chip SMP link provides a maximum theoretical full-duplex bandwidth
of 128 GBps.
The interface port OP3 on chip-0 and OP0 on chip-1 of the DCM are used to implement
OpenCAPI interfaces through connectors that are on the mainboard of Power S1022 and
Power S1024 servers. The relevant interface ports are subdivided in two bundles of eight
lanes, which are designated by the capital letters A and B respectively. Therefore, the named
ports OP3A, OP3B, OP0A, and OP0B represent one bundle of eight lanes that can support
one OpenCAPI interface in turn.
In a 1-socket Power S1022 or Power S1024 server, a total of 4 OpenCAPI interfaces are
implemented through DCM-0, as shown in the following example:
OP3A and OP3B on chip-0 of DCM-0
OP0A and OP0B on chip-1 of DCM-0
In a 2-socket Power S1022 or Power S1024 server, two other OpenCAPI interfaces are
provided through DCM-1, as shown in the following example:
OP3A on chip-0 of DCM-1
OP0B on chip-1 of DCM-1
Note: The implemented OpenCAPI interfaces can be used in the future and are not used
by available technology products as of this writing.
Figure 2-12 SMP connectivity for a Power S1022s server in 2-socket configuration
In 2-socket eSCM configurations of the Power S1022s server, the interface ports OP4 and
OP7 on chip-0 and OP6 and OP7 on chip-1 of the processor module are active. They are
used to implement direct SMP connections between the first eSCM (eSCM-0) and the second
eSCM (eSCM-1) in the same way as for the 2-socket DCM configurations of the Power S1022
and Power S1024 servers.
However, the eSCM constellation differs by the fact that no active cores (0-cores) are on
chip-1 of eSCM-0 and chip-1 of eSCM-1. These chips operate as switches. For more
information about the Power S1022s 2-socket server that is based on two eSCM modules,
see Figure 2-2 on page 42.
In summary, the SMP interconnect between the eSCMs of a Power S1022s server in 2-socket
configuration and between the DCMs of a Power S1022 or Power S1024 server in 2-socket
configuration features the following properties:
One (2 x 9)-bit bus per chip-to-chip connection across the processor modules
Eight data lanes plus one spare lane in each direction per chip-to-chip connection
Flat, 1-hop SMP topology through direct connection between all chips
32 Gbps signaling rate, which provides 128 GBps bandwidth per chip-to-chip connection, an increase of 33% compared to the Power9 processor-based scale-out server implementation
Based on the extensive experience that was gained over the past few years, the Power10
EnergyScale technology evolved to use the following effective and simplified set of
operational modes:
Power saving mode
Static mode (nominal frequency)
Maximum performance mode (MPM)
The Power9 dynamic performance mode (DPM) has many features in common with the
Power9 maximum performance mode (MPM). Because of this redundant nature of
characteristics, the DPM for Power10 processor-based systems was removed in favor of an
enhanced MPM. For example, the maximum frequency is now achievable in the Power10
enhanced maximum performance mode (regardless of the number of active cores), which
was not always the case with Power9 processor-based servers.
The Power10 processor-based scale-out servers feature MPM enabled by default. This mode
dynamically adjusts processor frequency to maximize performance and enable a much higher
processor frequency range. Each of the power saver modes delivers consistent system
performance without any variation if the nominal operating environment limits are met.
For Power10 processor-based systems that are under control of the PowerVM hypervisor, the
MPM is a system-wide configuration setting, but each processor module frequency is
optimized separately.
The following factors determine the maximum frequency at which a processor module can
run:
Processor utilization: Lighter workloads run at higher frequencies.
Number of active cores: Fewer active cores run at higher frequencies.
Environmental conditions: At lower ambient temperatures, cores are enabled to run at
higher frequencies.
Figure 2-13 shows the comparative frequency ranges for the Power10 power saving mode,
static or nominal mode, and the maximum performance mode. The frequency adjustments for
different workload characteristics, ambient conditions, and idle states are also indicated.
Figure 2-13 Power10 power management modes and related frequency ranges
Table 2-6, Table 2-7, Table 2-8 on page 72, and Table 2-9 on page 72 show the power saving
mode, the static mode frequencies, and the frequency ranges of the MPM for all processor
module types that are available for the Power S1014, Power S1022s, Power S1022, and
Power S1024 servers.
Note: For all Power10 processor-based scale-out systems, the MPM is enabled by default.
Table 2-6 Characteristic frequencies and frequency ranges for Power S1014 servers
Feature code | Cores per single-chip module | Power saving mode frequency [GHz] | Static mode frequency [GHz] | Maximum performance mode frequency range [GHz]
Table 2-7 Characteristic frequencies and frequency ranges for Power S1022s servers
Feature code | Cores per single-chip module | Power saving mode frequency [GHz] | Static mode frequency [GHz] | Maximum performance mode frequency range [GHz]
Table 2-9 Characteristic frequencies and frequency ranges for Power S1024 servers
Feature code | Cores per single-chip module | Power saving mode frequency [GHz] | Static mode frequency [GHz] | Maximum performance mode frequency range [GHz]
The controls for all power saver modes are available on the Advanced System Management
Interface (ASMI) and can be dynamically modified. A system administrator can also use the
Hardware Management Console (HMC) to set power saver mode or to enable static mode or
MPM.
Figure 2-14 shows the ASM interface menu for Power and Performance Mode Setup on a
Power10 processor-based scale-out server.
Figure 2-14 ASMI menu for Power and Performance Mode setup
Figure 2-15 shows the HMC menu for power and performance mode setup.
Figure 2-15 HMC menu for Power and Performance Mode setup
The Power E1080 enterprise class systems exclusively use SCM modules with up to 15
active SMT8 capable cores. These SCM processor modules are structural and performance
optimized for usage in scale-up multi-socket systems.
The Power E1050 enterprise class system exclusively uses DCM modules with up to 24
active SMT8 capable cores. This configuration maximizes the core density and I/O
capabilities of these servers.
DCM and eSCM modules are designed to support scale-out 1- to 4-socket Power10
processor-based servers.
Table 2-10 Comparison of the Power10 processor technology to prior processor generations
Characteristics | Power10 (DCM) | Power10 (eSCM) | Power10 (SCM) | Power9 | Power8
Technology | 7 nm (all Power10 modules) | 14 nm | 22 nm
Die size | 2 x 602 mm2 | 2 x 602 mm2 | 602 mm2 | 693 mm2 | 649 mm2
Maximum cores | 24 | 8 | 15 | 12 | 12
Maximum static frequency / high-performance frequency range | 3.4 - 4.0 GHz | 3.0 - 3.9 GHz | 3.6 - 4.15 GHz | 3.9 - 4.0 GHz | 4.15 GHz
Supported memory technology | DDR4: packaged on differential DIMMs with more performance and resilience capabilities (all Power10 modules) | DDR4 and DDR3 | DDR3 and DDR4
2.3 Memory subsystem
The Power10 processor contains eight independent MCUs that provide the system memory
interface between the on-chip SMP interconnect fabric and the OMI links. Each MCU maps in
a one-to-one relation to an OMI port, which is also referred to as memory channel. Each OMI
port in turn supports two OMI links, for a total of 16 OMI links per chip. The OMI links of a
specific OMI port are also referred to as memory subchannel A and B.
In the Power10 DCM, as used in Power S1022 and Power S1024 servers, only half of the MCUs and OMI links on each Power10 chip are used, which results in a total of 16 OMI links per DCM. One IBM DDIMM connects to each OMI link, for a total of 32 DDIMMs when two DCM modules are configured.
In the Power10 eSCM, as used in Power S1014 and Power S1022s servers, only eight OMI links per module are configured, which results in a total of 16 DDIMMs when two eSCM modules are configured.
The DDIMM cards are available in two rack unit (2U) and four rack unit (4U) form factors and are based on DDR4 DRAM technology. Depending on the form factor and the module capacity of 16 GB, 32 GB, 64 GB, 128 GB, or 256 GB, data rates of 2666 MHz, 2933 MHz, or 3200 MHz are supported.
DDIMM cards contain an OMI attached memory buffer, power management interface
controllers (PMICs), an Electrically Erasable Programmable Read-only Memory (EEPROM)
chip for vital product data, and the DRAM elements.
The PMICs supply all voltage levels that are required by the DDIMM card so that no separate
voltage regulator modules are needed. For each 2U DDIMM card, one PMIC plus one spare
are used.
For each 4U DDIMM card, two PMICs plus two spares are used. Because the PMICs operate
as redundant pairs, no DDIMM is called for replacement if one PMIC in each of the redundant
pairs is still functional.
(Figure content: the mapping of each active OMI link, OMI0A/OMI0B through OMI3A/OMI3B on chip-0 and OMI4A/OMI4B through OMI7A/OMI7B on chip-1 of DCM-0 and DCM-1, to its DDR4 DDIMM slot location code on the system planar (P0).)
Figure 2-16 Memory logical diagram of DCM-based Power S1022 and Power S1024 servers
All active OMI subchannels are indicated by the labels OMI1A/OMI1B to OMI7A/OMI7B for
the respective DCM chips.
The DDIMM label begins with the DCM-chip-link designation. For example, D1P1-OMI4A
refers to a memory module that is connected to the OMI link OMI4A on chip-1 (processor-1)
of DCM-1.
The DDIMM label concludes with the physical location code of the memory slot. In our
example of the D1P1-OMI4A connected DDIMM, the location code P0-C25 reveals that the
DDIMM is plugged into slot connector 25 (C25) on the main board (P0). Although Figure 2-16
resembles the physical placement and the physical grouping of the memory slots, some slot
positions were moved for the sake of improved clarity.
The memory logical diagram for 1-socket DCM-based Power10 scale-out servers easily can
be seen in Figure 2-16 if you conceptually omit the DCM-1 processor module, including all of
the attached DDIMM memory modules.
Figure 2-17 shows the memory logical diagram for eSCM-based Power10 scale-out servers.
Only half of the OMI links are available for eSCMs in comparison to DCMs, and all active OMI links are on chip-0 of each eSCM.
Figure 2-17 Memory logical diagram of eSCM-based Power10 scale-out servers (active OMI links OMI0A/OMI0B through OMI3A/OMI3B on chip-0 of eSCM-0 and eSCM-1, mapped to their DDR4 DDIMM slot location codes)
Again, the memory logical diagram for 1-socket eSCM-based Power10 scale-out servers can
easily be deduced from Figure 2-17 if you conceptually omit the eSCM-1 processor module
including all of the attached DDIMM memory modules.
Physically, the memory slots are organized into the following groups, as shown in Figure 2-18
on page 78:
C12 and C13 are placed at the outward-facing side of eSCM-0/DCM-0 and are connected
to chip-0 of the named processor modules.
C25 and C26 are placed at the outward-facing side of eSCM-1/DCM-1 and are connected
to chip-1 of the named processor modules.
C27 to C37 (11 slots) are placed toward the front of the server and are assigned to the first
processor module (eSCM-0/DCM-0).
C38 to C48 (11 slots) are placed toward the front of the server and are assigned to the
second processor module (eSCM-1/DCM-1).
C16 to C21 (six slots) are placed between the processor modules where the first half (C16
to C18) is wired to eSCM-0/DCM-0 and the second half (C19 to C21) to eSCM-1/DCM-1.
Figure 2-18 Memory module physical slot locations and DDIMM location codes
Figure 2-18 also shows the physical location of the ten PCIe adapter slots C0 to C4 and C7 to
C11. Slot C5 is always occupied by the eBMC and slot C6 reserves the option to establish an
external OpenCAPI based connection in the future.
In general, the preferred approach is to install memory evenly across all processor modules in
the system. Balancing memory across the installed processor modules enables memory
access in a consistent manner and typically results in the best possible performance for your
configuration. Account for any plans for future memory upgrades when you decide which
memory feature size to use at the time of the initial system order.
Power S1014 memory feature and placement rules
Table 2-11 lists the available memory feature codes for Power S1014 servers. No specific
memory enablement features are required and the entire physical DDIMM capacity of a
memory feature is enabled by default.
#EM6X 128 GB (2x64 GB) DDIMMs, 3200 MHz, 16 Gbit DDR4 memory
#EM6Ya 256 GB (2x128 GB) DDIMMs, 2666 MHz, 16 Gbit DDR4 memory
a. The 128 GB DDIMM parts in feature code #EM6Y are planned to be available on 18 November
2022.
The memory DDIMMs must be ordered in pairs by using the following feature codes:
16 GB: #EM6N
32 GB: #EM6W
64 GB: #EM6X
128 GB: #EM6Y
The minimum ordering granularity is one memory feature and all DDIMMs must be of the
same feature code type for a Power S1014 server. A maximum of four memory feature codes
can be configured to cover all of the available eight memory slots.
The minimum memory capacity requirement of the Power S1014 server is 32 GB, which can
be fulfilled by one #EM6N feature.
The maximum memory capacity is 64 GB if the 4-core eSCM module (#EPG0) was chosen
and IBM i is the primary operating system for the server. This configuration can be realized by
using one #EM6W memory feature or two #EM6N memory features.
If the Power S1014 server is based on the 8-core eSCM module, a maximum memory
capacity of 1 TB is supported. This specific maximum configuration requires four #EM6Y
memory features. Until the availability of the 128 GB memory DDIMMs (planned for
18 November 2022), the maximum memory capacity is 512 GB.
Figure 2-19 shows the DDIMM plug sequence for Power S1014 servers.
The memory controllers and the related open memory interface (OMI) channels are
highlighted in bright yellow in Figure 2-19 on page 79 and labeled OMI0, OMI1, OMI2, and
OMI3.
The related OMI links (subchannels A and B) are highlighted in light yellow in Figure 2-19 on
page 79 and the physical memory slot location codes are highlighted in light blue:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32
Second DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13
Third DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29
Fourth DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30
The memory DDIMMs are bundled in pairs by using the following feature codes:
16 GB: #EM6N
32 GB: #EM6W
64 GB: #EM6X
128 GB: #EM6Y
The Power S1022s server supports the Active Memory Mirroring (AMM) feature #EM8G.
AMM requires a minimum of four configured memory feature codes with a total of eight DDIMM
modules.
Memory rules for 1-socket Power S1022s servers
The minimum ordering granularity is one memory feature and all DDIMMs must be of the
same feature code type for a Power S1022s server in 1-socket configuration. A maximum of
four memory feature codes can be configured to cover all of the available eight memory slots.
The minimum memory capacity limit of the Power S1022s 1-socket server is 32 GB, which
can be fulfilled by one #EM6N feature.
The maximum memory capacity of the 1-socket Power S1022s is 1 TB. This specific
maximum configuration requires four #EM6Y memory features. Until the availability of the
128 GB memory DDIMMs (planned for 18 November 2022), the maximum memory capacity
is 512 GB.
Figure 2-20 shows the DDIMM plug sequence for Power S1022s servers in 1-socket
configurations (the rules are identical to those previously described for Power S1014 servers).
All memory modules are attached to the first chip (chip-0) of the single eSCM (eSCM-0) and
are of the same type as highlighted in green in Figure 2-20.
The memory controllers and the related open memory interface (OMI) channels are
highlighted in bright yellow in Figure 2-20 and labeled OMI0, OMI1, OMI2, and OMI3.
The related OMI links (subchannels A and B) are highlighted in light yellow in Figure 2-20 and
the physical memory slot location codes are highlighted in light blue:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32
Second DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13
Third DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29
Fourth DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30
Memory rules for 2-socket Power S1022s servers
Figure 2-21 shows the DDIMM plug sequence for Power S1022s servers in 2-socket
configuration when only one memory feature code type is used. The memory modules are
attached to the first chip (chip-0) of the first eSCM (eSCM-0) or to the first chip (chip-0) of the
second eSCM (eSCM-1) and are of the same type, as highlighted in green in Figure 2-21.
The memory controllers and the related open memory interface (OMI) channels are
highlighted in bright yellow in Figure 2-21 and labeled OMI0, OMI1, OMI2, and OMI3.
The related OMI links (subchannels A and B) are highlighted in light yellow in Figure 2-21 and
the physical memory slot location codes are highlighted in light blue:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32 of eSCM-0
Second DDIMM pair is installed on links OMI1A and OMI1B in slots C21 and C40 of
eSCM-1
Third DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13 of eSCM-0
Fourth DDIMM pair is installed on links OMI0A and OMI0B in slots C19 and C20 of
eSCM-1
Fifth DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29 of eSCM-0
Sixth DDIMM pair is installed on links OMI2A and OMI2B in slots C38 and C39 of eSCM-1
Seventh DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30 of
eSCM-0
Eighth DDIMM pair is installed on links OMI3A and OMI3B in slots C41 and C42 of
eSCM-1
If the 2-socket configuration is based on two different memory feature types, the minimum
ordering granularity is two identical memory feature codes (4 DDIMMs). All DDIMMs that are
attached to an eSCM must be of the same technical specification, which implies that they are
of the same memory feature code type.
It is not required to configure equal quantities of the two memory feature types. A maximum of
four configured entities of each memory feature type (eight DDIMMs of equal specification)
can be used.
Configurations with more than two memory feature types are not supported.
Figure 2-22 shows the DDIMM plug sequence for Power S1022s servers in 2-socket
configuration when two different memory feature code types are used.
The memory modules of the first feature type are attached to the first chip (chip-0) of the first
eSCM (eSCM-0) and are highlighted in green in Figure 2-22. The memory modules of the
second feature type are attached to the first chip (chip-0) of the second eSCM (eSCM-1) and
are highlighted in purple.
The memory controllers and the related open memory interface (OMI) channels are
highlighted in bright yellow in Figure 2-22 and labeled OMI0, OMI1, OMI2, and OMI3 for both
eSCMs.
The related OMI links (subchannels A and B) are highlighted in light yellow in Figure 2-22 and
the physical memory slot location codes are highlighted in light blue. Each eSCM can be
viewed as an independent memory feature type domain with its own inherent plug sequence.
The following plug sequence is used for the memory feature type for eSCM-0:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32 of eSCM-0
Second DDIMM pair is installed on links OMI1A and OMI1B in slots C12 and C13 of
eSCM-0
Third DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29 of eSCM-0
Fourth DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30 of
eSCM-0
The following plug sequence is used for the memory feature type for eSCM-1:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C21 and C40 of eSCM-1
Second DDIMM pair is installed on links OMI0A and OMI0B in slots C19 and C20 of
eSCM-1
Third DDIMM pair is installed on links OMI2A and OMI2B in slots C38 and C39 of eSCM-1
Fourth DDIMM pair is installed on links OMI3A and OMI3B in slots C41 and C42 of
eSCM-1
The maximum memory capacity of the 2-socket Power S1022s is 2 TB. This specific
maximum configuration requires eight #EM6Y memory features with a total of 16 128-GB
DDIMM modules. Until the availability of the 128 GB memory DDIMMs (planned for
18 November 2022), the maximum memory capacity is 1 TB.
The 16 GB, 32 GB, 64 GB, and 128 GB memory DDIMMs for Power S1022 servers are
bundled in pairs through the related memory feature codes #EM6N, #EM6W, #EM6X, and
#EM6Y.
The DDIMMs of all of these memory features are in a form factor that is suitable for the two
rack unit (2U) high Power S1022 servers.
Table 2-14 lists the available memory feature codes for Power S1024 servers. No specific
memory enablement features are required and the entire physical DDIMM capacity of a
memory feature is enabled by default.
The memory DDIMMs for Power S1024 servers are bundled by using the following memory
feature codes:
16 GB: #EM6N
32 GB: #EM6W
64 GB: #EM6X
128 GB: #EM6U
256 GB: #EM78
The DDIMMs of the memory features #EM6N, #EM6W, and #EM6X are in a form factor of two
rack units (2U). The DDIMMs of these types are extended by spacers to fit the four rack unit
(4U) high Power S1024 servers.
The 128 GB and 256 GB DDIMMs of memory features #EM6U and #EM78 are of higher
capacity compared with their 16 GB, 32 GB, and 64 GB counterparts; therefore, they fully use
the 4U height of Power S1024 servers.
The Power S1024 server does not support a memory configuration that includes DDIMMs of
different form factors. All memory modules must be 2U DDIMM memory feature codes
(#EM6N, #EM6W, and #EM6X) or all memory modules must be 4U DDIMM memory feature
codes (#EM6U and #EM78).
Note: Power S1024 servers in 2-socket configuration do not support the 4U DDIMM
memory feature codes #EM6U and #EM78 if the RDX USB Internal Docking Station for
Removable Disk Cartridge feature is installed.
The Power S1022 and Power S1024 servers support the Active Memory Mirroring (AMM)
Feature Code #EM8G. AMM requires a minimum of four configured memory feature codes with
a total of eight DDIMM modules.
The Power S1022 and Power S1024 servers share most of the memory feature and placement
rules, which are described next.
Memory rules for 1-socket Power S1022 and Power S1024 servers
The minimum ordering granularity is one memory feature (two DDIMMs) and all DDIMMs
must be of the same Feature Code type for a Power S1022 or Power S1024 server in
1-socket configuration. A maximum of eight memory feature codes can be configured to cover
all of the available (16) memory slots.
The minimum memory capacity limit of the Power S1022 or the Power S1024 1-socket server
is 32 GB, which can be fulfilled by one #EM6N feature.
The maximum memory capacity of the Power S1022 in 1-socket configuration is 2 TB. This
specific maximum configuration requires eight #EM6Y memory features. Until the availability
of the 128 GB memory DDIMMs (planned for 18 November 2022), the maximum memory
capacity is 1 TB.
The maximum memory capacity of the Power S1024 in 1-socket configuration is 4 TB. This
specific maximum configuration requires eight #EM78 memory features. Until the availability
of the 128 GB memory DDIMMs and 256 GB memory DDIMMs (planned for 18 November
2022), the maximum memory capacity is 1 TB.
Figure 2-23 DDIMM plug sequence for Power S1022 and Power S1024 1-socket servers
The memory modules are attached to the first chip (chip-0) or the second chip (chip-1) of the
configured DCM (DCM-0). All memory modules are of the same type as highlighted in green
in Figure 2-23.
The memory controllers and the related OMI channels are highlighted in bright yellow in
Figure 2-23 and labeled OMI0, OMI1, OMI2, OMI3, OMI4, OMI5, OMI6, and OMI7.
The related OMI links (subchannels A and B) are highlighted in light yellow in Figure 2-23 and
the physical memory slot location codes are highlighted in light blue:
First DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32 of chip-0
Second DDIMM pair is installed on links OMI5A and OMI5B in slots C16 and C35 of chip-1
Third DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13 of chip-0
Fourth DDIMM pair is installed on links OMI4A and OMI4B in slots C18 and C17 of chip-1
Fifth DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29 of chip-0
Sixth DDIMM pair is installed on links OMI6A and OMI6B in slots C37 and C36 of chip-1
Seventh DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30 of
chip-0
Eighth DDIMM pair is installed on links OMI7A and OMI7B in slots C34 and C33 of chip-1
Memory rules for 2-socket Power S1022 and Power S1024 servers
The minimum ordering granularity is two identical memory feature codes (four DDIMMs) for
Power S1022 or Power S1024 server in 2-socket configuration.
The minimum memory capacity limit of the Power S1022 or the Power S1024 2-socket server
is 64 GB, which can be fulfilled by two #EM6N features.
The maximum memory capacity of the Power S1022 in 2-socket configuration is 4 TB. This
specific maximum configuration requires 16 #EM6Y memory features. Until the availability of
the 128 GB memory DDIMMs (planned for 18 November 2022), the maximum memory
capacity is 2 TB.
The maximum memory capacity of the Power S1024 in 2-socket configuration is 8 TB. This
specific maximum configuration requires 16 #EM78 memory features. Until the availability of
the 128 GB memory DDIMMs and 256 GB memory DDIMMs (planned for 18 November
2022), the maximum memory capacity is 2 TB.
Regarding the memory plugging rules, the following configuration scenarios are supported
and must be considered separately:
Only one memory feature type is used across both sockets and all of the DDIMMs adhere
to the same technical specification.
Two different memory feature codes with the corresponding different DDIMM
characteristics are configured. Each memory feature code type is assigned in a
one-to-one relation to one of the two DCM sockets.
It is not required to configure equal quantities of the two memory feature types. A
maximum of eight configured entities of each memory feature type (16 DDIMMs of equal
specification) are allowed.
Note: Neither the Power S1022 nor the Power S1024 servers support memory configurations
that are based on more than two memory feature types.
Figure 2-24 shows the DDIMM plug sequence for Power S1022 and Power S1024 servers in
2-socket configuration when only a single memory feature code type is used. Each chip
(chip-0 and chip-1) of each DCM (DCM-0 and DCM-1) provides four memory channels for
memory module access. All memory DDIMMs are of the same type, as highlighted in green in
Figure 2-24.
Figure 2-24 DDIMM plug sequence for Power S1022 and Power S1024 2-socket servers
The memory controllers and the related OMI channels are highlighted in bright yellow in
Figure 2-24 and labeled OMI0, OMI1, OMI2, OMI3, OMI4, OMI5, OMI6, and OMI7 for
each configured DCM. The related OMI links (subchannels A and B) are highlighted in light
yellow in Figure 2-24 and the physical memory slot location codes are highlighted in light
blue:
First double DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32 of
chip-0 on DCM-0 and OMI1A and OMI1B in slots C21 and C40 of chip-0 on DCM-1
Second double DDIMM pair is installed on links OMI5A and OMI5B in slots C16 and C35
of chip-1 on DCM-0 and OMI5A and OMI5B in slots C48 and C43 of chip-1 on DCM-1
Third double DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13 of
chip-0 on DCM-0 and OMI0A and OMI0B in slots C19 and C20 of chip-0 on DCM-1
Fourth double DDIMM pair is installed on links OMI4A and OMI4B in slots C18 and C17 of
chip-1 on DCM-0 and OMI4A and OMI4B in slots C25 and C26 of chip-1 on DCM-1
Fifth double DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29 of
chip-0 on DCM-0 and OMI2A and OMI2B in slots C38 and C39 of chip-0 on DCM-1
Sixth double DDIMM pair is installed on links OMI6A and OMI6B in slots C37 and C36 of
chip-1 on DCM-0 and OMI6A and OMI6B in slots C47 and C46 of chip-1 on DCM-1
Seventh double DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30 of
chip-0 on DCM-0 and OMI3A and OMI3B in slots C41 and C42 of chip-0 on DCM-1
Eighth double DDIMM pair is installed on links OMI7A and OMI7B in slots C34 and C33 of
chip-1 on DCM-0 and OMI7A and OMI7B in slots C44 and C45 of chip-1 on DCM-1
Figure 2-25 shows the DDIMM plug sequence for Power S1022 and Power S1024 servers in
2-socket configuration when two different memory feature code types are used.
Figure 2-25 DDIMM plug sequence for Power S1022 and Power S1024 2-socket servers
The memory modules of the first memory feature type are attached to the first chip (chip-0)
and second chip (chip-1) of the first DCM (DCM-0), as highlighted in green in Figure 2-25. The
memory modules of the second memory feature type are attached to the first chip (chip-0)
and second chip (chip-1) of the second DCM (DCM-1), as highlighted in purple in Figure 2-25.
Each DCM can be viewed as an independent memory feature type domain with its own
inherent plug sequence.
The following plug sequence is used for the memory feature type for DCM-0:
First double DDIMM pair is installed on links OMI1A and OMI1B in slots C27 and C32 of
chip-0 and OMI5A and OMI5B in slots C16 and C35 of chip-1 on DCM-0
Second double DDIMM pair is installed on links OMI0A and OMI0B in slots C12 and C13
of chip-0 and OMI4A and OMI4B in slots C18 and C17 of chip-1 on DCM-0
Third double DDIMM pair is installed on links OMI2A and OMI2B in slots C28 and C29 of
chip-0 and OMI6A and OMI6B in slots C37 and C36 of chip-1 on DCM-0
Fourth double DDIMM pair is installed on links OMI3A and OMI3B in slots C31 and C30 of
chip-0 and OMI7A and OMI7B in slots C34 and C33 of chip-1 on DCM-0
The following plug sequence is used for the memory feature type for DCM-1:
First double DDIMM pair is installed on links OMI1A and OMI1B in slots C21 and C40 of
chip-0 and OMI5A and OMI5B in slots C48 and C43 of chip-1 on DCM-1
Second double DDIMM pair is installed on links OMI0A and OMI0B in slots C19 and C20
of chip-0 and OMI4A and OMI4B in slots C25 and C26 of chip-1 on DCM-1
Third double DDIMM pair is installed on links OMI2A and OMI2B in slots C38 and C39 of
chip-0 and OMI6A and OMI6B in slots C47 and C46 of chip-1 on DCM-1
Fourth double DDIMM pair is installed on links OMI3A and OMI3B in slots C41 and C42 of
chip-0 and OMI7A and OMI7B in slots C44 and C45 of chip-1 on DCM-1
The Power10 processor-based scale-out servers offer four different DDIMM sizes for all
server models: 16 GB, 32 GB, 64 GB, and 128 GB. The 16 GB, 32 GB, and 64 GB DDIMMs
run at a data rate of 3200 Mbps.
The DDIMMs of 128 GB capacity and 2U form factor are configurable for Power S1014,
Power S1022s, and Power S1022 servers and run at a data rate of 2666 Mbps.
The 128 GB DDIMMs of 4U form factor are exclusively available for Power S1024 servers and
run at a slightly higher data rate of 2933 Mbps. Only Power S1024 servers can use another
4U form factor DDIMM type that holds 256 GB of data and also runs at 2933 Mbps.
Table 2-15 lists the available DDIMM capacities and their related maximum theoretical
bandwidth figures per OMI link, Power10 eSCM, and Power10 DCM.
DDIMM capacity | DRAM data rate | Per OMI link | Per Power10 eSCM | Per Power10 DCMa
16 GB, 32 GB, 64 GB | 3200 Mbps | 25.6 GBps | 204.8 GBps | 409.6 GBps
128 GB, 256 GB | 2933 Mbps | 23.5 GBps | 187.7 GBps | 375.4 GBps
a. DDIMM modules that are attached to one DCM or eSCM must be all of the same size.
Each DDIMM slot is serviced by one OMI link (memory subchannel). The maximum
bandwidth of the system depends on the number of OMI links that are used and the data rate
of the DDIMMs that populate the configured links.
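The bandwidth figures that are listed in Table 2-15 follow directly from the DRAM data rate, assuming (as the table values imply) that each OMI link transfers 8 bytes per DDIMM data-rate transfer. The following command line is a minimal sketch (not part of this publication) that reproduces the per-link, per-eSCM (8 links), and per-DCM (16 links) values for 3200 Mbps DDIMMs:
$ awk 'BEGIN { rate=3200; gbps=rate*8/1000; print gbps " GBps per OMI link, " gbps*8 " GBps per eSCM, " gbps*16 " GBps per DCM" }'
25.6 GBps per OMI link, 204.8 GBps per eSCM, 409.6 GBps per DCM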
Important: For the best possible performance, it is generally recommended that memory
is installed evenly in all memory slots and across all configured processor modules.
Balancing memory across the installed system planar cards enables memory access in a
consistent manner and typically results in better performance for your configuration.
Table 2-16 lists the maximum memory bandwidth for the eSCM-based Power S1014 and
Power S1022s servers, depending on the number of DDIMMs that are used and the DRAM
data rate of the selected DDIMM type. The listing accounts for the minimum memory feature
code order granularity. Unsupported configurations are indicated by a “—” hyphen.
Table 2-16 Maximum memory bandwidth for the Power S1014 and Power S1022s servers
DDIMM quantity | 3200 Mbps DDIMMs, 1-socket (GBps)a | 3200 Mbps DDIMMs, 2-socket (GBps)a | 2666 Mbps DDIMMs, 1-socket (GBps)a | 2666 Mbps DDIMMs, 2-socket (GBps)a
2 | 51 | 51 | 43 | 43
4 | 102 | 102 | 85 | 85
10 | — | 256 | — | 213
12 | — | 307 | — | 256
14 | — | 358 | — | 298
16 | — | 410 | — | 341
a. Numbers are rounded to the nearest integer.
Table 2-17 lists the maximum memory bandwidth for the DCM-based Power S1022 and
Power S1024 servers, depending on the number of DDIMMs that are used, the DRAM data
rate of the selected DDIMM type, and the number of configured sockets. The listing accounts
for the minimum memory feature code order granularity and pertains to configurations that
are based on only one single memory feature code type. Unsupported configurations are
indicated by a “—” hyphen.
Table 2-17 Maximum memory bandwidth for the Power S1022 and Power S1024 servers
DDIMM quantity | Power S1022 and Power S1024, 3200 Mbps DDIMMs, 1-socket (GBps)a | Power S1022 and Power S1024, 3200 Mbps DDIMMs, 2-socket (GBps)a | Power S1024 only, 2933 Mbps DDIMMs, 1-socket (GBps)a | Power S1024 only, 2933 Mbps DDIMMs, 2-socket (GBps)a
2 | 51 | — | 47 | —
4 | 102 | 102 | 94 | 94
6 | 154 | — | 141 | —
10 | 256 | — | 235 | —
14 | 358 | — | 329 | —
18 | — | — | — | —
20 | — | 512 | — | 470
22 | — | — | — | —
24 | — | 614 | — | 564
26 | — | — | — | —
28 | — | 717 | — | 658
30 | — | — | — | —
32 | — | 819 | — | 752
a. Numbers are rounded to the nearest integer.
The Power10 chips are installed in pairs in a DCM or an eSCM, which plugs into a socket on
the system board of these systems.
The following versions of Power10 processor modules are used on the Power10
processor-based scale-out servers:
A DCM in which both chips are fully functional with cores, memory, and I/O.
An eSCM in which the first chip (P0) is fully functional with cores, memory, and I/O and the
second chip (P1) supports I/O only.
The PCIe slot internal connections of a 2-socket DCM-based server are shown in Figure 2-26.
Figure 2-26 PCIe slot internal connections of a 2-socket DCM-based server (slots C0, C3, C4, and C10: PCIe Gen4 x16 or Gen5 x8; slots C2, C7, C9, and C11: PCIe Gen5 x8 with x16 connector; slots C1 and C8: PCIe Gen4 x8 with x16 connector)
All PCIe slots support enhanced error handling (EEH) and hot-plug adapter installation and
maintenance when the service procedures that are started by way of the eBMC or HMC
interfaces are used.
PCIe EEH-enabled adapters respond to a special data packet that is generated from the
affected PCIe slot hardware by calling system firmware, which examines the affected bus,
allows the device driver to reset it, and continues without a system restart.
For Linux, EEH support extends to most of the frequently used devices, although some
third-party PCI devices might not provide native EEH support.
All PCIe adapter slots support hardware-backed network virtualization through single root IO
virtualization (SR-IOV) technology. Configuring an SR-IOV adapter into SR-IOV shared mode
might require more hypervisor memory. If sufficient hypervisor memory is not available, the
request to move to SR-IOV shared mode fails. The user is then instructed to free up extra
memory and attempt the operation again.
The server PCIe slots are allocated DMA space by using the following algorithm (see the sketch after this list):
All slots are allocated a 2 GB default DMA window.
All I/O adapter slots (except the embedded USB) are allocated Dynamic DMA Window
(DDW) capability that is based on installed platform memory. DDW capability is calculated
assuming 4 K I/O mappings. Consider the following points:
– For systems with less than 64 GB of memory, slots are allocated 16 GB of DDW
capability.
– For systems with at least 64 GB of memory, but less than 128 GB of memory, slots are
allocated 32 GB of DDW capability.
– For systems with 128 GB or more of memory, slots are allocated 64 GB of DDW
capability.
– Slots can be enabled with Huge Dynamic DMA Window capability (HDDW) by using
the I/O Adapter Enlarged Capacity setting in the ASMI.
– HDDW enabled slots are allocated enough DDW capability to map all of installed
platform memory by using 64 K I/O mappings.
– Minimum DMA window size for HDDW enabled slots is 32 GB.
– Slots that are HDDW enabled are allocated the larger of the calculated DDW and
HDDW capability.
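The default DDW sizing rules in this list can be summarized in a short helper. The following Bash function is a minimal sketch of those default rules only (it is not an IBM tool and it ignores the HDDW case); the argument is the installed platform memory in GB:
ddw_capability_gb() {
  # Default DDW capability per I/O adapter slot, based on installed platform memory (GB)
  local mem_gb=$1
  if   [ "$mem_gb" -lt 64 ];  then echo 16
  elif [ "$mem_gb" -lt 128 ]; then echo 32
  else                             echo 64
  fi
}
ddw_capability_gb 256    # prints 64 (GB of DDW capability)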
The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as
many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth per lane of a PCIe
Gen4 slot, and PCIe Gen4 slots can support up to twice the bandwidth per lane of a PCIe
Gen3 slot.
The servers are smart about energy efficiency when cooling the PCIe adapter environment.
They sense which IBM PCIe adapters are installed in their PCIe slots and, if an adapter
requires higher levels of cooling, they automatically speed up fans to increase airflow across
the PCIe adapters. Faster fans increase the sound level of the server. Higher wattage PCIe
adapters include the PCIe3 SAS adapters and SSD/flash PCIe adapters (#EJ10, #EJ14, and
#EJ0J).
Table 2-18 lists the available PCIe slot types and the related slot location codes in Power
S1014 server.
Table 2-18 PCIe slot locations for a slot type in the Power S1014 server
Slot type Number of slots Location codes Adapter size
eBMC 1 P0-C5
Table 2-19 lists the PCIe adapter slot locations and related characteristics for the Power
S1014 server.
Table 2-19 PCIe slot locations and capabilities for the Power S1014 server
Location code | Description | Processor module | OpenCAPI capable | I/O adapter enlarged capacity enablement ordera
P0-C5b | eBMC
Figure 2-27 shows the rear view of the Power S1014 server with the location codes for the
PCIe adapter slots.
Figure 2-27 Rear view of a Power S1014 server with PCIe slots location codes
Table 2-20 lists the available PCIe slot types and the related slot location codes in Power
S1022s and S1022 servers.
Table 2-20 PCIe slot locations for a slot type in the Power S1022s and S1022 servers
Slot type Number of slots Location codes Adapter size
eBMC 1 P0-C5
Table 2-21 lists the PCIe adapter slot locations and related characteristics for the Power
S1022s and S1022 servers.
Table 2-21 PCIe slot locations and capabilities for the Power S1022s and S1022 servers
Location code | Description | Processor module | OpenCAPI capable | I/O adapter enlarged capacity enablement ordera
P0-C5b | eBMC
Figure 2-28 shows the rear view of the Power S1022s and S1022 servers with the location
codes for the PCIe adapter slots.
Figure 2-28 Rear view of Power S1022s and S1022 servers with PCIe slots location codes
With one Power10 processor DCM, five PCIe slots are available:
One PCIe x16 Gen4 or x8 Gen5, full-height, half-length slot (CAPI)
Two PCIe x8 Gen5, full-height, half-length slots (with x16 connector) (CAPI)
One PCIe x8 Gen5, full-height, half-length slot (with x16 connector)
One PCIe x8 Gen4, full-height, half-length slot (with x16 connector) (CAPI)
Table 2-22 lists the available PCIe slot types and related slot location codes in the Power
S1024 server.
Table 2-22 PCIe slot locations for each slot type in the Power S1024 server
Slot type Number of slots Location codes Adapter size
eBMC 1 P0-C5
Table 2-23 lists the PCIe adapter slot locations and related characteristics for the Power
S1024 server.
Table 2-23 PCIe slot locations and capabilities for the Power S1024 servers
Location code | Description | Processor module | OpenCAPI capable | I/O adapter enlarged capacity enablement ordera
P0-C5b | eBMC
Figure 2-29 Rear view of a Power S1024 server with PCIe slots location codes
The eBMC is a specialized service processor that monitors the physical state of the system
by using sensors. A system administrator or service representative can communicate with the
eBMC through an independent connection.
To enter the ASMI GUI, you can use the HMC by selecting the server and then selecting
Operations → Launch Advanced System Management. A window opens that displays the
name of the system; model, type, and serial; and the IP of the service processor (eBMC).
Click OK and the ASMI window opens.
If the eBMC is connected to a network that also is accessible from your workstation, you can
connect directly by entering https://<eBMC IP> in your web browser.
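For example, a quick way to verify that the eBMC web interface is reachable from your workstation before you open the browser is to query it with curl; this is an illustrative sketch only (the -k option skips certificate validation, and <eBMC IP> is a placeholder for your address):
$ curl -k -s -o /dev/null -w "%{http_code}\n" https://<eBMC IP>/redfish/v1
An HTTP status code of 200 typically indicates that the eBMC answered the request.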
Figure 2-30 shows the ASMI login window.
When you log in for the first time, the default user name and password are both admin, but the
password is invalidated. That is, after the first login, you must immediately change the admin password.
This change also must be made after a factory reset of the system. This policy change helps
to enforce that the eBMC is not left in a state with a well-known password, which improves the
security posture of the system.
The password must meet specific criteria (for example, a password of abcd1234 is invalid).
For more information about password rules, see this IBM Documentation web page.
The new ASMI for eBMC-managed servers features some important differences from the ASMI
version that is used by FSP-based systems. It also delivers some valuable new features:
Update system firmware
A firmware update can be installed for the server by using the ASMI GUI, even if the
system is managed by an HMC. In this case, the firmware update always is disruptive.
To install a concurrent firmware update, the HMC must be used, which is not possible by
using the ASMI GUI.
Download memory dumps
Memory dumps can be downloaded by using the HMC. You also can download them from
the ASMI menu, if necessary.
It also is possible to start a memory dump from the ASMI. Click Logs → Dumps and then,
select the memory dump type and click Initiate memory dump. The following memory
dump types are available:
– BMC memory dump (nondisruptive)
– Resource memory dump
– System memory dump (disruptive)
Network Time Protocol (NTP) server support
Lightweight directory access protocol (LDAP) for user management
Host console
By using the host console, you can monitor the server’s start process. The host console
also can be used to access the operating system when only a single LPAR uses all of the
resources.
Note: The host console also can be accessed by using an SSH client over port 2200
and logging in as the admin user.
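For example, from a workstation that can reach the eBMC network address, the host console can be opened with a standard SSH client (replace <eBMC IP> with the address of your system):
$ ssh -p 2200 admin@<eBMC IP>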
User management
You can create your own users in the eBMC. This feature also can be used to create an
individual user that can be used for the HMC to access the server.
A user features one of the following privilege levels:
– Administrator
– ReadOnly: The user cannot modify anything, except the password of that user; therefore, a
user with this privilege level cannot be used for HMC access to the server.
IBM security by way of Access Control Files
In FSP-managed servers, the IBM support team generated a password by using the serial
number and the date to provide “root access” to the service processor through the celogin
user.
In eBMC managed systems, the support team generates an Access Control File (ACF).
This file must be uploaded to the server to get access. This procedure is needed (for
example) if the admin password must be reset. This process requires physical access to
the system.
Jumper reset
Everything on the server can be reset by using a physical jumper. This factory reset
process resets everything on the server, such as LPAR definitions, eBMC settings, and the
NVRAM.
The details of a component also can be displayed; for example, the size of a DDIMM or the
part number of a component if something must be exchanged.
Sensors
The ASMI shows data from various sensors that are available within the server and many of its
components; click Hardware status → Sensors. The loading of the sensor data takes
some time, during which you see a progress bar at the top of the window.
Network settings
The default network settings for the two eBMC ports are to use DHCP. Therefore, when you
connect a port to a private HMC network with the HMC as a DHCP server, the new system
receives its IP address from the HMC during the start of the firmware. Then, the new system
automatically appears in the HMC and can be configured.
DHCP is the recommended way to attach the eBMC of a server to the HMC.
If you do not use DHCP and want to use a static IP, you can set the IP in the ASMI GUI.
However, before you can make this change, you must connect to the ASMI. Because no
default IP address exists that is the same for every server, you first must determine the configured IP address.
To determine the configured IP, use the operator window. This optional component includes
the recommendation that one operator window is purchased per rack of Power10
processor-based scale-out servers.
For more information about function 30 in the operator window, see this IBM Documentation
web page.
Now that you have determined the IP address, you can configure any computer with a web
browser to use an IP address in the same subnet (class C) and connect the computer to the
correct Ethernet port of the server.
Hint: Most connections work by using a standard Ethernet cable. If you do not see a link
with the standard Ethernet cable, use a crossover cable where the send and receive wires
are crossed.
After connecting the cable, you can now use a web browser to access the ASMI with
https://<IP address> and then, configure the network port address settings.
To configure the network ports, click Settings → Network and select the correct adapter to
configure.
Figure 2-33 shows an example of changing eth1. Before you can configure a static IP
address, switch off DHCP. Several static IPs can be configured on one physical Ethernet port.
In the ASMI network settings window, you cannot configure the VMI address. The VMI
address is another IP address that is configured on the physical eBMC Ethernet port of the
server to manage the virtualization of the server. The VMI address can be configured in the HMC only.
Policies
In Security and access → Policies, you can switch security related functions on and off; for
example, whether management over Intelligent Platform Management Interface (IPMI) is
enabled.
Some customers require that the USB ports of the server must be disabled. This change can
be made in the Policies window. Switch off Host USB enablement, as shown in Figure 2-34.
2.5.2 Managing the system by using DMTF Redfish
eBMC-based systems also can be managed by using the DMTF Redfish APIs. Redfish is a
REST API that is used for platform management and is standardized by the Distributed
Management Task Force, Inc. For more information, see this web page.
You can work with Redfish by using several methods, all of which require an https connection
to the eBMC. One possibility is to use the curl operating system command. The following
examples show how to work with curl and Redfish.
Before you can acquire data from the server or run systems management tasks by using
Redfish, you must authenticate against the server. In return for supplying a valid username
and password, you receive a token that is used to authenticate requests (see Example 2-1).
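The following curl invocation is a minimal sketch of such a session request that stores the returned token in the TOKEN shell variable. It assumes the standard Redfish SessionService endpoint and a shell variable eBMC that holds the eBMC address; the exact commands and output in Example 2-1 might differ:
$ export eBMC=<eBMC IP>
$ export TOKEN=$(curl -k -s -D - -X POST https://${eBMC}/redfish/v1/SessionService/Sessions \
    -H "Content-Type: application/json" \
    -d '{"UserName": "admin", "Password": "<password>"}' | awk '/X-Auth-Token/ {print $2}' | tr -d '\r')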
With this token, you now can receive data from the server. You start by requesting data of the
Redfish root with /redfish/v1. You receive data with other branches of the Redfish tree; for
example, Chassis.
For more data, you can use the newly discovered odata.id field information
/redfish/v1/Chassis, as shown in Example 2-2.
Under Chassis, another chassis is available (with a lowercase c). We can now use the tree with
both; that is, /redfish/v1/Chassis/chassis. After running the command, you can see in
Example 2-2 on page 107 that PCIeSlots and Sensors are available as examples of other
resources on the server.
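A minimal sketch of these queries, assuming the TOKEN and eBMC variables that were set previously (the formatted output that is shown in Example 2-2 might differ):
$ curl -k -s -H "X-Auth-Token: $TOKEN" https://${eBMC}/redfish/v1
$ curl -k -s -H "X-Auth-Token: $TOKEN" https://${eBMC}/redfish/v1/Chassis
$ curl -k -s -H "X-Auth-Token: $TOKEN" https://${eBMC}/redfish/v1/Chassis/chassis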
In Example 2-3, you see what is available through the Sensors endpoint. Here, you can find
the same sensors as in the ASMI GUI (see Figure 2-32 on page 104).
For example, in the output, you find the sensor total_power. When you ask for more
information about that sensor (see Example 2-3), you can see that the server consumed
1426 watts at the time the command was run. Having programmatic access to this type of data
allows you to build a view of the electrical power consumption of your Power environment in
real time, or to report usage over a period.
"Reading": 1426.0,
"ReadingRangeMax": null,
"ReadingRangeMin": null,
"ReadingType": "Power",
"ReadingUnits": "W",
"Status": {
"Health": "OK",
"State": "Enabled"
}
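A query of the following form is a minimal sketch of how a reading like the fragment that is shown above can be retrieved; the exact sensor URI (here total_power) should be taken from the members of the Sensors collection on your system:
$ curl -k -s -H "X-Auth-Token: $TOKEN" https://${eBMC}/redfish/v1/Chassis/chassis/Sensors/total_power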
Operations also can be run on the server by using the POST method to the Redfish API
interface. The following curl commands can be used to start or stop the server (these
commands work only if you are authenticated as a user with administrator privileges):
Power on server:
# curl -k -H "X-Auth-Token: $TOKEN" -X POST https://${eBMC}/redfish/v1/Systems/system/Actions/Reset -d '{"ResetType":"On"}'
Power off server:
# curl -k -H "X-Auth-Token: $TOKEN" -X POST https://${eBMC}/redfish/v1/Systems/system/Actions/Reset -d '{"ResetType":"ForceOff"}'
For more information about how to work with Redfish in Power systems, see this IBM
Documentation web page.
Because inherent security vulnerabilities are associated with the IPMI, consider the use of
Redfish APIs or the GUI to manage your system.
If you want to use IPMI, this service must be enabled first. This process can be done by
clicking Security and access → Policies. There, you find the policy Network IPMI
(out-of-band IPMI) that must be enabled to support IPMI access.
For more information about common IPMI commands, see this IBM Documentation web
page.
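For example, after the Network IPMI policy is enabled, a standard ipmitool client can reach the eBMC over its LAN interface. The following command is an illustrative sketch only (it is not taken from the IBM documentation page) and uses placeholders for the address and password:
$ ipmitool -I lanplus -H <eBMC IP> -U admin -P <password> chassis status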
Power S1014 servers do not support any Capacity on Demand (CoD) capability; therefore, all
available functional cores of an eSCM type are activated by default.
The 4-core eSCM #EPG0 requires four static processor activation features #EPFT and the
8-core eSCM #EPG2 requires eight static processor activation features #EPF6. To assist with
the optimization of software licensing, the factory deconfiguration feature #2319 is available at
initial order to permanently reduce the number of active cores, if wanted.
Table 3-1 lists the processor card Feature Codes that are available at initial order for
Power S1014 servers.
Table 3-1 Processor card Feature Code specification for the Power S1014 server
Processor card feature code | Processor module type | Number of cores | Typical frequency range (GHz) | Static processor core activation Feature Code
Table 3-2 lists all processor-related Feature Codes for Power S1014 servers.
#EPG0 4-core typical 3.0 to 3.90 GHz (maximum) Power10 processor card
#EPG2 8-core typical 3.0 to 3.90 GHz (maximum) Power10 processor card
Power S1022s servers do not support any CoD capability; therefore, all available functional
cores of an eSCM type are activated by default.
The 4-core eSCM processor module #EPGR requires four static processor activation features
#EPFR, and the 8-core eSCM processor module #EPGQ requires eight static processor
activation features #EPFQ. To assist with the optimization of software licensing, the factory
deconfiguration Feature Code #2319 is available at initial order to permanently reduce the
number of active cores, if wanted.
The Power S1022s server can be configured with one 4-core processor, one 8-core
processor, or two 8-core processors. A configuration with two installed 4-core processors is
not available.
Table 3-3 lists the processor card Feature Codes that are available at initial order for
Power S1022s servers.
Table 3-3 Processor card Feature Code specification for the Power S1022s server
Processor card Feature Code | Processor module type | Number of cores | Typical frequency range (GHz) | Static processor core activation Feature Code
Table 3-4 lists all processor-related Feature Codes for Power S1022s servers.
#EPGR 4-core typical 3.0 to 3.90 GHz (maximum) Power10 processor card
#EPGQ 8-core typical 3.0 to 3.90 GHz (maximum) Power10 processor card
The 12-core #EPG9 DCM can be used in 1-socket or 2-socket Power S1022 configurations.
The higher core density modules with 16 or 20 functional cores are available only in 2-socket
configurations and both sockets must be populated by the same processor feature.
Power S1022 servers support the Capacity Upgrade on Demand (CUoD) capability by
default. At an initial order, a minimum of 50% of configured physical processor cores must be
covered by CUoD static processor core activations:
The 12-core DCM processor module #EPG9 requires a minimum of six CUoD static
processor activation features #EPF9 in a 1-socket and a minimum of 12 #EPF9 features in
a 2-socket configuration.
Extra CUoD static activations can be purchased later after the initial order until all physically
present processor cores are entitled.
To assist with the optimization of software licensing, the factory deconfiguration feature #2319
is available at initial order to permanently reduce the number of active cores that are below
the imposed minimum of 50% CUoD static processor activations, if wanted.
As an alternative to the CUoD processor activation use model and to enable cloud agility and
cost optimization with pay-for-use pricing, the Power S1022 server supports the IBM Power
Private Cloud with Shared Utility Capacity solution (also known as Power Enterprise Pools 2.0
or Pools 2.0). This solution is configured at initial system order by including Feature Code
#EP20.
When configured as a Power Private Cloud system, each Power S1022 server requires a
minimum of one base processor core activation. The maximum number of base processor
activations is limited by the physical capacity of the server.
Although configured against a specific server, the base activations can be aggregated across
a pool of servers and used on any of the systems in the pool. When a system is configured in
this way, all processor cores that are installed in the system become available for use. Any
usage above the base processor activations that are purchased across a pool is monitored by
the IBM Cloud Management Console for Power and is debited from the customer's cloud
capacity credits, or is invoiced monthly for total usage across a pool of systems.
A system that is initially ordered with a configuration that is based on the CUoD processor
activations can be converted to the Power Private Cloud with Shared Utility Capacity model
later. This process requires the conversion of existing CUoD processor activations to base
activations, which include different feature codes. The physical processor feature codes do
not change.
A system cannot be converted from the Power Private Cloud with Shared Utility Capacity
model to CUoD activations.
Table 3-5 lists the processor card feature codes that are available at initial order for
Power S1022 servers.
Table 3-5 Processor feature code specification for the Power S1022 server
Processor card feature code | Processor module type | Number of cores | Typical frequency range [GHz] | CUoDa static processor core activation Feature Code | Base processor core activation Feature Code for Pools 2.0 | Base core activations converted from CUoD static activations
Table 3-6 lists all processor-related feature codes for Power S1022 servers.
#EPG9 12-core typical 2.90 to 4.0 GHz (maximum) Power10 processor card, available in
quantity of one (1-socket configuration) or two (2-socket configuration)
#EPG8 16-core typical 2.75 to 4.0 GHz (maximum) Power10 processor card, available in
quantity of two (2-socket configuration) only
#EPGA 20-core typical 2.45 to 3.90 GHz (maximum) Power10 processor card, available in
quantity of two (2-socket configuration) only
#EUCB One base processor core activation on processor card #EPG9 for Pools 2.0 to
support any operating system
#EUCA One base processor core activation on processor card #EPG8 for Pools 2.0 to
support any operating system
#EUCC One base processor core activation on processor card #EPGA for Pools 2.0 to
support any operating system
#EUCH One base processor core activation on processor card #EPG9 for Pools 2.0 to
support any operating system (converted from #EPF9)
#EUCG One base processor core activation on processor card #EPG8 for Pools 2.0 to
support any operating system (converted from #EPF8)
#EUCJ One base processor core activation on processor card #EPGA for Pools 2.0 to
support any operating system (converted from #EPFA)
The 12-core #EPGM DCM can be used in 1-socket or 2-socket Power S1024 configurations.
The higher core density modules with 16 or 24 functional cores are available only for 2-socket
configurations and both sockets must be populated by the same processor feature.
Power S1024 servers support the CUoD capability by default. At an initial order, a minimum of
50% of configured physical processor cores must be covered by CUoD static processor core
activations:
The 12-core DCM processor module #EPGM requires a minimum of six CUoD static
processor activation features #EPFM in a 1-socket and 12 #EPFM features in a 2-socket
configuration.
To assist with the optimization of software licensing, the factory deconfiguration feature #2319
is available at initial order to permanently reduce the number of active cores that are below
the imposed minimum of 50% CUoD static processor activations, if wanted.
As an alternative to the CUoD processor activation use model and to enable cloud agility and
cost optimization with pay-for-use pricing, the Power S1024 server also supports the IBM
Power Private Cloud with Shared Utility Capacity solution (also known as Power Enterprise
Pools 2.0, or just Pools 2.0). This solution is configured at initial system order by including
Feature Code #EP20.
When configured as a Power Private Cloud system, each Power S1024 server requires a
minimum of one base processor core activation. The maximum number of base processor
activations is limited by the physical capacity of the server.
Although configured against a specific server, the base activations can be aggregated across
a pool of servers and used on any of the systems in the pool. When a system is configured in
this way, all processor cores that are installed in the system become available for use. Any
usage above the base processor activations that are purchased across a pool is monitored by
the IBM Cloud Management Console for Power and is debited from the customer's cloud
capacity credits, or is invoiced monthly for total usage across a pool of systems.
A system that is initially ordered with a configuration that is based on the CUoD processor
activations can be converted to the Power Private Cloud with Shared Utility Capacity model
later. This process requires the conversion of existing CUoD processor activations to base
activations, which include different feature codes. The physical processor feature codes do
not change.
A system cannot be converted from the Power Private Cloud with Shared Utility Capacity
model to CUoD activations.
Table 3-7 lists the processor card feature codes that are available at initial order for
Power S1024 servers.
Table 3-7 Processor feature code specification for the Power S1024 server
Processor card feature code | Processor module type | Number of cores | Typical frequency range [GHz] | CUoD static processor core activation Feature Code | Base processor core activation Feature Code for Pools 2.0 | Base core activations converted from CUoD static activations
Table 3-8 lists all processor-related feature codes for Power S1024 servers.
#EPGM 12-core typical 3.40 to 4.0 GHz (maximum) Power10 processor card, available in
quantity of one (1-socket configuration) or two (2-socket configuration)
#EPGC 16-core typical 3.10 to 4.0 GHz (maximum) Power10 processor card, available in
quantity of two (2-socket configuration) only
#EPGD 24-core typical 2.75 to 3.9 GHz (maximum) Power10 processor card, available in
quantity of two (2-socket configuration) only
#EUBX One base processor core activation on processor card #EPGM for Pools 2.0 to
support any operating system
#EUCK One base processor core activation on processor card #EPGC for Pools 2.0 to
support any operating system
#EUCL One base processor core activation on processor card #EPGD for Pools 2.0 to
support any operating system
#EUBZ One base processor core activation on processor card #EPGM for Pools 2.0 to
support any operating system (converted from #EPFM)
#EUCR One base processor core activation on processor card #EPGC for Pools 2.0 to
support any operating system (converted from #EPFC)
#EUCT One base processor core activation on processor card #EPGD for Pools 2.0 to
support any operating system (converted from #EPFD)
Table 3-9 Memory Feature Codes for Power10 processor-based scale-out servers
Feature code | Capacity | Packaging | DRAM density | DRAM data rate | Form factor | Supported servers
The memory module cards for the scale-out servers are manufactured in two different form
factors, which are used in servers with 2 rack units (2U) or 4 rack units (4U). The 2U memory
cards can be extended through spacers for use in 4U servers, but the 4U high cards do not fit
in 2U servers.
All Power10 processor-based scale-out servers can use the following configurations:
2U 16 GB capacity DDIMMs of memory feature #EM6N
2U high 32 GB capacity DDIMMs of memory feature #EM6W
2U high 64 GB capacity DDIMMs of memory feature #EM6X.
The 2U 128 GB capacity DDIMMs of feature #EM6Y can be used in all of the Power10
scale-out servers except for Power S1024 systems. The 4U high 128 GB capacity DDIMMs of
feature #EM6U and the 4U high 256 GB capacity DDIMMs of feature #EM78 are exclusively
provided for Power S1024 servers.
All memory slots that are connected to a DCM or an eSCM must be fitted with DDIMMs of the
same memory feature code:
For 1-socket Power10 scale-out server configurations, all memory modules must be of the
same capacity, DRAM density, DRAM data rate and form factor.
For 2-socket Power10 scale-out server configurations two different memory feature codes
can be selected, but the memory slots that are connected to a socket must be filled with
DDIMMs of the same memory feature code, which implies that they are of identical
specifications.
The minimum memory capacity limit is 32 GB per eSCM or DCM processor module that can
be fulfilled by one #EM6N memory feature.
No specific memory enablement features are required for any of the supported Power10
scale-out server memory features. The entire physical DDIMM capacity of a memory
configuration is enabled by default.
All Power10 processor-based scale-out servers (except the Power S1014) support the Active
Memory Mirroring (AMM) feature #EM8G. AMM is available as an optional feature to enhance
resilience by mirroring critical memory that is used by the PowerVM hypervisor so that it can
continue operating if a memory failure occurs.
A portion of available memory can be operatively partitioned such that a duplicate set can be
used if noncorrectable memory errors occur. This partitioning can be implemented at the
granularity of DDIMMs or logical memory blocks.
The Power S1022s server supports two 2000 W 200 - 240 V AC power supplies (#EB3N).
Two power supplies are always installed. One power supply is required during the boot phase
and for normal system operation, and the second is for redundancy.
The Power S1022 server supports two 2000 W 200 - 240 V AC power supplies (#EB3N). Two
power supplies are always installed. One power supply is required during the boot phase and
for normal system operation, and the second is for redundancy.
The Power S1024 server supports four 1600 W 200 - 240 V AC (#EB3S) power supplies. Four
power supplies are always installed. Two power supplies are required during the boot phase
and for normal system operation, and the third and fourth are for redundancy.
This list is based on the PCIe adapters available on the General Availability (GA) date of
these systems, but is subject to change as more PCIe adapters are tested and certified, or
listed adapters are no longer available.
For more information about the supported adapters, see the IBM Offering Information web
page.
The Order type table column in the following subsections is defined as:
Initial Denotes the orderability of a feature only with the purchase of a new
system.
MES Denotes the orderability of a feature only as part of an MES upgrade
purchase for an existing system.
Both Denotes the orderability of a feature as part of new and MES upgrade
purchases.
Supported Denotes that the feature is not orderable with a system, but is supported;
that is, the feature can be migrated from existing systems, but cannot
be ordered new.
Table 3-10 lists the low profile (LP) LAN adapters that are supported within the Power S1022s
and Power S1022 server models.
Table 3-10 Low profile LAN adapters that are supported in the S1022s and S1022
Feature code | CCIN | Description | Operating system support | Order type
5260 576F PCIe2 LP 4-port 1 GbE Adapter AIX, Linux, IBM ia Supported
EN0T 2CC3 PCIe2 LP 4-Port (10 Gb+1 GbE) AIX, Linux, IBM ia Supported
SR+RJ45 Adapter
EN0V 2CC3 PCIe2 LP 4-port (10 Gb+1 GbE) Copper AIX, Linux, IBM ia Supported
SFP+RJ45 Adapter
EN0X 2CC4 PCIe2 LP 2-port 10/1 GbE BaseT RJ45 AIX, Linux, IBM ia Both
Adapter
a. The IBM i operating system is supported through VIOS only.
b. The #EC2T adapter requires one or two suitable transceivers to provide 10 Gbps SFP+
(#EB46), 25 Gbps SFP28 (#EB47), or 1 Gbps RJ45 (#EB48) connectivity as required.
c. Linux support requires Red Hat Enterprise Linux 8.4 or later, Red Hat Enterprise Linux for SAP
8.4 or later, SUSE Linux Enterprise Server 15 Service Pack 3 or later, SUSE Linux Enterprise
Server for SAP with SUSE Linux Enterprise Server 15 Service Pack 3 or later, or Red Hat
OpenShift Container Platform 4.9 or later. All require Mellanox OFED 5.5 drivers or later.
d. To deliver the full performance of both ports, each 100 Gbps Ethernet adapter must be
connected to a PCIe slot with 16 lanes (x16) of PCIe Gen4 connectivity. In the Power S1022s
and Power S1022 server models this limits placement to PCIe slots C0, C3, C4, and C10. In
systems with only a single socket populated, a maximum of one 100 Gbps Ethernet adapter is
supported. The 100 Gbps Ethernet adapters are not supported in PCIe expansion drawers.
Table 3-11 lists the full-height LAN adapters that are supported within the Power S1014 and
Power S1024 server models, and within the PCIe expansion drawer (EMX0) connected to any
of the Power10 processor-based scale-out server models.
Table 3-11 Full-height LAN adapters supported in the S1014, S1024, and PCIe expansion drawers
Feature code | CCIN | Description | Operating system support | Order type
5899 | 576F | PCIe2 4-port 1 GbE Adapter | AIX, Linux, IBM i (a) | Supported
EN0S | 2CC3 | PCIe2 4-Port (10 Gb+1 GbE) SR+RJ45 Adapter | AIX, Linux, IBM i (through VIOS) | Supported
EN0U | 2CC3 | PCIe2 4-port (10 Gb+1 GbE) Copper SFP+RJ45 Adapter | AIX, Linux, IBM i (through VIOS) | Supported
EN0W | 2CC4 | PCIe2 2-port 10/1 GbE BaseT RJ45 Adapter | AIX, Linux, IBM i (through VIOS) | Both
a. When this adapter is installed in an expansion drawer that is connected to an S1022s or S1022
server, IBM i is supported through VIOS only.
b. The #EC2U adapter requires one or two suitable transceivers to provide 10 Gbps SFP+
(#EB46), 25 Gbps SFP28 (#EB47), or 1 Gbps RJ45 (#EB48) connectivity as required.
c. Linux support requires Red Hat Enterprise Linux 8.4 or later, Red Hat Enterprise Linux for SAP
8.4 or later, SUSE Linux Enterprise Server 15 Service Pack 3 or later, SUSE Linux Enterprise
Server for SAP with SUSE Linux Enterprise Server 15 Service Pack 3 or later, or Red Hat
OpenShift Container Platform 4.9 or later. All require Mellanox OFED 5.5 drivers or later.
Two full-height LAN adapters with 100 Gbps connectivity are available that are supported only
when they are installed within the Power S1014 or Power S1024 server models. To deliver the
full performance of both ports, each 100 Gbps Ethernet adapter must be connected to a PCIe
slot with 16 lanes (x16) of PCIe Gen4 connectivity.
In the Power S1014 or the Power S1024 with only a single socket that is populated, this
requirement limits placement to PCIe slot C10. In the Power S1024 with both sockets
populated, this requirement limits placement to PCIe slots C0, C3, C4, and C10. These 100
Gbps Ethernet adapters are not supported in PCIe expansion drawers.
Table 3-12 Full-height 100 Gbps LAN adapters that are supported in the S1014 and S1024 only
Feature code | CCIN | Description | Operating system support | Order type
EC66 | 2CF3 | PCIe4 2-port 100 Gb ROCE EN adapter | AIX, Linux, IBM i (through VIOS) (a) | Both
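To illustrate the placement rules that are described before Table 3-12, the following minimal Python sketch encodes them as a simple check. It is an illustration only, not an IBM configuration tool; the function name and data structure are hypothetical.

# Hypothetical helper (not an IBM tool) that encodes the 100 Gbps Ethernet
# adapter placement rules for the Power S1014 and S1024 servers that are
# described before Table 3-12.

ALLOWED_X16_GEN4_SLOTS = {
    ("S1014", 1): {"C10"},                    # single socket: slot C10 only
    ("S1024", 1): {"C10"},                    # single socket populated
    ("S1024", 2): {"C0", "C3", "C4", "C10"},  # both sockets populated
}

def placement_is_valid(model, populated_sockets, proposed_slots):
    """Return True if every proposed slot offers x16 PCIe Gen4 connectivity.

    The adapters are not supported in PCIe expansion drawers, so only
    internal slot location codes are checked.
    """
    allowed = ALLOWED_X16_GEN4_SLOTS.get((model, populated_sockets), set())
    return all(slot in allowed for slot in proposed_slots)

print(placement_is_valid("S1024", 2, ["C0", "C10"]))  # True
print(placement_is_valid("S1024", 1, ["C3"]))         # False: C10 only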
All supported Fibre Channel adapters feature LC connections. If you are attaching a switch or
a device with an SC type fiber connector, an LC-SC 50-Micron Fibre Converter Cable or an
LC-SC 62.5-Micron Fiber Converter Cable is required.
Table 3-13 lists the low profile Fibre Channel adapters that are supported within the
Power S1022s and Power S1022 server models.
Table 3-13 Low profile FC adapters that are supported in the S1022s and S1022
Feature code | CCIN | Description | Operating system support | Order type
EN1B | 578F | PCIe3 LP 32 Gb 2-port Fibre Channel Adapter | AIX, Linux, IBM i (through VIOS) | Both
Table 3-14 lists the full-height Fibre Channel adapters that are supported within the
Power S1014 and Power S1024 server models, and within the PCIe expansion drawer
(EMX0) that is connected to any of the Power10 processor-based scale-out server models.
Table 3-14 Full-height FC adapters supported in the S1014, S1024, and PCIe expansion drawers
Feature code | CCIN | Description | Operating system support | Order type
EN1A | 578F | PCIe3 32 Gb 2-port Fibre Channel Adapter | AIX, Linux, IBM i | Both
Table 3-15 lists the low profile SAS adapters that are supported within the Power S1022s and
Power S1022 server models.
Table 3-15 Low profile SAS adapters that are supported in the S1022s and S1022
Feature code | CCIN | Description | Operating system support | Order type
EJ0M | 57B4 | PCIe3 LP RAID SAS Adapter Quad-Port 6 Gb x8 | AIX, Linux, IBM i (through VIOS) | Both
Table 3-16 Full-height SAS adapters supported in the S1014, S1024, and PCIe expansion drawers
Feature code | CCIN | Description | Operating system support | Order type
EJ0J | 57B4 | PCIe3 RAID SAS Adapter Quad-Port 6 Gb x8 | AIX, Linux, IBM i (through VIOS) | Both
EJ10 | 57B4 | PCIe3 SAS Tape/DVD Adapter Quad-port 6 Gb x8 | AIX, Linux, IBM i (through VIOS) |
EJ14 | 57B1 | PCIe3 12 GB Cache RAID PLUS SAS Adapter Quad-port 6 Gb x8 | AIX, Linux, IBM i (through VIOS) |
Table 3-17 lists the low profile USB adapter that is supported within the Power S1022s and
Power S1022 server models.
Table 3-17 Low profile USB adapter that is supported in the S1022s and S1022
Feature code | CCIN | Description | Operating system support | Order type
EC6J | 590F | PCIe2 LP 2-Port USB 3.0 Adapter | AIX, Linux, IBM i (through VIOS) | Both
Table 3-18 lists the full-height USB adapter that is supported within the Power S1014 and
Power S1024 server models, and within the PCIe expansion drawer (EMX0) connected to any
of the Power10 processor-based scale-out server models.
Table 3-18 Full-height USB adapter supported in the S1014, S1024, and PCIe expansion drawers
Feature code | CCIN | Description | Operating system support | Order type
EC6K | 590F | PCIe2 2-Port USB 3.0 Adapter | AIX, Linux, IBM i (through VIOS) | Both
For more information about the cryptographic coprocessors, the available associated
software, and the available CCA, see this IBM Security® web page.
PCIe Gen3 cryptographic coprocessor 4767
This secure-key adapter provides cryptographic coprocessor and accelerator functions in a single
PCIe card. The adapter is suited to applications that require high-speed, security-sensitive
RSA acceleration; cryptographic operations for data encryption and digital signing; secure
management and use of cryptographic keys; or custom cryptographic applications.
This adapter is available only in full-height form factor, and is available in two variations with
two different Feature Codes:
#EJ32 does not include a Blind Swap Cassette (BSC) and can be installed only within the
chassis of a Power S1014 or Power S1024 server.
#EJ33 includes a Blind Swap Cassette housing, and can be installed only in a PCIe Gen3
I/O expansion drawer enclosure. This option is supported only for the Power S1022s and
Power S1022 server models.
The hardened encapsulated subsystem contains redundant IBM PowerPC® 476 processors;
custom symmetric key and hashing engines to perform AES, DES, TDES, SHA-1, SHA-2,
MD5, and HMAC; and public key cryptographic algorithm support for RSA and Elliptic Curve
Cryptography.
Other hardware support includes a secure real-time clock, a hardware random number
generator, and a prime number generator. It also contains a separate service processor that
is used to manage self-test and firmware updates. The secure module is protected by a
tamper responding design that protects against various system attacks.
It includes acceleration for: AES; DES; Triple DES; HMAC; CMAC; MD5; multiple SHA
hashing methods; modular-exponentiation hardware, such as RSA and ECC; and full-duplex
direct memory access (DMA) communications.
The IBM 4769 is verified by NIST at FIPS 140-2 Level 4, the highest level of certification that
is achievable as of this writing for commercial cryptographic devices.
This adapter is available only in full-height form factor, and is available in two variations with
two different Feature Codes:
#EJ35 does not include a Blind Swap Cassette (BSC) and can be installed only within the
chassis of a Power S1014 or Power S1024 server.
#EJ37 includes a Blind Swap Cassette housing, and can be installed only in a PCIe Gen3
I/O expansion drawer enclosure. This option is supported only for the Power S1022s and
Power S1022 server models.
Table 3-19 Cryptographic adapters supported in the Power S1014, S1024, and PCIe expansion drawer
Feature code | CCIN | Description | Operating system support | Order type
EJ32 | 4767 | PCIe3 Crypto Coprocessor no BSC 4767 (S1014 or S1024 chassis only) | AIX, Linux, IBM i (Direct only) (a) | Both
EJ35 | C0AF | PCIe3 Crypto Coprocessor no BSC 4769 (S1014 or S1024 chassis only) | AIX, Linux, IBM i (Direct only) |
General PCIe slots (C10/C8 and C11) support NVMe just a bunch of flash (JBOF) adapters
and are cabled to the NVMe backplane. Each NVMe JBOF card contains a 52-lane PCIe
Gen4 switch. The connected NVMe devices are individually addressable, and can be
allocated individually to LPARs that are running on the system.
3.5.1 S1022s and S1022 Backplane
The Storage backplane with four NVMe U.2 drive slots (#EJ1X) is the base storage
backplane. The internal NVMe is attached to the processor by using a plug-in PCIe NVMe
JBOF card that is included with each storage backplane.
Up to 2 NVMe JBOF cards can be populated in the Power S1022s and S1022 servers with a
1:1 correspondence between the card and the storage backplane. Each JBOF card contains
four connectors that are cabled to connectors on a single 4-device backplane, with each cable
containing signals for two NVMe devices. Only two cables are installed to support a total of
four devices per backplane.
The NVMe JBOF card and storage backplane connection is shown in Figure 3-1.
Figure 3-1 NVMe JBOF card (PCIe Gen4 x16) cabled to the U.2 drive slots on the storage backplane
The NVMe JBOF card is treated as a regular cable card, with similar enhanced error handling
(EEH) support to a planar switch. The card is not concurrently maintainable because of the
cabling that is required to the NVMe backplane.
Up to two NVMe JBOF cards can be populated in the Power S1014 and S1024 servers with a
1:1 correspondence between the card and the storage backplane. Each JBOF card contains
four connectors that are cabled to four connectors on a single 8-device backplane, with each
cable containing signals for two NVMe devices.
Figure 3-2 NVMe JBOF card and storage backplane connection in the Power S1014 and S1024 servers
The NVMe JBOF card is treated as a regular cable card, with similar EEH support to a
planar switch. The card is not concurrently maintainable because of the cabling that is
required to the NVMe backplane.
PCIe slots C8 and C10 can be cabled only to NVMe backplane P1, and PCIe slot C11 can be
cabled only to NVMe backplane P2. A JBOF card can never be plugged into a lower-numbered
slot than an OpenCAPI adapter.
Table 3-21 lists the NVMe JBOF card slots that are cabled to NVMe backplanes under various
configurations.
NVMe backplane P1 (left) | NVMe backplane P2 (middle)
Each connector on the JBOF card cables to the corresponding connector on the backplane:
C0 provides signaling for NVMe drives 0 and 1
C1 provides signaling for drives 2 and 3
C2 provides signaling for drives 4 and 5
C3 provides signaling for drives 6 and 7
In the Power S1022s and S1022 servers, only C1 and C2 are connected. The other
connectors on the JBOF and backplane are left unconnected.
Figure 3-3 shows the connector numbering on the NVMe JBOF card on the left and the
NVMe backplane on the right.
T0 = NVMe C0/C1; T1 = NVMe C2/C3; T3 = NVMe C4/C5; T4 = NVMe C6/C7
Figure 3-3 Connector locations for JBOF card and NVMe backplane
For more information about the U.2 form factor NVMe storage devices, see 3.8, “Disk and
media features” on page 134.
Table 3-22 PCIe-based NVMe storage devices for the Power S1022s and S1022 servers
Feature code | CCIN | Description | Minimum | Maximum | Operating system support
Table 3-23 lists the PCIe-based NVMe storage devices that are available for the Power S1014
server.
Table 3-23 PCIe-based NVMe storage adapters for the Power S1014 server
Feature code | CCIN | Description | Minimum | Maximum | Operating system support
Table 3-24 lists the PCIe-based NVMe storage devices that are available for the Power S1024
server.
Table 3-24 PCIe-based NVMe storage devices for the Power S1024 server
Feature code | CCIN | Description | Minimum | Maximum | Operating system support
Several protection options are available for hard disk drives (HDDs) or SSDs that are in
disk-only I/O drawers. Although protecting drives is always preferred, AIX and Linux users can
choose to leave a few or all drives unprotected at their own risk. IBM supports these
configurations.
3.7 Media drawers
The IBM System Storage 7226 Model 1U3 Multi-Media Enclosure can accommodate up to
two LTO tape drives, two RDX removable disk drive docking stations, or up to four DVD-RAM
drives. The 7226 offers SAS, USB, and FC electronic interface drive options for attachment to
the Power S1014, S1022s, S1022, and S1024 servers.
For more information about the 7226-1U3 multi-media expansion enclosure and supported
options, see 3.10.4, “Useful rack additions” on page 153.
The RDX USB External Docking Station attaches to a Power server by way of a USB cable,
which carries data and control information. It is not powered by the USB port on the Power
server or Power server USB adapter, but has a separate electrical power cord.
Physically, the docking station is a stand-alone enclosure that is approximately 2.0 x 7.0 x
4.25 inches and can sit on a shelf or on top of equipment in a rack.
General PCIe slots (C10/C8, C11) support NVMe JBOF cards that are cabled to an NVMe
backplane. NVMe JBOF cards contain a 52-lane PCIe Gen4 switch.
The Power S1014 and S1024 servers also support an optional internal RDX drive that is
attached by way of the USB controller.
Table 3-26 lists the available internal storage options that can be installed in the Power S1014
and S1024 servers.
Table 3-26 Internal storage options in the Power S1014 and S1024 servers
Feature code | Description | Maximum
The Power S1014 and S1024 servers with two storage backplanes and RDX drive are shown
in Figure 3-4 on page 135.
Figure 3-4 The Power S1014 and S1024 servers with two storage backplanes and RDX drive
Table 3-27 lists the available U.2 form factor NVMe drive Feature Codes for the Power S1014
and S1024 servers. These codes are different from the PCIe based NVMe storage devices
that can be installed in the PCIe slots in the rear of the server. For more information about the
available PCIe-based NVMe adapters, see 3.5.4, “NVMe support” on page 129.
Table 3-27 U.2 form factor NVMe device features in the Power S1014 and S1024 servers
Feature code | CCIN | Description | Minimum | Maximum | Operating system support
EC5V | 59BA | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
EC5X | 59B7 | Mainstream 800 GB SSD PCIe3 NVMe U.2 module for AIX/Linux | 0 | 4 | AIX and Linux
EKF3 | 5B52 | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
EKF5 | 5B51 | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
EKF7 | 5B50 | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
ES1E | 59B8 | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
ES1F | 59B8 | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for IBM i | 0 | 16 | AIX and IBM i (b)
ES1G | 59B9 | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
ES3B | 5B34 | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
ES3D | 5B51 | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
ES3F | 5B50 | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 16 | AIX, IBM i (a), and Linux
Table 3-28 lists the available internal storage option that can be installed in the Power S1022s
and S1022 servers.
Table 3-28 Internal storage option in the Power S1022s and S1022 servers
Feature code | Description | Maximum
EJ1X | Storage backplane with four NVMe U.2 drive slots | 2 (a)
a. Each backplane ships with one NVMe JBOF card that plugs into a PCIe slot.
Table 3-29 lists the available U.2 form factor NVMe drive Feature Codes for the
Power S1022s and S1022 servers. These codes are different from the PCIe based NVMe
storage devices that can be installed in the PCIe slots in the rear of the server. For more
information about the available PCIe-based NVMe adapters, see 3.5.4, “NVMe support” on
page 129.
Table 3-29 U.2 form factor NVMe device features in the Power S1022s and S1022 servers
Feature code | CCIN | Description | Minimum | Maximum | Operating system support
The Stand-alone USB DVD drive (#EUA5) is an optional, stand-alone external USB-DVD
device. This device includes a USB cable. The cable provides the data path and power to this
drive.
A SAS backplane is not supported on the Power S1014, S1022s, S1022, and S1024 servers.
SAS drives can be placed only in IBM EXP24SX SAS Storage Enclosures, which are
connected to the system units by using serial-attached SCSI (SAS) ports on PCIe-based SAS
adapters.
For more information about the available SAS adapters, see 3.4.3, “SAS adapters” on
page 123.
If you need more directly connected storage capacity than is available within the internal
NVMe storage device bays, you can attach external disk subsystems to the Power S1014,
S1022s, S1022, and S1024 servers:
EXP24SX SAS Storage Enclosures
IBM System Storage
The PCIe Gen3 I/O Expansion Drawer has two redundant, hot-plug power supplies. Each
power supply has its own separately ordered power cord. The two power cords plug into a
power supply conduit that connects to the power supply. The single-phase AC power supply is
rated at 1030 W and can use 100 - 120 V or 200 - 240 V. If 100 - 120 V is used, the maximum
is 950 W. It is a best practice that the power supply connects to a power distribution unit
(PDU) in the rack. IBM Power PDUs are designed for a 200 - 240 V electrical source.
A blind swap cassette (BSC) is used to house the full-height adapters that are installed in
these slots. The BSC is the same BSC that is used with previous generation 12X attached I/O
drawers (#5802, #5803, #5877, and #5873). The drawer includes a full set of BSCs, even if
the BSCs are empty.
Concurrent repair, and adding or removing PCIe adapters, is done by HMC-guided menus or
by operating system support utilities.
Figure 3-6 shows the back view of the PCIe Gen3 I/O expansion drawer.
Figure 3-6 Rear view of the PCIe Gen3 I/O expansion drawer
Figure 3-7 Rear view of a PCIe Gen3 I/O expansion drawer with PCIe slots location codes
Table 3-30 lists the PCI slots in the PCIe Gen3 I/O expansion drawer that is equipped with two
PCIe3 6-slot fan-out modules.
Table 3-30 PCIe slot locations for the PCIe Gen3 I/O expansion drawer with two fan-out modules
Slot | Location code | Description
Consider the following points about Table 3-30 on page 140:
All slots support full-length, full-height adapters or short (LP) adapters with a full-height tail
stock in single-wide, Gen3, BSC.
Slots C1 and C4 in each PCIe3 6-slot fan-out module are x16 PCIe3 buses, and slots C2,
C3, C5, and C6 are x8 PCIe buses.
All slots support enhanced error handling (EEH).
All PCIe slots are hot-swappable and support concurrent maintenance.
Table 3-31 lists the maximum number of I/O drawers that are supported and the total number
of PCI slots that are available to the server.
Table 3-31 Maximum number of I/O drawers that are supported and total number of PCI slots
Server | Maximum number of I/O expansion drawers | Maximum number of I/O fan-out modules | Maximum PCIe slots
Table 3-32 lists the available converter adapter that can be installed in the Power S1022s and
S1022 servers.
Table 3-32 Available converter adapter in the Power S1022s and S1022
Feature code | Slot priorities (one processor) | Maximum number of adapters supported (one processor) | Slot priorities (two processors) | Maximum number of adapters supported (two processors)
EJ24 (a) | 10 | 1 | 3, 0, 4, 10 | 4
a. Single-wide, low-profile.
Table 3-33 Available converter adapter in the Power S1014 and S1024
Feature code | Slot priorities (one processor) | Maximum number of adapters supported (one processor) | Slot priorities (two processors) | Maximum number of adapters supported (two processors)
EJ24 (a) | 10 | 1 | 3, 0, 4, 10 | 4
a. Single-wide, full-height.
The PCIe3 x16 to CXP Converter Adapter (#EJ24) is shown in Figure 3-8.
Although these cables are not redundant, the loss of one cable reduces the I/O bandwidth
(that is, the number of lanes that are available to the I/O module) by 50%.
A minimum of one PCIe3 x16 to CXP Converter adapter for PCIe3 Expansion Drawer is
required to connect to the PCIe3 6-slot fan-out module in the I/O expansion drawer. The
fan-out module has two CXP ports. The top CXP port of the fan-out module is cabled to the
top CXP port of the PCIe3 x16 to CXP Converter adapter. The bottom CXP port of the fan-out
module is cabled to the bottom CXP port of the same PCIe3 x16 to CXP Converter adapter.
Figure 3-9 shows the connector locations for the PCIe Gen3 I/O Expansion Drawer.
Figure 3-9 Connector locations for the PCIe Gen3 I/O expansion drawer
PCIe Gen3 I/O expansion drawer system power control network cabling
No separate system power control network (SPCN) cabling is used to control and monitor the
status of power and cooling within the I/O drawer; the SPCN capabilities are integrated into
the optical cables.
The EXP24SX drawer is a storage expansion enclosure with 24 2.5-inch SFF SAS drive bays.
It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA rack units (2U) of space in a 19-inch
rack. The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.
Figure 3-11 shows the EXP24SX drawer.
With AIX/Linux/VIOS, the EXP24SX can be ordered as configured with four sets of 6 bays
(mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). With IBM i, one set of
24 bays (mode 1) is supported. It is possible to change the mode setting in the field by using
software commands along with a documented procedure.
Figure 3-12 Front view of the ESLS storage enclosure with mode groups and drive locations
Four mini-SAS HD ports on the EXP24SX are attached to PCIe Gen3 SAS adapters. The
following PCIe3 SAS adapters support the EXP24SX:
PCIe3 RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0J)
PCIe3 12 GB Cache RAID Plus SAS Adapter Quad-port 6 Gb x8 (#EJ14)
PCIe3 LP RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0M)
The attachment between the EXP24SX drawer and the PCIe Gen 3 SAS adapter is through
SAS YO12 or X12 cables. The PCIe Gen 3 SAS adapters support 6 Gb throughput. The
EXP24SX drawer can support up to 12 Gb throughput if future SAS adapters support that
capability.
20 M 100 GbE Optical Cable QSFP28 (AOC) (#EB5V)
30 M 100 GbE Optical Cable QSFP28 (AOC) (#EB5W)
50 M 100 GbE Optical Cable QSFP28 (AOC) (#EB5X)
Six SAS connectors are at the rear of the EXP24SX drawer, to which SAS adapters or
controllers are attached. They are labeled T1, T2, and T3; there are two T1, two T2, and two
T3 connectors. Consider the following points:
In mode 1, two or four of the six ports are used. Two T2 ports are used for a single SAS
adapter, and two T2 and two T3 ports are used with a paired set of two adapters or a
dual-adapter configuration.
In mode 2 or mode 4, four ports are used: two T2 and two T3 connectors access all of the
SAS bays.
The T1 connectors are not used.
Figure 3-13 shows the connector locations for the EXP24SX storage enclosure.
Figure 3-13 Rear view of the EXP24SX with location codes and different split modes
For more information about SAS cabling and cabling configurations, see this IBM
Documentation web page.
For more information about the various offerings, see Data Storage Solutions.
With the low latency and high-performance NVMe storage technology and up to 8 YB global
file system and global data services of IBM Spectrum® Scale, the IBM Elastic Storage
System 3500 and 5000 nodes can grow to multi-yottabyte configurations. They also can be
integrated into a federated global storage system.
IBM FlashSystem is built with IBM Spectrum Virtualize software to help deploy sophisticated
hybrid cloud storage solutions, accelerate infrastructure modernization, address
cybersecurity needs, and maximize value by using the power of AI. New IBM FlashSystem
models deliver the performance to facilitate cyber security without compromising production
workloads.
3.10 System racks
Except for the Power S1014 tower model, the IBM Power S1014, S1022s, S1022, and S1024
servers fit into a standard 19-inch rack. These server models are all certified and tested in the
IBM Enterprise racks (7965-S42, 7014-T42, or 7014-T00). Customers can choose to place the
server in other racks if they are confident that those racks have the strength, rigidity, depth,
and hole pattern characteristics that are needed. Contact IBM Support to determine whether
other racks are suitable.
Order information: Only the IBM Enterprise 42U slim rack (7965-S42) is available and
supported for factory integration and installation of the server. The other Enterprise racks
(7014-T42 and 7014-T00) are supported only when servers are installed into existing racks of those types. Multiple
servers can be installed into a single IBM Enterprise rack in the factory or field.
If a system is installed in a rack or cabinet that is not from IBM, ensure that the rack meets the
requirements that are described in 3.10.5, “Original equipment manufacturer racks” on
page 155.
Responsibility: The customer is responsible for ensuring the installation of the server in
the preferred rack or cabinet results in a configuration that is stable, serviceable, and safe.
It also must be compatible with the drawer requirements for power, cooling, cable
management, weight, and rail security.
The 7965-S42 rack includes space for up to four PDUs in side pockets. Extra PDUs beyond
four are mounted horizontally and each uses 1U of rack space.
The Enterprise Slim Rack comes with options for the installed front door:
Basic Black/Flat (#ECRM)
High-End appearance (#ECRF)
OEM Black (#ECRE)
All options include perforated steel, which provides ventilation, physical security, and visibility
of indicator lights in the installed equipment within. All options also include a lock and
mechanism that is identical to the lock on the rear doors.
Only one front door must be included for each rack ordered. The basic door (#ECRM) and
OEM door (#ECRE) can be hinged on the left or right side.
Orientation: #ECRF must not be flipped because the IBM logo is upside down.
The basic rear door (#ECRG) can be hinged on the left or right side, and includes a lock and
mechanism that is identical to the lock on the front door. The basic rear door (#ECRG) or the
rear door heat exchanger (RDHX) indicator (#ECR2) must be included with the order of a new
Enterprise Slim Rack.
Because of the depth of the S1022s and S1022 server models, the 5-inch rear rack extension
(#ECRK) is required for the Enterprise Slim Rack to accommodate these systems. This
extension expands the space available for cable management and allows the rear door to
close safely.
Rack-integrated system orders require that at least two PDU devices are installed in the rack to
support independent connection of the redundant power supplies in the server.
To connect to the standard PDU (#7188), system units and expansion units must use a power
cord with a C14 plug. One of the following power cords must be used to distribute power from
a wall outlet to the #7188 PDU: #6489, #6491, #6492, #6653, #6654, #6655, #6656, #6657,
#6658, or #6667.
The following high-function PDUs are orderable as #ECJJ, #ECJL, #ECJN, and #ECJQ:
High Function 9xC19 PDU plus (#ECJJ)
This intelligent, switched 200 - 240 volt AC PDU includes nine C19 receptacles on the
front of the PDU and three C13 receptacles on the rear of the PDU. The PDU is mounted
on the rear of the rack, which makes the nine C19 receptacles easily accessible.
High Function 9xC19 PDU plus 3-Phase (#ECJL)
This intelligent, switched 208-volt 3-phase AC PDU includes nine C19 receptacles on the
front of the PDU and three C13 receptacles on the rear of the PDU. The PDU is mounted
on the rear of the rack, which makes the nine C19 receptacles easily accessible.
High Function 12xC13 PDU plus (#ECJN)
This intelligent, switched 200 - 240 volt AC PDU includes 12 C13 receptacles on the front
of the PDU. The PDU is mounted on the rear of the rack, which makes the 12 C13
receptacles easily accessible.
High Function 12xC13 PDU plus 3-Phase (#ECJQ)
This intelligent, switched 208-volt 3-phase AC PDU includes 12 C13 receptacles on the
front of the PDU. The PDU is mounted on the rear of the rack, which makes the 12 C13
receptacles easily accessible.
Table 3-34 lists the Feature Codes for the high-function PDUs.
Table 3-34 High-function PDUs available with IBM Enterprise Slim Rack (7965-S42)
PDUs | 1-phase or 3-phase depending on country wiring standards | 3-phase 208 V depending on country wiring standards
The PDU receives power through a UTG0247 power-line connector. Each PDU requires one
PDU-to-wall power cord. Various power cord features are available for various countries and
applications by varying the PDU-to-wall power cord, which must be ordered separately.
Each power cord provides the unique design characteristics for the specific power
requirements. To match new power requirements and save previous investments, these
power cords can be requested with an initial order of the rack, or with a later upgrade of the
rack features.
Table 3-35 lists the available PDU-to-wall power cord options for the PDU features, which
must be ordered separately.
Table 3-35 PDU-to-wall power cord options for the PDU features
Feature code | Wall plug | Rated voltage (V AC) | Phase | Rated amperage | Geography
6492 | IEC 309, 2P+G, 60 A | 200 - 208, 240 | 1 | 48 amps | US, Canada, LA, and Japan
6654 | NEMA L6-30 | 200 - 208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
6655 | RS 3750DP (watertight) | 200 - 208, 240 | 1 | 24 amps |
Notes: Ensure that a suitable power cord feature is configured to support the power that is
being supplied. Based on the power cord that is used, the PDU can supply 4.8 - 19.2 kVA.
The power of all the drawers that are plugged into the PDU must not exceed the power
cord limitation.
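As a simple illustration of the preceding note, the following Python sketch (a hypothetical helper, not an IBM power-planning tool) checks that the combined load of the drawers that are plugged into one PDU stays within the capacity that the selected power cord can deliver.

# Hypothetical check of the rule in the note above: the total power of all
# drawers on a PDU must not exceed what the selected power cord can supply.

def pdu_load_is_within_limit(cord_capacity_kva, drawer_loads_watts,
                             power_factor=1.0):
    # cord_capacity_kva: 4.8 - 19.2 kVA depending on the power cord feature.
    # A power factor of 1.0 is assumed here purely for simplicity.
    available_watts = cord_capacity_kva * 1000 * power_factor
    return sum(drawer_loads_watts) <= available_watts

# Example: three 1600 W loads fit on a 4.8 kVA feed; a fourth does not.
print(pdu_load_is_within_limit(4.8, [1600, 1600, 1600]))        # True
print(pdu_load_is_within_limit(4.8, [1600, 1600, 1600, 1600]))  # False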
For maximum availability, a preferred approach is to connect power cords from the same
system to two separate PDUs in the rack, and to connect each PDU to independent power
sources.
For more information about power requirements of and the power cord for the 7965-S42 rack,
see this IBM Documentation web page.
PDU installation
The IBM Enterprise Slim Rack includes four side mount pockets to allow for the vertical
installation of PDUs. This configuration frees up more of the horizontal space in the rack for
the installation of systems and other equipment. Up to four PDU devices can be installed
vertically in each rack, so any other PDU devices must be installed horizontally. When PDUs
are mounted horizontally in a rack, they each use 1 EIA (1U) of rack space.
Note: When a new IBM Power server is factory installed in an IBM rack that also includes a
PCIe expansion drawer, all of the PDUs for that rack are installed horizontally by default.
This configuration allows for extra space in the sides of the rack to enhance cable
management.
3.10.4 Useful rack additions
This section highlights several rack addition solutions for IBM Power rack-based systems.
The IBM System Storage 7226 Multi-Media Enclosure supports LTO Ultrium and DAT160
Tape technology, DVD-RAM, and RDX removable storage requirements on the following IBM
systems:
IBM POWER6 processor-based systems
IBM POWER7 processor-based systems
IBM POWER8 processor-based systems
IBM POWER9 processor-based systems
IBM POWER10 processor-based systems
The IBM System Storage 7226 Multi-Media Enclosure offers the drive feature options that are
listed in Table 3-36.
Removable RDX drives are in a rugged cartridge that inserts into an RDX removable (USB)
disk docking station (#EU03). RDX drives are compatible with docking stations, which are
installed internally in Power8, Power9, and Power10 processor-based servers (where
applicable) or the IBM System Storage 7226 Multi-Media Enclosure (7226-1U3).
Figure 3-14 shows the IBM System Storage 7226 Multi-Media Enclosure with a single RDX
docking station and two DVD-RAM devices installed.
The IBM System Storage 7226 Multi-Media Enclosure offers a customer-replaceable unit
(CRU) maintenance service to help make the installation or replacement of new drives
efficient. Other 7226 components also are designed for CRU maintenance.
The IBM System Storage 7226 Multi-Media Enclosure is compatible with most Power8,
Power9, and Power10 processor-based systems that offer current level AIX, IBM i, and Linux
operating systems.
For a complete list of host software versions and release levels that support the IBM System
Storage 7226 Multi-Media Enclosure, see System Storage Interoperation Center (SSIC).
Flat panel display options
The IBM 7316 Model TF5 is a rack-mountable flat panel console kit that also can be
configured with the tray pulled forward and the monitor folded up, which provides full viewing
and keying capability for the HMC operator.
The Model TF5 is a follow-on product to the Model TF4 and offers the following features:
A slim, sleek, and lightweight monitor design that occupies only 1U (1.75 in.) in a 19-inch
standard rack
An 18.5-inch (409.8 mm x 230.4 mm) flat panel TFT monitor with truly accurate images and
virtually no distortion
The ability to mount the IBM Travel Keyboard in the 7316-TF5 rack keyboard tray
The IBM Documentation provides the general rack specifications, including the following
information:
The rack or cabinet must meet the EIA Standard EIA-310-D for 19-inch racks that was
published 24 August 1992. The EIA-310-D standard specifies internal dimensions; for
example, the width of the rack opening (width of the chassis), the width of the module
mounting flanges, and the mounting hole spacing.
The front rack opening must be a minimum of 450 mm (17.72 in.) wide, and the
rail-mounting holes must be 465 mm +/- 1.6 mm (18.3 in. +/- 0.06 in.) apart on center
(horizontal width between vertical columns of holes on the two front-mounting flanges and
on the two rear-mounting flanges).
Figure 3-15 is a top view showing the rack specification dimensions.
The following rack hole sizes are supported for racks where IBM hardware is mounted:
– 7.1 mm (0.28 in.) plus or minus 0.1 mm (round)
– 9.5 mm (0.37 in.) plus or minus 0.1 mm (square)
The rack or cabinet must support an average load of 20 kg (44 lb.) of product weight per EIA
unit. For example, a four EIA drawer has a maximum drawer weight of 80 kg (176 lb.).
Note: PowerVM Enterprise Edition license entitlement is included with each Power10
processor-based, scale-out server. PowerVM Enterprise Edition is available as a hardware
feature (#5228) and supports up to 20 partitions per core, VIOS, and multiple shared
processor pools (SPPs), and offers Live Partition Mobility (LPM).
Combined with features in the Power10 processor-based scale-out servers, the IBM Power
Hypervisor delivers functions that enable other system technologies, including the following
examples:
Logical partitioning (LPAR)
Virtualized processors
IEEE virtual local area network (VLAN)-compatible virtual switches
Virtual SCSI adapters
Virtual Fibre Channel adapters
Virtual consoles
The Power Hypervisor is a basic component of the system’s firmware and offers the following
functions:
Provides an abstraction between the physical hardware resources and the LPARs that use
them.
Enforces partition integrity by providing a security layer between LPARs.
Controls the dispatch of virtual processors to physical processors.
Saves and restores all processor state information during a logical processor context
switch.
Controls hardware I/O interrupt management facilities for LPARs.
Provides VLAN channels between LPARs that help reduce the need for physical Ethernet
adapters for inter-partition communication.
Monitors the enterprise baseboard management controller (eBMC) or the flexible service
processor (FSP) of the system and performs a reset or reload if it detects the loss of one
of the eBMC or FSP controllers, and notifies the operating system if the problem is not
corrected.
The Power Hypervisor is always active, regardless of the system configuration or whether it is
connected to the managed console. It requires memory to support the resource assignment
of the LPARs on the server.
The amount of memory that is required by the Power Hypervisor firmware varies according to
the following memory usage factors:
For hardware page tables (HPTs)
To support I/O devices
For virtualization
The amount of memory for the HPT is based on the maximum memory size of the partition
and the HPT ratio. The default HPT ratio is 1/128th (for AIX, Virtual I/O Server [VIOS], and
Linux partitions) of the maximum memory size of the partition. AIX, VIOS, and Linux use
larger page sizes (16 KB and 64 KB) instead of 4 KB pages.
The use of larger page sizes reduces the overall number of pages that must be tracked;
therefore, the overall size of the HPT can be reduced. For example, the HPT is 2 GB for an
AIX partition with a maximum memory size of 256 GB.
When defining a partition, the maximum memory size that is specified is based on the amount
of memory that can be added dynamically to the running partition (by using dynamic LPAR, or
DLPAR, operations) without changing the configuration and restarting the partition.
In addition to setting the maximum memory size, the HPT ratio can be configured. The
hpt_ratio parameter of the chsyscfg Hardware Management Console (HMC) command
defines the HPT ratio that is used for a partition profile. The following values are valid
(a sizing sketch follows this list):
1:32
1:64
1:128
1:256
1:512
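The sizing rule can be expressed as a simple division, as shown in the following Python sketch. The sketch only illustrates the rule that is described above (it is not an IBM sizing tool); it reproduces the earlier example in which a 256 GB maximum memory size with the default 1:128 ratio results in a 2 GB HPT.

# Illustrative HPT sizing rule: HPT size = maximum partition memory / ratio.

def hpt_size_gb(max_memory_gb, ratio_denominator=128):
    # ratio_denominator is the right-hand side of the HPT ratio
    # (128 for the default 1:128 that is used by AIX, VIOS, and Linux).
    return max_memory_gb / ratio_denominator

print(hpt_size_gb(256))       # 2.0 GB with the default 1:128 ratio
print(hpt_size_gb(256, 256))  # 1.0 GB if hpt_ratio is set to 1:256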
The memory that is set aside to support I/O devices holds translation control entries (TCEs),
which translate I/O addresses to partition memory addresses. The TCEs also provide the
address of the I/O buffer, an indication of read versus write requests, and other I/O-related
attributes. Many TCEs are used per I/O device, so
multiple requests can be active simultaneously to the same physical device. To provide better
affinity, the TCEs are spread across multiple processor chips or drawers to improve
performance while accessing the TCEs.
For physical I/O devices, the base amount of space for the TCEs is defined by the hypervisor
that is based on the number of I/O devices that are supported. A system that supports
high-speed adapters also can be configured to allocate more memory to improve I/O
performance. Linux is the only operating system that uses these extra TCEs so that the
memory can be freed for use by partitions if the system uses only AIX or IBM i operating
systems.
The Power Hypervisor must set aside save areas for the register contents for the maximum
number of virtual processors that are configured. The greater the number of physical
hardware devices, the greater the number of virtual devices, the greater the amount of
virtualization, and the more hypervisor memory is required.
For efficient memory consumption, wanted and maximum values for various attributes
(processors, memory, and virtual adapters) must be based on business needs, and not set to
values that are significantly higher than requirements.
The Power Hypervisor provides the following types of virtual I/O adapters:
Virtual SCSI
The Power Hypervisor provides a virtual SCSI mechanism for the virtualization of storage
devices. The storage virtualization is accomplished by using two paired adapters: a virtual
SCSI server adapter and a virtual SCSI client adapter.
Virtual Ethernet
The Power Hypervisor provides a virtual Ethernet switch function that allows partitions on the
same server to communicate quickly and securely without any need for a physical
interconnection. Connectivity outside of the server is possible if a Layer 2 bridge to a physical
Ethernet adapter, which is also known as a Shared Ethernet Adapter (SEA), is configured in
one VIOS partition.
Virtual Fibre Channel
A virtual Fibre Channel adapter is a virtual adapter that provides customer LPARs with a
Fibre Channel connection to a storage area network through the VIOS partition. The VIOS
partition provides the connection between the virtual Fibre Channel adapters on the VIOS
partition and the physical Fibre Channel adapters on the managed system.
Virtual (tty) console
Each partition must have access to a system console. Tasks, such as operating system
installation, network setup, and various problem analysis activities, require a dedicated
system console. The Power Hypervisor provides the virtual console by using a virtual tty
and a set of hypervisor calls to operate on them. Virtual tty does not require the purchase
of any other features or software, such as the PowerVM Edition features.
Logical partitions
LPARs and the use of virtualization increase the usage of system resources while adding a
level of configuration possibilities.
Logical partitioning is the ability to make a server run as though it were two or more
independent servers. When you logically partition a server, you divide the resources on the
server into subsets, called LPARs. You can install software on an LPAR, and the LPAR runs
as an independent logical server with the resources that you allocated to the LPAR.
LPARs are also referred to in some documentation as virtual machines (VMs), which make
them appear to be similar to what other hypervisors offer. However, LPARs provide a higher
level of security and isolation and other features.
Processors, memory, and I/O devices can be assigned to LPARs. AIX, IBM i, Linux, and VIOS
can run on LPARs. VIOS provides virtual I/O resources to other LPARs with general-purpose
operating systems.
LPARs share a few system attributes, such as the system serial number, system model, and
processor FCs. All other system attributes can vary from one LPAR to another.
Micro-Partitioning
When you use the Micro-Partitioning technology, you can allocate fractions of processors to
an LPAR. An LPAR that uses fractions of processors is also known as a shared processor
partition or micropartition. Micropartitions run over a set of processors that is called a shared
processor pool (SPP), and virtual processors are used to enable the operating system to
manage the fractions of processing power that are assigned to the LPAR.
The shared processor partitions are dispatched and time-sliced on the physical processors
under the control of the Power Hypervisor. The shared processor partitions are created and
managed by the HMC.
Processing mode
When you create an LPAR, you can assign entire processors for dedicated use, or you can
assign partial processing units from an SPP. This setting defines the processing mode of the
LPAR.
Dedicated mode
In dedicated mode, physical processors are assigned as a whole to partitions. The SMT
feature in the Power10 processor core allows the core to run instructions from one, two, four,
or eight independent software threads simultaneously.
The dedicated partition maintains absolute priority for dedicated CPU cycles. Enabling this
feature can help increase system usage without compromising the computing power for
critical workloads in a dedicated processor mode LPAR.
Shared mode
In shared mode, LPARs use virtual processors to access fractions of physical processors.
Shared partitions can define any number of virtual processors (the maximum number is 20
times the number of processing units that are assigned to the partition).
The Power Hypervisor dispatches virtual processors to physical processors according to the
partition’s processing units entitlement. One processing unit represents one physical
processor’s processing capacity. All partitions receive a total CPU time equal to their
processing unit’s entitlement.
The logical processors are defined on top of virtual processors. Therefore, even with a virtual
processor, the concept of a logical processor exists, and the number of logical processors
depends on whether SMT is turned on or off.
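The following Python sketch (hypothetical, for illustration only) combines the two rules that are stated above: a shared partition can define at most 20 virtual processors for each assigned processing unit, and the operating system sees each virtual processor as one, two, four, or eight logical processors depending on the SMT mode.

# Illustration of the shared processor mode rules described above.

def max_virtual_processors(entitled_processing_units):
    # A shared partition can define up to 20 virtual processors
    # per assigned processing unit.
    return int(entitled_processing_units * 20)

def logical_processors(virtual_processors, smt_mode):
    # Each virtual processor appears as smt_mode logical processors
    # (1 when SMT is off, or 2, 4, or 8 for SMT2, SMT4, and SMT8).
    assert smt_mode in (1, 2, 4, 8)
    return virtual_processors * smt_mode

print(max_virtual_processors(2.5))  # 50 virtual processors maximum
print(logical_processors(4, 8))     # 32 logical processors with SMT8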
Micropartitions are created and then identified as members of the default processor pool or a
user-defined SPP. The virtual processors that exist within the set of micropartitions are
monitored by the Power Hypervisor. Processor capacity is managed according to
user-defined attributes.
If the Power server is under heavy load, each micropartition within an SPP is assured of its
processor entitlement, plus any capacity that might be allocated from the reserved pool
capacity if the micropartition is uncapped.
If specific micropartitions in an SPP do not use their processing capacity entitlement, the
unused capacity is ceded, and other uncapped micropartitions within the same SPP can use
the extra capacity according to their uncapped weighting. In this way, the entitled pool
capacity of an SPP is distributed to the set of micropartitions within that SPP.
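The following Python sketch is a simplified illustration of this behavior under the assumption of a single distribution pass; it is not the actual Power Hypervisor dispatching algorithm. Ceded capacity is shared among the uncapped micropartitions of one SPP in proportion to their uncapped weights.

# Simplified model (not the hypervisor algorithm): ceded capacity in one SPP
# is shared among uncapped micropartitions in proportion to their weights.
# Each partition is modeled as (entitlement, demand, uncapped_weight);
# a capped partition can be modeled with an uncapped_weight of 0.

def distribute_ceded_capacity(partitions):
    ceded = sum(max(ent - demand, 0.0) for ent, demand, _ in partitions)
    hungry = [(i, weight) for i, (ent, demand, weight) in enumerate(partitions)
              if demand > ent and weight > 0]
    total_weight = sum(weight for _, weight in hungry)
    extra = {i: 0.0 for i in range(len(partitions))}
    for i, weight in hungry:
        # Proportional share; a real dispatcher also caps the share at the
        # partition's actual unmet demand.
        extra[i] = ceded * weight / total_weight
    return extra

# Partition 0 cedes 0.5 processing units; partitions 1 and 2 are uncapped
# with weights 128 and 64, so they receive the extra capacity 2:1.
print(distribute_ceded_capacity([(1.0, 0.5, 0), (1.0, 1.5, 128), (1.0, 1.5, 64)]))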
All Power servers that support the multiple SPP capability have a minimum of one (the
default) SPP and up to a maximum of 64 SPPs.
This capability helps customers reduce the TCO significantly when the costs of software or
database licenses depend on the number of assigned processor-cores.
The VIOS eliminates the requirement that every partition owns a dedicated network adapter,
disk adapter, and disk drive. The VIOS supports OpenSSH for secure remote logins. It also
provides a firewall for limiting access by ports, network services, and IP addresses.
It is a preferred practice to run dual VIO servers per physical server to allow for redundancy of
all I/O paths for client LPARs.
Because the SEA processes packets at Layer 2, the original MAC address and VLAN tags of
the packet are visible to other systems on the physical network. IEEE 802.1 VLAN tagging is
supported.
By using the SEA, several customer partitions can share one physical adapter. You also can
connect internal and external VLANs by using a physical adapter. The SEA service can be
hosted only in the VIOS (not in a general-purpose AIX or Linux partition) and acts as a
Layer 2 network bridge to securely transport network traffic between virtual Ethernet
networks (internal) and one or more (Etherchannel) physical network adapters (external).
These virtual Ethernet network adapters are defined by the Power Hypervisor on the VIOS.
Virtual SCSI
Virtual SCSI provides a virtualized implementation of the SCSI protocol. Virtual SCSI is
based on a client/server relationship. The VIOS LPAR owns the physical I/O resources
and acts as the server or, in SCSI terms, the target device. The client LPARs access the virtual
SCSI backing storage devices that are provided by the VIOS as clients.
The virtual I/O adapters (a virtual SCSI server adapter and a virtual SCSI client adapter) are
configured by using an HMC. The virtual SCSI server (target) adapter is responsible for
running any SCSI commands that it receives, and is owned by the VIOS partition. The virtual
SCSI client adapter allows a client partition to access physical SCSI and SAN-attached
devices and LUNs that are mapped to be used by the client partitions. The provisioning of
virtual disk resources is provided by the VIOS.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) is a technology that allows multiple LPARs to access one or
more external physical storage devices through the same physical Fibre Channel adapter.
This adapter is attached to a VIOS partition that acts only as a pass-through that manages
the data transfer through the Power Hypervisor.
Each partition features one or more virtual Fibre Channel adapters, each with their own pair
of unique worldwide port names. This configuration enables you to connect each partition to
independent physical storage on a SAN. Unlike virtual SCSI, only the client partitions see the
disk.
For more information and requirements for NPIV, see IBM PowerVM Virtualization Managing
and Monitoring, SG24-7590.
LPM provides systems management flexibility and improves system availability by avoiding
the following situations:
Planned outages for hardware upgrade or firmware maintenance.
Unplanned downtime. With preventive failure management, if a server indicates a potential
failure, you can move its LPARs to another server before the failure occurs.
For more information and requirements for LPM, see IBM PowerVM Live Partition Mobility,
SG24-7460.
HMC V10.1.1020.0 and VIOS 3.1.3.21 or later provide the following enhancements to the
LPM feature:
Automatically choose fastest network for LPM memory transfer
Allow LPM when a virtual optical device is assigned to a partition
A portion of available memory can be proactively partitioned such that a duplicate set can be
used upon noncorrectable memory errors. This feature can be implemented at the granularity
of DDIMMs or logical memory blocks.
The Remote Restart function relies on technology that is similar to LPM where a partition is
configured with storage on a SAN that is shared (accessible) by the server that hosts the
partition.
HMC V10R1 provides an enhancement to the Remote Restart Feature that enables remote
restart when a virtual optical device is assigned to a partition.
On Power servers, partitions can be configured to run in several modes, including the
following modes:
POWER8
This native mode for Power8 processors implements Version 2.07 of the IBM Power ISA.
For more information, see this IBM Documentation web page.
POWER9
This native mode for Power9 processors implements Version 3.0 of the IBM Power ISA.
For more information, see this IBM Documentation web page.
Power10
This native mode for Power10 processors implements Version 3.1 of the IBM Power ISA.
For more information, see this IBM Documentation web page.
Processor compatibility mode is important when LPM migration is planned between different
generations of server. An LPAR that might be migrated to a machine that is managed by a
processor from another generation must be activated in a specific compatibility mode.
The operating system that is running on the POWER7 processor-based server must be
supported on the Power10 processor-based scale-out server, or must be upgraded to a
supported level before starting the above steps.
4.1.8 Single Root I/O virtualization
Single Root I/O Virtualization (SR-IOV) is an extension to the Peripheral Component
Interconnect Express (PCIe) specification that allows multiple operating systems to
simultaneously share a PCIe adapter with little or no runtime involvement from a hypervisor or
other virtualization intermediary.
SR-IOV is a PCI standard architecture that enables PCIe adapters to become self-virtualizing.
It enables adapter consolidation through sharing, much like logical partitioning enables server
consolidation. With an adapter capable of SR-IOV, you can assign virtual slices of a single
physical adapter to multiple partitions through logical ports, which is done without a VIOS.
For more information about the virtualization features, see the following publications:
IBM PowerVM Best Practices, SG24-8062
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
IBM Power Systems SR-IOV: Technical Overview and Introduction, REDP-5065
IBM PowerVC can manage AIX, IBM i, and Linux-based VMs that are running under
PowerVM virtualization and are connected to an HMC or by using NovaLink. As of this writing,
the release supports the scale-out and the enterprise Power servers that are built on IBM
Power8, IBM Power9, and Power10.
Note: The Power S1014, S1022s, S1022, and S1024 servers are supported by PowerVC
2.0.3 or later. More fix packs might be required. For more information, see this IBM
Support Fix Central web page.
IBM PowerVC is an addition to the PowerVM set of enterprise virtualization technologies that
provide virtualization management. It is based on open standards and integrates server
management with storage and network management.
Because IBM PowerVC is based on the OpenStack initiative, Power can be managed by tools
that are compatible with OpenStack standards. When a system is controlled by IBM
PowerVC, it can be managed in one of three ways:
By a system administrator that uses the IBM PowerVC graphical user interface (GUI)
By a system administrator that uses scripts that contain the IBM PowerVC
Representational State Transfer (REST) APIs
By higher-level tools that call IBM PowerVC by using standard OpenStack APIs
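Because IBM PowerVC exposes OpenStack-compatible interfaces, scripts can call it in the same way as any other OpenStack-based cloud. The following Python sketch is a minimal illustration that assumes standard OpenStack Identity (Keystone) v3 token authentication and a Compute (Nova) endpoint; the host name, ports, project name, and credentials are placeholders, and the exact endpoint URLs depend on the PowerVC installation (see the PowerVC REST API documentation).

# Minimal sketch: authenticate with the OpenStack Identity v3 API and list
# virtual machines through the Compute API. Host, ports, project, and
# credentials are placeholders for a real PowerVC installation.
import requests

IDENTITY = "https://powervc.example.com:5000"   # assumed Identity endpoint
COMPUTE = "https://powervc.example.com:8774"    # assumed Compute endpoint

auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "admin",
                                  "domain": {"name": "Default"},
                                  "password": "secret"}},
        },
        "scope": {"project": {"name": "ibm-default",
                              "domain": {"name": "Default"}}},
    }
}

# Certificate verification is disabled here only to keep the example short.
response = requests.post(f"{IDENTITY}/v3/auth/tokens", json=auth_body,
                         verify=False)
token = response.headers["X-Subject-Token"]     # standard Keystone v3 behavior

servers = requests.get(f"{COMPUTE}/v2.1/servers",
                       headers={"X-Auth-Token": token}, verify=False)
for server in servers.json()["servers"]:
    print(server["id"], server["name"])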
The PowerVC for Private Cloud edition adds a self-service portal with which users can deploy
and manage their own LPARs and workloads, and offers further cloud management functions.
These functions include more project level metering, approval flows, and notification
capabilities.
For more information about PowerVC, see IBM PowerVC Version 2.0 Introduction and
Configuration, SG24-8477.
In the IBM vision, digital transformation takes a customer-centric and digital-centric approach
to all aspects of business: from business models to customer experiences, processes and
operations. It uses artificial intelligence, automation, hybrid cloud, and other digital
technologies to use data and drive intelligent workflows, enable faster and smarter decision
making, and a real-time response to market disruptions. Ultimately, it changes customer
expectations and creates business opportunities.
Red Hat OpenShift Container Platform is a container orchestration platform that is based on
Kubernetes that helps develop containerized applications with open source technology that is
ready for the enterprise. Red Hat OpenShift Container Platform facilitates management and
deployments in hybrid and multicloud environments by using full-stack automated operations.
Containers first appeared decades ago with releases, such as FreeBSD jails and AIX
Workload Partitions (WPARs). However, most modern developers remember 2013 as the
beginning of the modern container era with the introduction of Docker.
One way to better understand a container is to understand how it differs from a traditional VM.
In traditional virtualization (on-premises and in the cloud), a hypervisor is used to virtualize
the physical hardware. Therefore, each VM contains a guest operating system and a virtual
copy of the hardware that the operating system requires to run, with an application and its
associated libraries and dependencies.
Instead of virtualizing the underlying hardware, containers virtualize the operating system
(usually Linux) so that each individual container includes only the application and its libraries
and dependencies. The absence of the guest operating system is the reason why containers
are so light and therefore, fast and portable.
In addition to AIX WPARs, similar concepts on IBM i date back to the 1980s. The IBM i team devised an
approach to create a container for objects (that is, programs, databases, security objects, and
so on). This container can be converted into an image that can be transported from a
development environment to a test environment, another system, or the cloud. A significant
difference between this version of containers and the containers that we know today is the
name: on IBM i they are called libraries and a container image is called a backup file.
The IBM Power platform delivers a high container density per core, with multiple CPU threads
to enable higher throughput. By using PowerVM virtualization, cloud-native applications can
be colocated alongside AIX or IBM i applications. This ability
makes available API connections to business-critical data for higher bandwidth and lower
latency than other technologies.
Only IBM Power provides this flexible and efficient use of resources, the ability to manage
peaks, and support for traditional and modern workloads through capabilities such as
Capacity on Demand and shared processor pools. Hardware is not just a commodity; it must
be carefully evaluated.
The ability to automate by using Red Hat Ansible returns valuable time to the system
administrators.
The Red Hat Ansible Automation Platform for Power is fully enabled, so enterprises can
automate several tasks within AIX, IBM i, and Linux all the way up to provisioning VMs and
deploying applications. Ansible also can be combined with HMC, PowerVC, and Power Virtual
Server to provision infrastructure anywhere you need, including cloud solutions from other
IBM Business Partners or third-party providers that are based on Power processor-based
servers.
One of the first tasks after the initial installation or setup of a new LPAR is to ensure that the
correct patches are installed. Extra software (whether open source software, ISV software,
or the enterprise's own software) also must be installed. Ansible features a set of
capabilities to roll out new software, which makes it popular in Continuous
Integration/Continuous Delivery (CI/CD) pipeline environments. Orchestration and integration
of automation with security products represent other ways in which Ansible can be applied
within the data center.
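As an illustration of driving such automation programmatically, the following sketch uses the ansible_runner Python package to run a hypothetical playbook (patch_aix.yml) that applies fixes to a group of AIX LPARs. The directory layout, playbook name, and inventory contents are assumptions for this example only.

import ansible_runner

# Run a (hypothetical) playbook that applies the latest fixes to AIX LPARs.
# private_data_dir is expected to contain the usual inventory/ and project/
# subdirectories that ansible-runner works with.
result = ansible_runner.run(
    private_data_dir="/home/admin/ansible",
    playbook="patch_aix.yml",
)

print("status:", result.status, "return code:", result.rc)

# Print each host and task that completed successfully.
for event in result.events:
    if event.get("event") == "runner_on_ok":
        data = event["event_data"]
        print(data.get("host"), "-", data.get("task"))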
AIX and IBM i are widely adopted in many business sectors by different types of customers.
Ansible can help introduce the Power processor-based technology to customers who want to
take advantage of all the features of the hardware platform, but believe that AIX and IBM i
skills are a rare commodity that is difficult to find in the marketplace.
The Ansible experience is identical across Power and x86 processor-based technology, and
the same tools can be used in IBM Cloud and other cloud providers.
AIX and IBM i skilled customers can also benefit from the extreme automation solutions that
are provided by Ansible.
The Power processor-based architecture features unique advantages over commodity server
platforms, such as x86, because the engineering teams that work on the processor,
system boards, virtualization, and management appliances collaborate closely to ensure an
integrated stack that works seamlessly. This approach is in stark contrast to the multivendor
x86 processor-based technology approach, in which the processor, server, management, and
virtualization must be purchased from different (and sometimes competing) vendors.
The Power stack engineering teams partnered closely to deliver the enterprise server
platform, which results in an IT architecture with industry-leading performance, scalability, and
security (see Figure 4-3).
Every layer in the Power stack is optimized to make the Power10 processor-based technology
the platform of choice for mission-critical enterprise workloads. This stack includes the
Ansible Automation Platform, which is described next.
Many Ansible collections are available for IBM Power processor-based technology. These
collections, which (at the time of this writing) were downloaded more than 25,000 times by
customers, are now included in the Red Hat Ansible Automation Platform. As a result, these
modules are covered by Red Hat’s 24x7 enterprise support team, which collaborates with the
respective Power processor-based technology development teams.
From an IBM i perspective, a pertinent example is the ability to run SQL queries against the
integrated IBM Db2 database that is built into the IBM i platform, manage object authorities,
and so on. All of these modules and playbooks can be combined by an AIX administrator or
IBM i administrator to perform complex tasks rapidly and consistently.
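For example, the ibm.power_ibmi collection provides an ibmi_sql_query module that runs SQL against the integrated Db2 database. The following sketch runs that module ad hoc from Python by using ansible_runner; the inventory group name, directory layout, and the exact shape of the returned result are assumptions for illustration only.

import ansible_runner

# Run the ibmi_sql_query module ad hoc against an inventory group named
# "ibmi" (the group name and directory layout are assumptions).
result = ansible_runner.run(
    private_data_dir="/home/admin/ansible",
    host_pattern="ibmi",
    module="ibm.power_ibmi.ibmi_sql_query",
    module_args="sql='SELECT * FROM QSYS2.SYSTEM_STATUS_INFO'",
)

# Print any rows that the module returned for each host.
for event in result.events:
    if event.get("event") == "runner_on_ok":
        res = event["event_data"].get("res", {})
        print(res.get("row", res))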
The IBM operating system development teams, alongside community contributors, develop
modules that are published to the open source Ansible Galaxy community. Any developer can
post content that is a candidate for a collection to the open Ansible Galaxy community, and
that content can be certified and supported by IBM with a subscription to Red Hat Ansible
Automation Platform (see Figure 4-4).
The collection includes modules and sample playbooks that help to automate tasks and is
available at this web page.
Ansible modules for IBM i
Ansible Content for IBM Power - IBM i provides modules, action plug-ins, roles, and sample
playbooks to automate tasks on IBM i workloads, including the following examples:
Command execution
System and application configuration
Work management
Fix management
Application deployment
For more information about the collection, see this web page.
Many organizations also are adapting their business models, and have thousands of people
that are connecting from home computers that are outside the control of an IT department.
Users, data, and resources are scattered all over the world, which makes it difficult to connect
them quickly and securely. Also, without a traditional local infrastructure for security,
employees’ homes are more vulnerable to compromise, which puts the business at risk.
Many companies operate with a set of security solutions and tools that are not fully integrated
or automated. As a result, security teams spend more time on manual tasks. They lack the
context and information that is needed to effectively reduce the attack surface of their
organization. Rising numbers of data breaches and increasing global regulations make
securing networks difficult.
Applications, users, and devices need fast and secure access to data, so much so that an
entire industry of security tools and architectures was created to protect them.
Although enforcing a data encryption policy is an effective way to minimize the risk of a data
breach (which, in turn, minimizes costs), only a few enterprises worldwide have an encryption
strategy that is applied consistently across the entire organization. This gap exists in large
part because such policies add complexity and cost, and negatively affect performance,
which can mean missed SLAs to the business.
The rapidly evolving cyberthreat landscape also requires focus on cyber-resilience. Persistent
and end-to-end security is the best way to reduce exposure to threats.
Orders of magnitude lower operating system CVEs for AIX and IBM i.
Simplified patching with PowerSC.
Multi-Factor Authentication with PowerSC.
Also introduced were significant innovations along the following major dimensions:
Advanced Data Protection that offers simple to use and efficient capabilities to protect
sensitive data through mechanisms, such as encryption and multi-factor authentication.
Platform Security ensures that the server is hardened against tampering, continuously
protecting its integrity, and ensuring strong isolation among multi-tenant workloads.
Without strong platform security, all other system security measures are at risk.
Security Innovation for Modern Threats provides the ability to stay ahead of new types of
cybersecurity threats by using emerging technologies.
Integrated Security Management addresses the key challenge of ensuring correct
configuration of the many security features across the stack, monitoring them, and
reacting if unexpected changes are detected.
The Power10 processor-based servers are enhanced to simplify and integrate security
management across the stack, which reduces the likelihood of administrator errors.
In the Power10 processor-based scale-out servers, all data is protected by a greatly simplified
end-to-end encryption that extends across the hybrid cloud without detectable performance
impact and prepares for future cyberthreats.
Quantum-safe cryptography refers to the efforts to identify algorithms that are resistant to
attacks by classical and quantum computers in preparation for the time when large-scale
quantum computers are built.
Homomorphic encryption refers to encryption techniques that permit systems to perform
computations on encrypted data without decrypting the data first. The software libraries for
these solutions are optimized for the Power processor-chip ISA.
The coprocessor holds a security-enabled subsystem module and batteries for backup power.
The hardened encapsulated subsystem contains two sets of two 32-bit PowerPC 476FP
reduced-instruction-set-computer (RISC) processors running in lockstep with cross-checking
to detect soft errors in the hardware.
IBM offers an embedded subsystem control program and a cryptographic API that
implements the IBM Common Cryptographic Architecture (CCA) Support Program that can
be accessed from the internet at no charge to the user.
Feature codes #EJ35 and #EJ37 represent the same physical card with the same CCIN of
C0AF. Different feature codes indicate whether a blind swap cassette is used and its type:
#EJ35 indicates no blind swap cassette, and #EJ37 indicates a Gen 3 blind swap cassette.
The 4769 PCIe Cryptographic Coprocessor is designed to deliver the following functions:
X.509 certificate services support
ANSI X9 TR34-2019 key exchange services that use the public key infrastructure (PKI)
ECDSA secp256k1
CRYSTALS-Dilithium, a quantum-safe algorithm for digital signature generation and
verification
Rivest-Shamir-Adleman (RSA) algorithm for digital signature generation and verification
with keys up to 4096 bits
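The coprocessor's functions are accessed through the CCA API, which is outside the scope of this paper. As a purely software illustration of the last item in the list, the following sketch generates a 4096-bit RSA key pair and performs digital signature generation and verification by using the Python cryptography package. In production, these operations are delegated to the 4769 so that keys never leave the hardware security module.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a 4096-bit RSA key pair (in software, for illustration only).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

message = b"payment batch 2022-08-15"

# Sign the message, then verify the signature with the public key
# (verify() raises an exception if the signature does not match).
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")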
PowerSC is introducing more features to help customers manage security end-to-end across
the stack to stay ahead of various threats. Specifically, PowerSC 2.0 adds support for
Endpoint Detection and Response (EDR), host-based intrusion detection, block listing, and
Linux.
Security features are beneficial only if they can be easily and accurately managed. Power10
processor-based scale-out servers benefit from the integrated security management
capabilities that are offered by IBM PowerSC.
PowerSC is a key part of the Power solution stack. It provides features, such as compliance
automation, to help with various industry standards, real-time file integrity monitoring,
reporting to support security audits, patch management, trusted logging, and more.
By providing all of these capabilities within a clear and modern web-based user interface,
PowerSC simplifies the management of security and compliance significantly.
The PowerSC Multi-Factor Authentication (MFA) capability provides more assurance that only
authorized people access the environments by requiring at least one extra authentication
factor to prove that you are the person you say you are. MFA is included in PowerSC 2.0.
Because stolen or guessed passwords are still one of the most common ways for hackers to
access systems, having an MFA solution in place allows you to prevent a high percentage of
potential breaches.
This step is important on the journey toward implementing a zero trust security posture.
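PowerSC MFA supports several authentication factor types. The sketch below is not PowerSC code; it only illustrates the general idea of a time-based one-time password (TOTP) second factor by using the pyotp Python package, and the secret handling shown is deliberately simplified.

import pyotp

# Enroll a user once by generating a shared secret; in practice, the secret
# is stored securely by the MFA server and provisioned to the user's
# authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("provisioning URI:", totp.provisioning_uri(name="admin@example", issuer_name="ExampleCo"))

# At login time, the user supplies the current code from the authenticator;
# the server verifies it in addition to the regular password.
code = totp.now()
print("second factor accepted:", totp.verify(code))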
PowerSC 2.0 also includes Endpoint Detection and Response (EDR), which provides the
following features:
Intrusion Detection and Prevention (IDP)
Log inspection and analysis
Anomaly detection, correlation, and incident response
Response triggers
Event context and filtering
The terms Secure Boot and Trusted Boot have specific connotations. The terms are used as
distinct, yet complementary concepts, as described next.
Secure Boot
This feature protects system integrity by using digital signatures to perform a
hardware-protected verification of all firmware components. It also distinguishes between the
host system trust domain and the eBMC or FSP trust domain by controlling service processor
and service interface access to sensitive system memory regions.
Trusted Boot
This feature creates cryptographically strong and protected platform measurements that
prove that specific firmware components ran on the system. You can assess the
measurements by using trusted protocols to determine the state of the system and use that
information to make informed security decisions.
Hardware assist is necessary to avoid tampering with the stack. The Power platform added
four instructions (hashst, hashchk, hashstp, and hashchkp) in Power ISA 3.1B to help protect
against return-oriented programming (ROP) attacks.
Because AI is set to deploy everywhere, attention is turning from how fast data science teams
can build AI models to how fast inference can be run against new data with those trained AI
models. Enterprises are asking their engineers and scientists to review new solutions and
new business models in which the use of GPUs is no longer essential, especially because
that approach became more expensive.
To support this shift, the Power10 processor-based server delivers faster business insights by
running AI in place with four Matrix Math Accelerator (MMA) units to accelerate AI in each
Power10 technology-based processor-core. The robust execution capability of the processor
cores with MMA acceleration, enhanced SIMD, and enhanced data bandwidth, provides an
alternative to external accelerators, such as GPUs.
It also reduces the time and cost that is associated with the related device management for
execution of statistical machine learning and inferencing workloads. These features,
combined with the ability to consolidate multiple AI model execution environments on a
Power10 processor-based server alongside other types of environments, reduce costs and
lead to a greatly simplified solution stack for the deployment of AI workloads.
The use of data gravity on Power10 processor-cores enables AI to run during a database
operation or concurrently with an application, for example. This feature is key for
time-sensitive use cases. It delivers fresh input data to AI faster and enhances the quality and
speed of insight.
As no-code application development, pay-for-use model repositories, auto-machine learning,
and AI-enabled application vendors continue to evolve and grow, the corresponding software
products are brought over to the Power10 processor-based platform. Python and code from
major frameworks and tools, such as TensorFlow, PyTorch, and XGBoost, run on the
Power10 processor-based platform without any changes.
Open Neural Network Exchange (ONNX) models can be brought over from x86 or Arm
processor-based servers, other platforms, small-sized VMs, or Power Virtual Server
(PowerVS) instances for deployment on Power10 processor-based servers. This Power10
technology gives customers the ability to train models on independent hardware and deploy
them on enterprise-grade servers.
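A minimal sketch of this flow, assuming a model that was exported to ONNX elsewhere and copied to the Power10 server: the onnxruntime package loads the model and scores new data in place. The model file name and input shape are hypothetical.

import numpy as np
import onnxruntime as ort

# Load a model that was trained and exported to ONNX on another platform.
session = ort.InferenceSession("fraud_model.onnx")
input_name = session.get_inputs()[0].name

# Score one incoming transaction (32 illustrative features).
features = np.random.rand(1, 32).astype(np.float32)
outputs = session.run(None, {input_name: features})
print("prediction:", outputs[0])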
The IBM development teams optimized common math libraries so that AI tools benefit from
the acceleration that is provided by the MMA units of the Power10 chip. The benefits of MMA
acceleration can be realized for statistical machine learning and inferencing, which provides a
cost-effective alternative to external accelerators or GPUs.
Because Power10 cores are equipped with four MMAs for matrix and tensor math,
applications can run models against colocated data without the need for external
accelerators, GPUs, or separate AI platforms. Power10 technology uses the “train anywhere,
deploy here” principle to operationalize AI.
A model can be trained on a public or private cloud and then deployed on a Power server (see
Figure 4-6 on page 182) by using the following procedure:
1. The trained model is registered with its version in the model vault. This vault is a VM or
LPAR with tools, such as IBM Watson® OpenScale, BentoML, or TensorFlow Serving, to
manage the model lifecycle.
2. The model is pushed out to the destination (in this case, a VM or an LPAR that is running
a database with an application). The model might be used by the database or the
application.
3. Transactions that are received by the database and application trigger model execution
and generate predictions or classifications. These predictions also can be stored locally.
For example, these predictions can be the risk or fraud that is associated with the
transaction or product classifications that are to be used by downstream applications.
4. A copy of the model output (prediction or classification) is sent to the model operations
(ModelOps) engine for calculation of drift by comparison with Ground Truth.
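A simplified sketch of step 4, under the assumption that drift is measured as a drop in accuracy against ground truth labels that arrive later. The threshold, baseline, and function names are illustrative only and are not part of any specific ModelOps product.

# Compare a window of predictions against ground truth and flag drift when
# accuracy falls more than a tolerated amount below the deployment baseline.
BASELINE_ACCURACY = 0.95   # measured when the model was deployed (assumed)
TOLERATED_DROP = 0.05      # absolute drop that triggers retraining (assumed)

def accuracy(predictions, ground_truth):
    hits = sum(1 for p, g in zip(predictions, ground_truth) if p == g)
    return hits / len(predictions)

def check_drift(predictions, ground_truth):
    current = accuracy(predictions, ground_truth)
    drifted = (BASELINE_ACCURACY - current) > TOLERATED_DROP
    return current, drifted

current, drifted = check_drift([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1])
print(f"window accuracy={current:.2f}, retraining recommended: {drifted}")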
Related publications
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only:
IBM Power Private Cloud with Shared Utility Capacity: Featuring Power Enterprise Pools
2.0, SG24-8476
SAP HANA Data Management and Performance on IBM Power Systems, REDP-5570
IBM PowerAI: Deep Learning Unleashed on IBM Power Systems Servers, SG24-8409
IBM Power E1080 Technical Overview and Introduction, REDP-5649
IBM Power E1050 Technical Overview and Introduction, REDP-5684
IBM Power System AC922 Technical Overview and Introduction, REDP-5494
IBM Power System E950: Technical Overview and Introduction, REDP-5509
IBM Power System E980: Technical Overview and Introduction, REDP-5510
IBM Power System L922 Technical Overview and Introduction, REDP-5496
IBM Power System S822LC for High Performance Computing Introduction and Technical
Overview, REDP-5405
IBM Power Systems H922 and H924 Technical Overview and Introduction, REDP-5498
IBM Power Systems LC921 and LC922: Technical Overview and Introduction,
REDP-5495
IBM Power Systems S922, S914, and S924 Technical Overview and Introduction
Featuring PCIe Gen 4 Technology, REDP-5595
IBM PowerVM Best Practices, SG24-8062
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
IBM PowerVC Version 2.0 Introduction and Configuration, SG24-8477
You can search for, view, download, or order these documents and other Redbooks
publications, Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks