IBM Power E1050: Technical Overview and Introduction
Giuliano Anselmi
Marc Gregorutti
Stephen Lutz
Michael Malicdem
Guido Somers
Tsvetomir Spasov
Redpaper
IBM Redbooks
August 2022
REDP-5684-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to the IBM Power E1050 (9043-MRX) server and the Hardware Management Console
(HMC) Version 10 Release 1.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
5.3.1 Application and services modernization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.3.2 System automation with Ansible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.4 Protecting trust from core to cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.4.1 Power10 processor-based technology integrated security ecosystem . . . . . . . . 156
5.4.2 Crypto engines and transparent memory encryption . . . . . . . . . . . . . . . . . . . . . 157
5.4.3 Quantum-safe cryptography support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.4.4 IBM PCIe3 Crypto Coprocessor BSC-Gen3 4769 . . . . . . . . . . . . . . . . . . . . . . . 158
5.4.5 IBM PowerSC support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.4.6 Secure Boot and Trusted Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.4.7 Enhanced CPU: baseboard management controller isolation . . . . . . . . . . . . . . 161
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, DS8000®, Easy Tier®, HyperSwap®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Elastic Storage®, IBM FlashCore®, IBM FlashSystem®, IBM Security®, IBM Spectrum®, Instana®, Micro-Partitioning®, Power Architecture®, PowerHA®, PowerPC®, PowerVM®, QRadar®, Redbooks®, Redbooks (logo)®, Storwize®, Tivoli®, Turbonomic®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S.
and other countries.
Red Hat, OpenShift, and Ansible are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries
in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redpaper publication is a comprehensive guide that covers the IBM Power E1050
server (9043-MRX) that uses the latest IBM Power10 processor-based technology and
supports IBM AIX® and Linux operating systems (OSs). The goal of this paper is to provide a
hardware architecture analysis and highlight the changes, new technologies, and major
features that are being introduced in this system, such as:
The latest IBM Power10 processor design, including the dual-chip module (DCM)
packaging, which is available in various configurations from 12 - 24 cores per socket.
Support of up to 16 TB of memory.
Native Peripheral Component Interconnect Express (PCIe) 5th generation (Gen5)
connectivity from the processor socket to deliver higher performance and bandwidth for
connected adapters.
Open Memory Interface (OMI) connected Differential Dual Inline Memory Module
(DDIMM) memory cards delivering increased performance, resiliency, and security over
industry-standard memory technologies, including transparent memory encryption.
Enhanced internal storage performance with the use of native PCIe-connected
Non-volatile Memory Express (NVMe) devices in up to 10 internal storage slots to deliver
up to 64 TB of high-performance, low-latency storage in a single 4-socket system.
Consumption-based pricing in the Power Private Cloud with Shared Utility Capacity
commercial model to allow customers to consume resources more flexibly and efficiently,
including AIX, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server, and Red
Hat OpenShift Container Platform workloads.
This publication is for professionals who want to acquire a better understanding of IBM Power
products. The intended audience includes:
IBM Power customers
Sales and marketing professionals
Technical support professionals
IBM Business Partners
Independent software vendors (ISVs)
This paper expands the set of IBM Power documentation by providing a desktop reference
that offers a detailed technical description of the Power E1050 Midrange server model.
This paper does not replace the current marketing materials and configuration tools. It is
intended as an extra source of information that, together with existing sources, can be used to
enhance your knowledge of IBM server solutions.
Giuliano Anselmi is an IBM Power Digital Sales Technical Advisor in IBM Digital Sales
Dublin. He joined IBM and focused on Power processor-based technology. For almost
20 years, he covered several technical roles. He is an important resource for the mission of his group, and he serves as a reference for IBM Business Partners and customers.
Marc Gregorutti is a Europe, Middle East, and Africa (EMEA) IBM Power Product Field
Engineer at IBM France. He started as an IBM service representative in 1998, and then
became a remote technical support member for IBM Power, first for France and then for EMEA.
He joined the EMEA IBM Power Product engineering team in 2009 and became one of its
leaders. He now focuses on scale-out enterprise midrange systems support to continuously
improve the product.
Stephen Lutz is a Certified Leading Technical Sales Professional for IBM Power working at
IBM Germany. He holds a degree in Commercial Information Technology from the University
of Applied Science Karlsruhe, Germany. He has 23 years of experience in AIX, Linux,
virtualization, and IBM Power and its predecessors. He provides pre-sales technical support
to clients, IBM Business Partners, and IBM sales representatives in Germany.
Michael Malicdem is a brand technical specialist for IBM Power at IBM Philippines. He has
over 17 years of experience in pre-sales solution design, client and partner consultation,
presentations, technical enablements relative to IBM Power servers, and IBM storage, which
includes 3 years in post-sales services and support. He is a licensed Electronics Engineer
and a cum laude graduate from Pamantasan ng Lungsod ng Maynila (PLM). His areas of
expertise include server configurations, AIX, RHEL, IBM PowerVM®, PowerSC, PowerVC,
and IBM PowerHA®.
Tsvetomir Spasov is an IBM Power subject matter expert at IBM Bulgaria. His main areas of
expertise are Flexible Service Processor (FSP), enterprise Baseboard Management
Controller (eBMC), Hardware Management Console (HMC), IBM PowerPC®, and Global
Total Microcode Support (GTMS). He has been with IBM since 2016, and provides reactive
break-fix, proactive, preventive, and cognitive support. He has conducted different technical
trainings and workshops.
Jesse P Arroyo, Irving Baysah, Nigel Griffiths, Sabine Jordan, Charles Marino,
Hariganesh Muralidharan, Hoa Nguyen, Ian Robinson, William Starke, Edward M.H. Tam,
Madeline Vega
IBM
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
A Power E1050 server with four 24-core DCMs offers the maximum of 96 cores. All processor
cores can run up to eight simultaneous threads to deliver greater throughput. All sockets must
be populated with the same processor modules.
Figure 1-2 shows a top view of the Power E1050 server with the top lid removed. Under the
left metal plate are the fans and Non-volatile Memory Express (NVMe) slots, as shown in
Figure 1-3 on page 4. Going to the right, you can see the memory slots that belong to the
processors to the right of that memory column. Going further to the right, there is another
column of memory slots that belong to the processors to the right of them. Under the metal
plate at the right edge are the four Titanium class 2300W power supplies and the 11
Peripheral Component Interconnect Express (PCIe) slots, as shown in Figure 1-4 on page 5.
The air flow goes from the front to the rear of the server, which in Figure 1-2 is from left to
right.
Each processor module that is added to the system offers 16 Open Memory Interface (OMI)
slots that can be populated with 4U Differential Dual Inline Memory Modules (DDIMMs). At the time
of writing, these DDIMMs incorporate Double Data Rate 4 (DDR4) memory chips that deliver
an increased memory bandwidth of up to 409 GBps peak transfer rates per socket. With four
processor modules, the Power E1050 server provides 64 OMI slots that support up to 16 TB
of memory and a maximum peak transfer rate of 1,636 GBps.
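As an illustrative cross-check of these figures (a sketch only, using the per-socket values stated above and assuming 256 GB DDIMMs in every slot as one way to reach the stated maximum), a few lines of Python reproduce the aggregate numbers:

# Illustrative arithmetic only; the per-socket values come from the text above.
sockets = 4
omi_slots_per_socket = 16
ddimm_size_gb = 256              # assumes 256 GB DDIMMs in every slot
peak_bw_per_socket_gbps = 409    # peak OMI transfer rate per socket

total_slots = sockets * omi_slots_per_socket
max_memory_tb = total_slots * ddimm_size_gb // 1024
aggregate_peak_bw_gbps = sockets * peak_bw_per_socket_gbps

print(total_slots)               # 64 OMI slots
print(max_memory_tb)             # 16 TB
print(aggregate_peak_bw_gbps)    # 1636 GBps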
The Power E1050 server provides state-of-the-art PCIe Gen5 connectivity. Up to 11 PCIe
slots are provided in the system unit with different characteristics:
Six PCIe Gen4 x16 or PCIe Gen5 x8 slots
Two PCIe Gen5 x8 slots
Three PCIe Gen4 x8 slots
The number of available slots depends on the number of available processor modules. For
more information about the system diagram, see Figure 2-1 on page 35.
Note: Although some slots are x8 capable only, all slots in the system have physical x16
connectors.
If more slots are needed, up to four PCIe Gen3 I/O Drawers with two fanout modules each
can be added to the system. Each fanout module provides six slots. With eight fanout
modules in four I/O drawers, the maximum number of available slots is 51.
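The 51-slot maximum can be reconstructed with the following Python sketch, assuming that each fanout module occupies one internal slot for its cable adapter (an assumption about how the EMX0 drawers are attached; verify the exact cabling features with the configurator):

# Rough slot arithmetic; assumes one internal slot per fanout module is
# consumed by its cable adapter when EMX0 drawers are attached.
internal_slots = 11
io_drawers = 4
fanout_modules_per_drawer = 2
slots_per_fanout_module = 6

fanout_modules = io_drawers * fanout_modules_per_drawer       # 8
drawer_slots = fanout_modules * slots_per_fanout_module       # 48
free_internal_slots = internal_slots - fanout_modules         # 3
print(free_internal_slots + drawer_slots)                     # 51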
The PCIe slots can be populated with a range of adapters covering local area network (LAN),
Fibre Channel (FC), serial-attached SCSI (SAS), Universal Serial Bus (USB), and
cryptographic accelerators. At least one network adapter must be included in each system.
The Power E1050 server offers up to 10 internal NVMe U.2 flash bays that can be equipped
with 800 GB U.2 Mainstream NVMe drives or U.2 Enterprise class NVMe drives in different
sizes up to 6.4 TB. Each NVMe device is connected as a separate PCIe endpoint and can be
assigned individually to VMs for best flexibility. The 10 NVMe bays offer a maximum of 64 TB
internal storage. For all 10 NVMe bays to be available, the server must be populated with all four processor modules.
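The 64 TB figure is simply the product of the bay count and the largest supported NVMe device, as this one-line Python check shows:

# 10 internal NVMe bays, each populated with a 6.4 TB enterprise device.
print(10 * 6.4)   # 64.0 TB of internal NVMe storage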
The Power E1050 server does not have internal spinning SAS drives. However, it is possible
to attach 19-inch disk expansion drawers that offer SFF Gen2-carrier bays for SAS disks. For
more information, see 2.3, “Internal I/O subsystem” on page 65.
In addition to extensive hardware configuration flexibility, the Power E1050 server offers
Elastic Capacity on Demand (Elastic CoD) for temporary activation of both processor cores and
memory, IBM Active Memory Expansion, and Active Memory Mirroring (AMM) for the hypervisor.
For the best flexibility, the Power E1050 server can be part of an IBM Power Private Cloud
with Shared Utility Capacity pool, also known as IBM Power Enterprise Pool 2.0. It consists of
Power E1050 servers, Power E950 servers, or a mix of both. In such a pool, Base Capacity
can be purchased for processor cores, memory, and operating system (OS) licenses (AIX) or
subscriptions (Linux). This Base Capacity is independent of the configuration of the servers.
Even if only a small Base Capacity was purchased, all available resources of the servers in
the pool can be used. If more resources are used than are available in the Base Capacity of
the pool (the sum of all Base Capacities of all servers that are part of the pool), these
additional used resources, that is, metered resource consumption, are billed. The metering is
done on a per-minute basis. The billing can be pre-paid by purchasing credits upfront, or the
billing can be post-pay. In a post-pay pool, IBM generates an invoice monthly.
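The following Python sketch illustrates the by-the-minute metering idea in a simplified form. The names and sample numbers are hypothetical; the actual metering and billing are performed by IBM against the pool's purchased Capacity Credits or monthly invoice:

# Simplified illustration of Shared Utility Capacity metering.
# All names and values here are hypothetical examples.
pool_base_cores = 48                            # sum of the Base Capacity of all servers in the pool
used_cores_per_minute = [40, 52, 61, 47, 55]    # measured pool-wide usage, one sample per minute

metered_core_minutes = sum(max(used - pool_base_cores, 0) for used in used_cores_per_minute)
print(metered_core_minutes)   # 24 core minutes above the base are billable
# These excess minutes are debited against pre-paid Capacity Credits or invoiced monthly.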
The Power E1050 server includes IBM PowerVM Enterprise Edition to deliver virtualized
environments and support a frictionless hybrid cloud experience. Workloads can run the AIX
and Linux OSs, including the Red Hat OpenShift Container Platform. IBM i is not a supported
OS on the Power E1050 server.
The Power E1050 server also provides strong resiliency characteristics, which include
Power10 chip capabilities and memory protection. The new 4U DDIMMs that are used in the
Power E1050 server offer an enhanced buffer, N+1 voltage regulation, and spare dynamic
RAM (DRAM) technology. Technologies like Chipkill with advanced error correction code
(ECC) protection are also included, as is transparent Power10 memory encryption with no
performance impact. This technology is the same enterprise-class technology that is used in
the Power E1080 server.
Other resiliency features that are available in the Power E1050 server are hot-plug NVMe
bays, hot-plug PCIe slots, redundant and hot-plug power supplies, hot-plug redundant cooling
fans, hot-plug Time of Day battery, and even highly resilient architecture for power regulators.
Figure 1-3 shows the front view of a Power E1050 server with the front bezel removed.
Internal NVMe flash bays: Up to 10 U.2 NVMe bays for 15-mm NVMe drives or 7-mm NVMe drives in a 15-mm carrier.
Internal USB ports: USB 3.0; two front and two rear.
Table 1-2 Comparing the Power E950 and Power E1050 servers
Maximum memory: Power E950 server 16 TB; Power E1050 server 16 TB
PCIe slots: Power E950 server 11 (eight Gen4 16-lane + two Gen4 8-lane + one Gen3 slot); Power E1050 server 11 (six Gen5 x8/Gen4 x16 + two Gen5 x8 + three Gen4 x8 slots)
Internal storage bays: Power E950 server 12 (eight SAS + four NVMe drives); Power E1050 server 10 (10 NVMe drives)
Note: IBM does not recommend operation above 27°C; however, you can expect full
performance up to 35°C for these systems. Above 35°C, the system can operate, but
reductions in performance might occur to preserve the integrity of the system components.
Above 40°C, there might be reliability concerns for components within the system.
Environmental assessment: The IBM Systems Energy Estimator tool can provide more
accurate information about the power consumption and thermal output of systems that are
based on a specific configuration, including adapters and I/O expansion drawers.
Note: Derate maximum allowable dry-bulb temperature 1°C (1.8°F) per 175 m (574 ft)
above 900 m (2,953 ft) up to a maximum allowable elevation of 3050 m (10,000 ft).
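Expressed as a formula, the derating rule subtracts 1°C for every 175 m above 900 m. The following Python sketch applies it; the 40°C base value is an assumption for the maximum allowable dry-bulb temperature at or below 900 m, so check the published environmental specifications for the exact limit:

# Applies the derating rule from the note above.
def max_allowable_dry_bulb_c(elevation_m, base_temp_c=40.0):
    # base_temp_c is an assumed limit at or below 900 m; verify it against
    # the published environmental specifications for the server.
    if elevation_m > 3050:
        raise ValueError("above the 3050 m (10,000 ft) maximum allowable elevation")
    return base_temp_c - max(elevation_m - 900, 0) / 175.0

print(round(max_allowable_dry_bulb_c(2000), 1))   # roughly 33.7 at 2000 m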
Government regulations, such as those prescribed by the Occupational Safety and Health
Administration (OSHA) or European Community Directives, may govern noise level exposure
in the workplace, which might apply to you and your server installation. The Power E1050 is
available with an optional acoustical door feature that can help reduce the noise that is
emitted from this system.
The actual sound pressure levels in your installation depend upon various factors, including
the number of racks in the installation, the size, materials, and configuration of the room
where you designate the racks to be installed, the noise levels from other equipment, the
ambient room temperature, and employees' location in relation to the equipment.
Compliance with such government regulations also depends on many more factors, including
the duration of employees' exposure and whether employees wear hearing protection. IBM
recommends that you consult with qualified experts in this field to determine whether you are
in compliance with the applicable regulations.
Table 1-4 lists the physical dimensions of the system node and the PCIe Gen3 I/O Expansion
Drawer.
Table 1-4 Physical dimensions of the system node and the PCIe Gen3 I/O Expansion Drawer
Height: Power E1050 system node 175 mm (6.9 in.), four EIA units; PCIe I/O expansion drawer 177.8 mm (7.0 in.), four EIA units
Note: The EMX0 remote I/O drawer connection in the T42 and S42 racks stops the rear
door from closing, so you must have the 8-inch rack extensions.
Processor modules
The Power E1050 supports 24 - 96 processor cores:
– Twelve-core typical 3.35 – 4.0 GHz (max) #EPEU Power10 processor.
– Eighteen-core typical 3.20 – 4.0 GHz (max) #EPEV Power10 processor.
– Twenty-four-core typical 2.95 – 3.90 GHz (max) #EPGW Power10 processor.
A minimum of two and a maximum of four processor modules are required for each
system. The modules can be added to a system later through a Miscellaneous Equipment
Specification (MES) upgrade, but the system requires scheduled downtime to install. All
processor modules in one server must be of the same frequency (same processor module
feature number), that is, you cannot mix processor modules of different frequencies.
Permanent CoD processor core activations are required for the first processor module in
the configuration and are optional for the second, third, and fourth modules. Specifically:
– Two, three, or four 12-core typical 3.35 – 4.0 GHz (max) processor modules (#EPEU)
require 12 processor core activations (#EPUR) at a minimum.
– Two, three, or four 18-core typical 3.20 – 4.0 GHz (max) processor modules (#EPEV)
require 18 processor core activations (#EPUS) at a minimum.
– Two, three, or four 24-core typical 2.95 – 3.90 GHz (max) processor modules (#EPGW)
require 24 processor core activations (#EPYT) at a minimum.
Temporary CoD capabilities are optionally used for processor cores that are not
permanently activated. An HMC is required for temporary CoD.
System memory
256 GB - 16 TB high-performance memory up to 3200 MHz DDR4 OMI:
– 64 GB DDIMM Memory (#EM75).
– 128 GB DDIMM Memory (#EM76).
– 256 GB DDIMM Memory (#EM77).
– 512 GB DDIMM Memory (#EM7J).
– Optional Active Memory Expansion (#EMBM).
Permanent CoD memory activations are required for at least 50% of the physically installed
memory or 256 GB of activations, whichever is larger. Use 1 GB activation (#EMCP) and
100 GB activation (#EMCQ) features to order permanent memory activations.
Temporary CoD for memory is available for memory capacity that is not permanently
activated.
Delivery through Virtual Capacity machine type and model (MTM) (4586-COD) by using
the IBM Entitled Systems Support (IBM ESS) process.
An HMC is required for temporary CoD.
Notes:
Memory is ordered in a quantity of eight of the same memory feature.
The minimum memory that is supported per two Power10 processors installed is
256 GB.
The minimum memory that is supported per three Power10 processors installed is
384 GB.
The minimum memory that is supported per four Power10 processors installed is
512 GB.
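The following Python sketch, which is an illustration only and not an IBM configuration tool, checks a candidate memory configuration against the activation and minimum-memory rules that are described above:

# Simplified check of the memory rules that are described above.
# Illustration only; it is not an IBM configurator.
MIN_MEMORY_GB = {2: 256, 3: 384, 4: 512}    # minimum memory per number of populated sockets

def check_memory_config(sockets, ddimm_size_gb, ddimm_count, activated_gb):
    if ddimm_count % 8:
        return "DDIMMs are ordered in quantities of eight of the same feature"
    installed_gb = ddimm_size_gb * ddimm_count
    if installed_gb < MIN_MEMORY_GB[sockets]:
        return f"Install at least {MIN_MEMORY_GB[sockets]} GB for {sockets} sockets"
    required_gb = max(installed_gb // 2, 256)   # 50% of installed memory or 256 GB
    if activated_gb < required_gb:
        return f"Activate at least {required_gb} GB of memory"
    return "Configuration is consistent with the rules above"

print(check_memory_config(sockets=4, ddimm_size_gb=128, ddimm_count=32, activated_gb=2048))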
Storage options
The Power E1050 supports up to 10 NVMe 7-mm or 15-mm drives:
Six NVMe drives within a two- or three-socket configuration
Ten NVMe drives within a four-socket configuration
All NVMe drives are driven directly from the system backplane with no PCIe card or cables
required.
The 7-mm NVMe drives from the IBM Power E950 are also supported on the Power E1050
with a carrier conversion feature that is offered to hold these drives.
Table 1-6 lists the minimum features of a Power E1050 server configuration.
Table 1-6 Selecting the minimum configuration for the Power E1050 server
Feature: Heat sink + thermal interface material (TIM) pad
Feature Codes: #EPLU Front Heat Sink + TIM PAD (For MRX), quantity 1; #EPLV Rear Heat Sink + TIM PAD (For MRX), quantity 1
Note: Applies to the base configuration with two sockets populated.

Feature: Processor card
Feature Codes: #EPEU 12-core typical 3.35 - 4.0 GHz (max) processor; #EPEV 18-core typical 3.20 - 4.0 GHz (max) processor; #EPGW 24-core typical 2.95 - 3.90 GHz (max) processor
Minimum quantity: two of any one processor Feature Code, and they must be the same.

Feature: NVMe device
Feature Codes: #EC5X or #EC7T (Mainstream 800 GB SSD PCIe3 NVMe U.2 module for AIX or Linux; 800 GB Mainstream NVMe U.2 SSD 4k for AIX or Linux); #ES1E, #ES1G, or #ES3E (Enterprise 1.6 TB, 3.2 TB, or 6.4 TB SSD PCIe4 NVMe U.2 module for AIX or Linux)
Minimum quantity: one of any of these Feature Codes.
Note: Two are recommended for a mirrored copy. Not required if Feature Code #0837 (SAN Boot Specify) is selected.

Feature: Power supplies
Feature Code: #EB39 Power Supply - 2300W for Server (200 - 240 VAC)
Minimum quantity: 4
The 16-lane slots can provide up to twice the bandwidth of the 8-lane slots because they offer
twice as many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth of PCIe
Gen4 slots and up to four times the bandwidth of a PCIe Gen3 slot, assuming an equivalent
number of PCIe lanes. PCIe Gen1, PCIe Gen2, PCIe Gen3, PCIe Gen4, and PCIe Gen5
adapters can be plugged into a PCIe Gen5 slot, if that adapter is supported. The 16-lane slots
can be used to attach PCIe Gen3 I/O expansion drawers.
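These ratios follow from the approximate per-lane throughput of each PCIe generation. The following Python sketch uses commonly cited ballpark rates of roughly 1, 2, and 4 GBps per lane for Gen3, Gen4, and Gen5 (illustrative values, not exact specification numbers):

# Approximate effective throughput per lane per direction, in GBps.
# Ballpark figures for illustration, not exact PCIe specification values.
per_lane_gbps = {"Gen3": 1.0, "Gen4": 2.0, "Gen5": 4.0}

def slot_bandwidth_gbps(gen, lanes):
    return per_lane_gbps[gen] * lanes

print(slot_bandwidth_gbps("Gen4", 16))                                    # 32.0 for an x16 Gen4 slot
print(slot_bandwidth_gbps("Gen5", 8))                                     # 32.0 for an x8 Gen5 slot
print(slot_bandwidth_gbps("Gen5", 16) / slot_bandwidth_gbps("Gen3", 16))  # 4.0 times a Gen3 x16 slot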
Table 1-7 shows the number of slots that is supported by the number of processor modules.
x8 Gen4 slots: 3 in each configuration; x8 Gen5 slots: 2 in each configuration
Figure 1-6 shows the 11 PCIe adapter slots location with labels for the Power E1050 server
model.
Figure 1-6 PCIe adapter slot locations on the Power E1050 server
Slot C0 is not included in the list. It is meant for only the eBMC service processor card. The
total number of PCIe adapter slots that is available can be increased by adding one or more
PCIe Gen3 I/O expansion drawers (#EMX0). The maximum number depends on the number
of processor modules physically installed. The maximum is independent of the number of
processor core activations.
Table 1-8 lists the maximum number of I/O drawers and fanout modules by the number of populated sockets.
Two processor modules: up to two I/O drawers (up to four fanout modules)
Three processor modules: up to three I/O drawers (up to six fanout modules)
Four processor modules: up to four I/O drawers (up to eight fanout modules)
In addition, VIOS can be installed in a special logical partition (LPAR) where its primary
function is to host physical I/O adapters like network and storage connectivity, and provide
virtualized I/O devices for client LPARs that run the AIX and Linux OSs.
For more information about the software that is available on IBM Power, see IBM Power
Systems Software.
The minimum supported levels of IBM AIX and Linux at the time of announcement are
described in the following sections. For more information about hardware features,
see IBM Power Systems Prerequisites.
IBM Power Systems Prerequisites helps to plan a successful system upgrade by providing
the prerequisite information for features in use or that you plan to add to a system. It is
possible to choose an MTM (9043-MRX for Power E1050) and discover all the prerequisites,
the OS levels that are supported, and other information.
At the time of announcement, Power E1050 supports the following minimum level of AIX
when installed with virtual I/O:
AIX 7.3 with the 7300-00 Technology Level and Service Pack 1 or later
AIX 7.2 with the 7200-05 Technology Level and Service Pack 1 or later
AIX 7.2 with the 7200-04 Technology Level and Service Pack 2 or later
AIX 7.1 with the 7100-05 Technology Level and Service Pack 6 or later
Notes:
AIX 7.2 with the 7200-04 Technology Level and Service Pack 6 is planned to be
available on September 16, 2022.
AIX 7.1 was withdrawn from marketing in November 2021. AIX 7.2 is the minimum
available version for a new software order.
AIX 7.1 instances must run in an LPAR in IBM Power8 compatibility mode with VIOS-based
virtual storage and networking.
AIX 7.3 instances can use both physical and virtual I/O adapters, and can run in an LPAR in
native Power10 mode.
IBM periodically releases maintenance packages (service packs (SPs) or technology levels
(TLs)) for the AIX OS. For more information about these packages, and downloading and
obtaining the installation packages, see Fix Central.
The Service Update Management Assistant (SUMA), which can help you automate the task
of checking and downloading OS downloads, is part of the base OS. For more information
about the suma command, see IBM Documentation.
Customers are licensed to run the product through the expiration date of the 1- or 3-year
subscription term, and can then renew the subscription at the end of the term to continue
using the product. This model provides flexible and predictable pricing over a specific term,
with lower upfront acquisition costs.
Another benefit of this model is that the licenses are customer number entitled, which means
that they are not tied to a specific hardware serial number as with a standard license grant.
Therefore, the licenses can be moved between on-premises and cloud if needed, something
that is becoming more of a requirement with hybrid workloads.
The product IDs for the subscription licenses are listed in Table 1-9.
The subscription licenses are orderable through an IBM configurator. The standard AIX
license grant and monthly term licenses for standard edition are still available.
The following Linux distributions are supported on the Power E1050 server model.
At the time of announcement, the Power E1050 server supports the following minimum levels
of the RHEL OS:
Red Hat Enterprise Linux 8.4 for Power Little Endian (LE) or later
Red Hat Enterprise Linux 9.0 for Power LE or later
Red Hat Enterprise Linux for SAP with Red Hat Enterprise Linux 8.4 for Power LE or later
Note: RHEL 9.0 for Power LE or later is supported from its general availability.
RHEL is sold on a subscription basis, with initial subscriptions and support that are available
for 1 year, 3 years, or 5 years. Support is available either directly from Red Hat or from
IBM Technical Support Services. An RHEL 8 for Power LE unit subscription covers up to four
cores and up to four LPARs, and the subscription can be stacked to cover more cores and
LPARs.
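As a simplified illustration of the stacking rule (not an official licensing calculator), the number of unit subscriptions that is needed is driven by whichever of the core count or the LPAR count requires more units:

import math

# Simplified sizing illustration for RHEL 8 for Power LE unit subscriptions:
# each unit covers up to four cores and up to four LPARs, and units stack.
def rhel_units_needed(cores, lpars, cores_per_unit=4, lpars_per_unit=4):
    return max(math.ceil(cores / cores_per_unit), math.ceil(lpars / lpars_per_unit))

print(rhel_units_needed(cores=24, lpars=10))   # 6 units, driven by the core count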
When a client orders RHEL from IBM, a subscription activation code is published at the
IBM ESS website. After you retrieve this code from IBM ESS, use it to establish proof of
entitlement and download the software from Red Hat.
At the time of announcement, the Power E1050 server supports the following minimum levels
of SUSE Linux Enterprise Server OS:
SUSE Linux Enterprise Server 15 Service Pack 3 or later
SUSE Linux Enterprise Server for SAP with SUSE Linux Enterprise Server 15 Service
Pack 3 or later
SUSE Linux Enterprise Server is sold on a subscription basis, with initial subscriptions and
support that are available for 1 year, 3 years, or 5 years. Support is available either directly
from SUSE or from IBM Technical Support Services. A SUSE Linux Enterprise Server 15 unit
subscription covers one or two sockets or one or two LPARs, and the subscriptions can be
stacked to cover more sockets and LPARs.
When a client orders SUSE Linux Enterprise Server from IBM, a subscription activation code
is published at the IBM ESS website. After you retrieve this code from IBM ESS, use it to
establish proof of entitlement and download the software from SUSE.
One specific benefit of Power10 technology is a 10x - 20x advantage over Power9
processor-based technology for artificial intelligence (AI) inferencing workloads because of
increased memory bandwidth and new instructions. One example is the new special
purpose-built Matrix Math Accelerator (MMA) that was tailored for the demands of machine
learning and deep learning inference. The MMA also supports many AI data types.
Network virtualization is an area with significant evolution and improvements, which benefit
virtual and containerized environments. The following recent improvements were made for
Linux networking features on Power10 processor-based servers:
Single root I/O virtualization (SR-IOV) allows virtualization of network cards at the controller level without needing to
create virtual Shared Ethernet Adapters (SEAs) in the VIOS partition. It is enhanced with
a virtual Network Interface Controller (vNIC), which allows data to be transferred directly
from the partitions to or from the SR-IOV physical adapter without transiting through a
VIOS partition.
Hybrid Network Virtualization (HNV) allows Linux partitions to use the efficiency and
performance benefits of SR-IOV logical ports and participate in mobility operations, such
as active and inactive Live Partition Mobility (LPM) and Simplified Remote Restart (SRR).
HNV is enabled by selecting Migratable when an SR-IOV logical port is configured.
Security
Security is a top priority for IBM and our distribution partners. Linux security on IBM Power is
a vast topic that can be the subject of detailed separate material. However, improvements in
the areas of hardening, integrity protection, performance, platform security, and certifications
are introduced in this section.
Hardening and integrity protection deal with protecting the Linux kernel from unauthorized
tampering while allowing upgrading and servicing of the kernel. These topics become even
more important when running in a containerized environment with an immutable OS, such as
CoreOS in Red Hat OpenShift.
Performance is a security topic because specific hardening mitigation strategies (for example,
against side-channel attacks) can have a significant performance effect. In addition,
cryptography can use significant compute cycles.
The Power E1050 features transparent memory encryption at the level of the controller, which
prevents an attacker from retrieving data from physical memory or storage-class devices that
are attached to the processor bus.
The bootstrap and control plane nodes are all based on RHEL CoreOS, which is a minimal
immutable container host version of the RHEL distribution that inherits the associated
hardware support statements. The compute nodes can run on either RHEL or RHEL CoreOS.
Red Hat OpenShift Container Platform is available on a subscription basis, with initial
subscriptions and support that are available for 1 year, 3 years, or 5 years. Support is
available either directly from Red Hat or from IBM Technical Support Services. Red Hat
OpenShift Container Platform subscriptions cover two processor cores each, and they can be
stacked to cover more cores.
At the time of announcement, the Power E1050 server supports Red Hat OpenShift
Container Platform 4.10 or later.
When a client orders Red Hat OpenShift Container Platform for Power from IBM, a
subscription activation code is published at the IBM ESS website. After you retrieve this code
from IBM ESS, use it to establish proof of entitlement and download the software from
Red Hat.
For more information about running Red Hat OpenShift Container Platform on IBM Power,
see Red Hat OpenShift documentation.
The minimum required level of VIOS for the Power E1050 server model is VIOS 3.1.3.21 or
later.
IBM regularly updates the VIOS code. For more information, see IBM Fix Central.
For initial access and to get more information, see IBM ESS.
Note: A valid registered IBMid is required before a user can sign in to IBM ESS.
By default, newly delivered systems include an update access key (UAK) that often expires after 3 years.
Thereafter, the UAK can be extended every 6 months, but only if an IBM maintenance
contract exists. The contract can be verified at the IBM ESS website (see 1.7.5, “Entitled
Systems Support” on page 19).
Figure 1-8 shows another example of viewing the access key in ASMI.
Note: The recovery media for V10R1 is the same for 7063-CR2 and 7063-CR1.
The 7063-CR2 is compatible with flat panel console kits 7316-TF3, TF4, and TF5.
Any customer with a valid contract can download this offering from the IBM ESS website, or
this offering can be included with an initial Power E1050 order.
The following minimum requirements must be met to install the virtual HMC:
16 GB of memory
Four virtual processors
Two network interfaces (a maximum of four is allowed)
One disk drive (500 GB available disk drive)
For an initial Power E1050 order with the IBM configurator (e-config), you can find the HMC
virtual appliance by selecting Add software → Other System Offerings (as product
selections) and then select either of the following items:
5765-VHP for IBM HMC Virtual Appliance for Power V10
5765-VHX for IBM HMC Virtual Appliance x86 V10
For more information about an overview of the Virtual HMC, see this web page.
For more information about how to install the virtual HMC appliance and all requirements, see
IBM Documentation.
Note: This section describes the BMC of the hardware HMC 7063-CR2. The Power E1050
also uses an eBMC for the systems management, as described in 2.5, “The enterprise
Baseboard Management Controller” on page 73.
The 7063-CR2 provides two network interfaces (eth0 and eth1) for configuring network
connectivity for BMC on the appliance.
Each interface maps to a different physical port on the system. Different management tools
name the interfaces differently. The HMC task Console Management → Console
Settings → Change BMC/IPMI Network Settings modifies only the Dedicated interface.
This path is specific to PowerVM with the HMC. The eBMC is based on the OpenBMC code
base, which is platform-neutral, and the developers wanted to minimize the number of
PowerVM-specific functions in the eBMC.
Note: Each port also has two MAC addresses, that is, BMC and VMI each have one.
The eBMC IP address is the equivalent of the FSP IP address in previous generations of
IBM Power servers. The WebUI, Representational State Transfer (REST) interfaces, and
others, all use the eBMC IP address. This IP address is the only one that users interact with
directly.
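Because the eBMC is built on OpenBMC, it exposes standard Redfish REST endpoints at the eBMC IP address. The following Python sketch shows the general idea of querying the system power state. The IP address and credentials are placeholders, and the exact resource paths, authentication options, and certificate handling should be verified against the eBMC documentation for your firmware level:

import requests

# Illustrative Redfish query against the eBMC IP address. The address and
# credentials are placeholders; verify paths and authentication for your
# eBMC firmware level.
EBMC_IP = "192.0.2.10"              # placeholder eBMC IP address
USER, PASSWORD = "admin", "change-me"

response = requests.get(
    f"https://{EBMC_IP}/redfish/v1/Systems/system",
    auth=(USER, PASSWORD),
    verify=False,                   # eBMCs commonly ship with self-signed certificates
    timeout=30,
)
response.raise_for_status()
print(response.json().get("PowerState"))   # for example, "On" or "Off"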
The VMI IP address is used for virtualization management. This IP address is the one that the
HMC uses to communicate with the Power Hypervisor for partition management and consoles.
Users do not interact directly with this IP address. From an HMC user perspective, other than
having two IP addresses on the service network instead of one, there is no difference.
All traffic between the HMC and VMI is encrypted with TLS by using a system unique
certificate.
Figure 1-12 shows a dual HMC connection to the eBMC of a Power E1050 server.
Figure 1-12 Dual HMC connection to the eBMC and VMI of a Power E1050 server
Here is a summary for configuring a new server that has factory settings:
1. Connect the Ethernet cable from the eBMC port to the internal HMC network.
2. Plug in the power cables. The eBMC starts and obtains the IP address configuration from
the DHCP server on the HMCs.
3. Enter the access password. If the HMC auto-discovers the server, the default credentials are used.
4. The server shows as Power Off, but it is now in a manageable state.
5. Configure the VMI to change from static to DHCP, as shown in Figure 1-13.
6. Power on the server. The VMI obtains its IP address, and the HMC to VMI connection is
established automatically.
The IBM Power processor-based architecture always ranked highly in terms of end-to-end
security, which is why it remains a platform of choice for mission-critical enterprise workloads.
Outdated or unsupported HMCs represent a technology risk that can quickly and easily be
mitigated by upgrading to a current release.
Both Elastic and Shared Utility Capacity options are available on all Power E1050 servers
through the Virtual Capacity (4586-COD) MTM and the IBM ESS website.
Elastic Capacity on the Power E1050 server enables you to deploy pay-for-use consumption
of processor, memory, and supported OSs.
Shared Utility Capacity on Power E1050 servers provides enhanced multisystem resource
sharing and by-the-minute tracking and consumption of compute resources across a
collection of systems within a Power Enterprise Pools 2.0 (PEP2). Shared Utility Capacity
delivers a complete range of flexibility to tailor initial system configurations with the right mix
of purchased and pay-for-use consumption of processor, memory, and software across a
collection of Power E1050 and Power E950 servers.
Metered Capacity is the extra installed processor and memory resource above each system's
Base Capacity. It is activated and made available for immediate use when a pool is started,
and then it is monitored by the minute by an IBM Cloud® Management Console (IBM CMC).
Metered resource usage is charged only for minutes that exceed the pool's aggregate base
resources, and usage charges are debited in real time against your purchased Capacity
Credits (5819-CRD) on account.
IBM offers a Private Cloud Capacity Assessment and Implementation Service that is
performed by IBM Systems Lab Services professionals, which can be preselected at time of
purchase or requested for qualifying Power E1050 servers.
If you use IBM AIX as the primary OS, there is a specific offering for it: IBM Private Cloud
Edition with AIX 7 1.8.0 (5765-CBA). The offering includes:
IBM AIX 7.3 or IBM AIX 7.2
IBM PowerSC 2.1
IBM PowerSC MFA
IBM Cloud PowerVC for Private Cloud
IBM VM Recovery Manager DR
IBM Tivoli Monitoring
You can use IBM PowerSC MFA with many applications, such as Remote Shell (rsh), Telnet,
and Secure Shell (SSH).
IBM PowerSC MFA raises the level of assurance of your mission-critical systems with a
flexible and tightly integrated MFA solution for IBM AIX and Linux on Power virtual workloads
running on IBM Power servers.
With PowerVC for Private Cloud, you can perform several operations, depending on your role
within a project.
Users can perform the following tasks on resources to which they are authorized. Some
actions might require administrator approval. When a user tries to perform a task for which
approval is required, the task moves to the request queue before it is performed (or rejected).
Performing lifecycle operations on VMs, such as capture, start, stop, delete, resume, and
resize
Deploying an image from a deployment template
Viewing and withdrawing outstanding requests
Requesting VM expiration extension
Viewing their usage data
IBM Power Virtualization Center 2.0 comes with a new UI, and many new features and
enhancements.
Because IBM Power Virtualization Center is built on the OpenStack technology, you might
see some terminology in messages or other text that is not the same as what you see
elsewhere in PowerVC. There is also some terminology that might be different from what you
are used to seeing in other IBM Power products.
IBM Cloud PowerVC Manager includes all the functions of the PowerVC Standard Edition
plus the following features:
A self-service portal that allows the provisioning of new VMs without direct system
administrator intervention. Optionally, policy-based approvals can be required for requests
that are received from the self-service portal.
Deploy templates that simplify cloud deployments.
Cloud management policies that simplify management of cloud deployments.
Metering data that can be used for chargeback.
IBM Power Virtual Server on IBM Cloud integrates your IBM AIX capabilities into the IBM Cloud
experience, which means that you get fast, self-service provisioning, flexible management both
on-premises and off, and access to a stack of enterprise IBM Cloud services, all with
pay-as-you-use billing that lets you easily scale up and out.
You can quickly deploy an IBM Power Virtual Servers on IBM Cloud instance to meet your
specific business needs. With IBM Power Virtual Servers on IBM Cloud, you can create a
hybrid cloud environment that allows you to easily control workload demands.
For more information, see IBM Power Systems Virtual Servers-Getting started.
Red Hat OpenShift Container Platform for Power brings developers and IT operations
together on a common platform. It provides applications, platforms, and services for creating
and delivering cloud-native applications and management so IT can ensure that the
environment is secure and available.
Red Hat OpenShift Container Platform for Power provides enterprises the same functions as
the Red Hat OpenShift Container Platform offering on other platforms.
For more information, see Red Hat OpenShift Container Platform for Power.
Collectively, the capabilities that are listed in this section work together to create a consistent
management platform between client data centers, public cloud providers, and multiple
hardware platforms (fully inclusive of IBM Power) to provide all the necessary elements for a
comprehensive hybrid cloud platform.
Figure: Top view of the Power E1050 system planar, showing the internal NVMe bay location codes, the memory DDIMM slot location codes, the Power10 dual-chip modules, the PCIe slot location codes, and the USB 3.0 connection.
The remainder of this section provides more specific information about the Power10
processor technology as it is used in the Power E1050 server.
The IBM Power10 Processor session material that was presented at the 32nd HOT CHIPS
conference is available through the HC32 conference proceedings archive at this web page.
Each core has private access to 2 MB L2 cache and local access to 8 MB of L3 cache
capacity. The local L3 cache region of a specific core also is accessible from all other cores
on the processor chip. The cores of one Power10 processor share up to 120 MB of latency
optimized non-uniform cache access (NUCA) L3 cache.
1 https://hotchips.org/
Figure 2-2 The Power10 processor chip (die photo courtesy of Samsung Foundry)
Table 2-1 Summary of the Power10 processor chip and processor core technology
Processor compatibility modes: support for the Power ISA of Power8 and Power9
The Power10 processor can be packaged as single-chip module (SCM) or DCM. The
Power E1050 server implements the DCM version. The DCM contains two Power10
processors plus more logic that is needed to facilitate power supply and external connectivity
to the module.
Eight OP (SMP) busses from each chip are brought out to the DCM module pins. Each chip also has two x32 PCIe busses brought to the DCM module pins.
The details of all busses that are brought out to the DCM module pins are shown in Figure 2-3.
The Power E1050 server uses the Power10 enterprise-class processor variant in which each
core can run with up to eight independent hardware threads. If all threads are active, the
mode of operation is referred to as SMT8 mode. A Power10 core with SMT8 capability is
named a Power10 SMT8 core or SMT8 core for short. The Power10 core also supports
modes with four active threads (SMT4), two active threads (SMT2), and one single active
thread (single-threaded (ST)).
The SMT8 core includes two execution resource domains. Each domain provides the
functional units to service up to four hardware threads.
Figure 2-4 shows the functional units of an SMT8 core where all eight threads are active. The
two execution resource domains are highlighted with colored backgrounds in two different
shades of blue.
Each of the two execution resource domains supports 1 - 4 threads and includes four vector
scalar units (VSUs) of 128-bit width, two MMAs, and one quad-precision floating-point (QP)
and decimal floating-point (DF) unit.
One VSU and the directly associated logic are called an execution slice. Two neighboring
slices can also be used as a combined execution resource, which is then named super-slice.
When operating in SMT8 mode, the eight simultaneous multithreading (SMT) threads are
subdivided into pairs that collectively run on two adjacent slices, as indicated through colored
backgrounds in different shades of green.
The SMT8 core supports automatic workload balancing to change the operational SMT
thread level. Depending on the workload characteristics, the number of threads that is
running on one chiplet can be reduced from four to two and even further to only one active
thread. An individual thread can benefit in terms of performance if fewer threads run against
the core's execution resources.
The Power10 processor core includes the following key features and improvements that affect
performance:
Enhanced load and store bandwidth
Deeper and wider instruction windows
Enhanced data prefetch
Branch execution and prediction enhancements
Instruction fusion
Enhancements in the area of computation resources, working set size, and data access
latency are described next. The change in relation to the Power9 processor core
implementation is provided in parentheses.
If more than one hardware thread is active, the processor runs in SMT mode. In addition to
the ST mode, the Power10 processor supports the following different SMT modes:
SMT2: Two hardware threads active
SMT4: Four hardware threads active
SMT8: Eight hardware threads active
SMT enables a single physical processor core to simultaneously dispatch instructions from
more than one hardware thread context. Computational workloads can use the processor
core’s execution units with a higher degree of parallelism. This ability enhances the
throughput and scalability of multi-threaded applications and optimizes the compute density
for ST workloads.
For comparison, IBM Power4 technology supported 32 cores per system, ST mode only, and a maximum of 32 hardware threads per partition.
The Power E1050 server supports the ST, SMT2, SMT4, and SMT8 hardware threading
modes. With the maximum number of 96 cores, a maximum of 768 hardware threads per
partition can be reached.
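The 768-thread figure follows directly from the core count and the SMT mode, as this small Python calculation shows:

# Maximum hardware threads per partition for each supported SMT mode,
# using the 96-core maximum configuration that is described above.
max_cores = 96
for mode, threads_per_core in (("ST", 1), ("SMT2", 2), ("SMT4", 4), ("SMT8", 8)):
    print(mode, max_cores * threads_per_core)   # SMT8 yields the 768-thread maximum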
To efficiently accelerate MMA operations, the Power10 processor core implements a dense
math engine (DME) microarchitecture that effectively provides an accelerator for cognitive
computing, machine learning, and AI inferencing workloads.
The DME encapsulates compute efficient pipelines, a physical register file, and an associated
data flow that keeps the resulting accumulator data local to the compute units. Each MMA
pipeline performs outer-product matrix operations, reading from and writing back to a 512-bit
accumulator register.
Power10 implements the MMA accumulator architecture without adding an architected state.
Each architected 512-bit accumulator register is backed by four 128-bit Vector Scalar
eXtension (VSX) registers.
Code that uses the MMA instructions is included in the OpenBLAS and Eigen libraries. These
libraries can be built by using the most recent versions of the GNU Compiler Collection (GCC).
The latest version of OpenBLAS is available at this web page.
OpenBLAS is used by the Python-NumPy library, PyTorch, and other frameworks, which
makes it easy to use the performance benefit of the Power10 MMA accelerator for AI
workloads.
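Because NumPy delegates matrix multiplication to the underlying BLAS library, ordinary NumPy code can benefit from an MMA-enabled OpenBLAS build on Power10 without source changes. The following example is generic NumPy code; whether MMA is used depends on the OpenBLAS build that NumPy is linked against:

import numpy as np

# Generic NumPy matrix multiplication. On a Power10 system where NumPy is
# linked against an MMA-enabled OpenBLAS build, this call is dispatched to
# the accelerated GEMM kernels; the Python code itself does not change.
a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)
c = a @ b

np.show_config()     # shows which BLAS library NumPy is built against
print(c.shape)       # (2048, 2048)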
Program code that is written in C/C++ or Fortran can benefit from the potential performance
gains by using the MMA facility if the code is compiled by the following IBM compiler products:
IBM Open XL C/C++ for AIX 17.1 (program numbers 5765-J18, 5765-J16, and 5725-C72)
IBM Open XL Fortran for AIX 17.1 (program numbers 5765-J19, 5765-J17, and 5725-C74)
For more information about the implementation of the Power10 processor’s high throughput
math engine, see A matrix math facility for Power ISA processors.
For more information about fundamental MMA architecture principles with detailed instruction
set usage, register file management concepts, and various supporting facilities, see
Matrix-Multiply Assist Best Practices Guide, REDP-5612.
Depending on the specific settings of the processor compatibility register (PCR), the Power10 core runs in a compatibility mode
that pertains to Power9 (Power ISA 3.0) or Power8 (Power ISA 2.07) processors. The support
for processor compatibility modes also enables older operating system (OS) versions of AIX,
IBM i, Linux, or Virtual I/O Server (VIOS) environments to run on Power10 processor-based
systems.
The Power10 processor-based Power E1050 server supports the Power8, Power9 Base,
Power9, and Power10 compatibility modes.
Note: All processor modules that are used in a Power E1050 server must be identical (the
same Feature Code).
Table 2-3 shows the processor features that are available for the Power E1050 server.
#EHC8 Solution Edition for Healthcare typical 2.95 - 3.9 GHz 24-core Processor Module
(North America only)
The minimum number of cores that must be activated equals the core count of one socket. For example, in a server
with all four sockets populated with the 12-core option, the minimum number of cores to
activate is 12.
There are two kinds of activation features: general-purpose and Linux. Cores with a
general-purpose activation can run any supported OS, but cores with a Linux activation can
run only Linux OSs. The processor-specific activation features for the Power E1050 server
are shown in Table 2-4.
Capacity on Demand
Two types of Capacity on Demand (CoD) capability are available for processor and memory
on the Power E1050 server:
Capacity Upgrade on Demand (CUoD) processor activations
If not all cores were activated, it is possible to purchase more core activations through a
Miscellaneous Equipment Specification (MES) upgrade order, which results in another key
that can be integrated into the system by using the Hardware Management Console
(HMC) or the Advanced System Management Interface (ASMI) without requiring a restart
of the server or interrupting the business. After entering the code, the additional cores can
be used and assigned to LPARs.
Elastic CoD (Temporary)
With Elastic CoD, you can temporarily activate processors and memory in full-day
increments as needed. The processors and memory can be activated and turned off an
unlimited number of times whenever you need extra processing resources.
Hint: On the IBM ESS website, you can activate a demonstration mode. In the
demonstration mode, you can simulate how to order capacity and how to produce keys
without placing a real order.
For more information about PEP2, see IBM Power Systems Private Cloud with Shared Utility
Capacity: Featuring Power Enterprise Pools 2.0, SG24-8478.
Note: The CUoD technology usage model and the Shared Utility Capacity (PEP2) offering
model are mutually exclusive.
Each L3 region serves as a victim cache for its associated L2 cache, and it can provide
aggregate storage for the on-chip cache footprint.
Intelligent L3 cache management enables the Power10 processor to optimize the access to
L3 cache lines and minimize cache latencies. The L3 cache includes a replacement algorithm
with data type and reuse awareness. It also supports an array of prefetch requests from the
core, including instruction and data, and works cooperatively with the core, memory
controller, and SMP interconnection fabric to manage prefetch traffic, which optimizes system
throughput and data latency.
Each of the AES and SHA engines, the data compression unit, and the Gzip unit consists of a
co-processor type, and the NX unit features three co-processor types. The NX unit also
includes more support hardware to support co-processor invocation by user code, usage of
effective addresses, high-bandwidth storage accesses, and interrupt notification of job
completion.
In effect, this on-chip NX unit on Power10 systems implements a high-throughput engine that
can perform the equivalent work of multiple cores. The system performance can benefit by
offloading these expensive operations to on-chip accelerators, which can greatly reduce the
CPU usage and improve the performance of applications.
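As an illustration of how transparent this offload can be to applications, the following Python snippet uses the standard zlib interface. On configurations where the underlying zlib implementation is backed by the NX accelerator (for example, through an NX-enabled zlib library that is preloaded on Linux, which is an assumption about the specific environment), the same application code is accelerated without modification:

import zlib

# Standard zlib compression. The application code is identical whether the
# underlying zlib implementation runs on the cores or is backed by the
# on-chip NX gzip accelerator (an environment-specific assumption).
data = b"sample payload " * 100_000
compressed = zlib.compress(data, level=6)
print(len(data), "->", len(compressed), "bytes")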
The accelerators are shared among the logical partitions (LPARs) under the control of the
PowerVM hypervisor and accessed through a hypervisor call. The OS, along with the
PowerVM hypervisor, provides a send address space that is unique per process requesting
the co-processor access. This configuration allows the user process to directly post entries to
the first in - first out (FIFO) queues that are associated with the NX accelerators. Each NX
co-processor type has a unique receive address space corresponding to a unique FIFO for
each of the accelerators.
For more information about the usage of the xgzip tool that uses the Gzip accelerator engine,
see the following resources:
Using the Power9 NX (gzip) accelerator in AIX
Power9 GZIP Data Acceleration with IBM AIX
Performance improvement in OpenSSH with on-chip data compression accelerator in
Power9
The nxstat command
Note: The OpenCAPI interface and the memory clustering interconnect are Power10
technology options for future use.
OpenCAPI is an open interface architecture that allows any microprocessor to attach to the
following items:
Coherent user-level accelerators and I/O devices
Advanced memories accessible through read/write or user-level DMA semantics
The PowerAXON interface is implemented on dedicated areas that are at each corner of the
Power10 processor die.
The chip-to-chip DCM internal interconnects, and connections to the OpenCAPI ports, are
shown in Figure 2-6.
Figure 2-6 SMP xBus 1-hop interconnect and OpenCAPI port connections
Note: The left (front) DCM0 and DCM3 are rotated 180 degrees compared to the two right
(rear) DCM1 and DCM2 to optimize the PCIe slot and Non-volatile Memory Express (NVMe)
bay wiring.
For the internal connection of the two chips in one DCM, two ports are available, but only one
is used. The used port connects the two chips inside the DCM with a 2x9 bus.
Note: The implemented OpenCAPI interfaces can be used in the future, but they are
currently not used by the available technology products.
Based on the extensive experience that was gained over the past few years, the Power10
EnergyScale technology evolved to use the following effective and simplified set of
operational modes:
Power-saving mode
Static mode (nominal frequency)
Maximum performance mode (MPM)
The Power9 dynamic performance mode (DPM) has many features in common with the
Power9 MPM. Because of this redundancy, the DPM was removed for Power10
processor-based systems in favor of an enhanced MPM. For example, the maximum
frequency is now achievable in the Power10 enhanced MPM regardless of the number of
active cores, which was not always the case with Power9 processor-based servers.
In the Power E1050 server, MPM is enabled by default. This mode dynamically adjusts the
processor frequency to maximize performance and enable a much higher processor
frequency range. Each of the power saver modes delivers consistent system performance
without any variation if the nominal operating environment limits are met.
For Power10 processor-based systems that are under control of the PowerVM hypervisor, the
MPM is a system-wide configuration setting, but each processor module frequency is
optimized separately.
The following factors determine the maximum frequency that a processor module can run at:
Processor utilization: Lighter workloads run at higher frequencies.
Number of active cores: Fewer active cores run at higher frequencies.
Environmental conditions: At lower ambient temperatures, cores are enabled to run at
higher frequencies.
Figure 2-7 Power10 power management modes and related frequency ranges
Table 2-5 shows the power-saving mode and the static mode frequencies and the frequency
ranges of the MPM for all processor module types that are available for the Power E1050
server.
Note: For all Power10 processor-based scale-out systems, the MPM is enabled by default.
Table 2-5 Characteristic frequencies and frequency ranges for Power E1050 servers
Feature Code | Cores per single-chip module | Power-saving mode frequency (GHz) | Static mode frequency (GHz) | Maximum performance mode frequency range (GHz)
The controls for all power saver modes are available on the ASMI, and can be dynamically
modified. A system administrator can also use the HMC to set power saver mode or to enable
static mode or MPM.
Figure 2-8 ASMI menu for Power and Performance Mode Setup
Figure 2-9 HMC menu for Power and Performance Mode Setup
Power E1080 servers use exclusively SCM modules with up to 15 active SMT8-capable
cores. These SCM processor modules are structurally optimized for performance in
scale-up multi-socket systems.
DCM modules with up to 30 active SMT8 capable cores are used in 4-socket Power E1050
servers, and 2-socket Power S1022 and Power S1024 servers. eSCMs with up to eight active
SMT8-capable cores are used in 1-socket Power S1014 and 2-socket Power S1022s servers.
DCM and eSCM modules are designed to support scale-out 1- to 4-socket Power10
processor-based servers.
Table 2-6 Comparison of the Power10 processor technology to prior processor generations
Characteristics | Power10 (DCM) | Power10 (eSCM) | Power10 (SCM) | Power9 | Power8
Technology | 7 nm | 7 nm | 7 nm | 14 nm | 22 nm
Die size | 2 x 602 mm2 | 2 x 602 mm2 | 602 mm2 | 693 mm2 | 649 mm2
Maximum cores | 24 | 8 | 15 | 12 | 12
Maximum static or high-performance frequency range | 3.4 - 4.0 GHz | 3.0 - 3.9 GHz | 3.6 - 4.15 GHz | 3.9 - 4.0 GHz | 4.15 GHz
One Power10 processor chip supports the following functional elements to access main
memory:
Eight MCUs
Eight OMI ports that are controlled one-to-one through a dedicated MCU
Two OMI links per OMI port, for a total of 16 OMI links
Eight lanes per OMI link for a total of 128 lanes, all running at 32-Gbps speed
In summary, one DCM supports the following functional elements to access main memory:
Four active MCUs per chip, for a total of eight MCUs per module.
Each MCU maps one-to-one to an OMI port.
Four OMI ports per chip, for a total of eight OMI ports per module.
Two OMI links per OMI port for a total of eight OMI links per chip and 16 OMI links per
module.
Eight lanes per OMI link for a total of 128 lanes per module, all running at 32-Gbps speed.
Note: DDIMMs are also available in a 2U form factor. These 2U DDIMMs are not
supported in the Power E1050 server.
The memory bandwidth and the total memory capacity depend on the DDIMM density and
the associated DDIMM frequency that are configured for the Power E1050 server. Table 2-7
lists the maximum memory and memory bandwidth per populated socket and the maximum
values for a fully populated server.
Table 2-7 Maximum theoretical memory and memory bandwidth for the Power E1050 server
Feature Code | DIMM size^a | DRAM speed | Max memory per socket | Max memory per server | Max memory bandwidth per socket | Max memory bandwidth per server
#EM75 | 64 GB (2 x 32 GB) | 3200 MHz | 512 GB | 2 TB | 409 GBps | 1,636 GBps
#EM76 | 128 GB (2 x 64 GB) | 3200 MHz | 1 TB | 4 TB | 409 GBps | 1,636 GBps
#EM77 | 256 GB (2 x 128 GB) | 2933 MHz | 2 TB | 8 TB | 375 GBps | 1,500 GBps
#EM7J | 512 GB (2 x 256 GB) | 2933 MHz | 4 TB | 16 TB | 375 GBps | 1,500 GBps
a. The 128 GB and 256 GB DDIMMs are planned to be available from 9 December 2022.
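The per-socket values in Table 2-7 can be approximated from the DDIMM interface, assuming one DDIMM per OMI link (16 DDIMMs per socket) and an 8-byte DRAM data path per DDIMM running at the listed DRAM speed:
\[
16 \times 3200\,\text{MT/s} \times 8\,\text{bytes} \approx 409\,\text{GBps per socket}, \qquad 4 \times 409\,\text{GBps} \approx 1636\,\text{GBps per server}
\]
\[
16 \times 2933\,\text{MT/s} \times 8\,\text{bytes} \approx 375\,\text{GBps per socket}, \qquad 4 \times 375\,\text{GBps} \approx 1500\,\text{GBps per server}
\]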
The minimum amount of memory that must be activated is 50% of the installed memory or at
least 256 GB. Static memory activations can be configured by using Feature Code #EMCP
for the activation of 1 GB or #EMCQ for the activation of 100 GB.
Capacity on Demand
Two types of CoD capability are available for processor and memory on the Power E1050
server:
CUoD
If not all of the installed memory is activated, you can use CUoD to purchase extra
permanent memory (and processor) capacity and dynamically activate it when you need it.
This goal is achieved through an MES upgrade order, which results in an additional
activation key that can be entered on the system by using the HMC or the ASMI without
restarting the server or interrupting production. After the code is entered, the additional
memory can be used and assigned to LPARs.
For activating memory in a configuration, use the Feature Code #EMCP to activate 1 GB
or Feature Code #EMCQ to activate 100 GB of memory for any OS.
Hint: On the IBM ESS website, you can activate a demonstration mode. In demonstration
mode, you can simulate how to order capacity and how keys are produced without placing
a real order.
Note: For processors, the metering measures used capacity cycles independently of the
entitled capacity that is assigned to LPARs. For memory, this process is different: if
memory is assigned to an LPAR, it counts as used from an Enterprise Pools perspective,
even when the OS does not use it.
For more information about PEP2, see IBM Power Systems Private Cloud with Shared Utility
Capacity: Featuring Power Enterprise Pools 2.0, SG24-8478.
Note: The CUoD usage model and the Shared Utility Capacity (PEP2) offering model are
mutually exclusive.
The 64 DDIMM slot location codes are P0-C64 - P0-C79 (DCM3) and P0-C80 - P0-C95 (DCM0) on
the left (front) side, and P0-C22 - P0-C37 (DCM1) and P0-C38 - P0-C53 (DCM2) on the right
(rear) side, with eight DDIMM slots per Power10 chip.
Table 2-8 shows the order in which the DDIMM slots should be populated.
Note: The left (front) DCM0 and DCM3 are rotated 180 degrees compared to the two right
(rear) DCM1 and DCM2 to optimize the PCIe slot and NVMe bay wiring.
Note: The pervasive memory encryption of the Power10 processor does not affect the
encryption status of a system dump content. All data that is coming from the DDIMMs is
decrypted by the MCU before it is passed onto the dump devices under the control of the
dump program code. This statement applies to the traditional system dump under the OS
control and the firmware assist dump utility.
Note: The PowerVM LPM data encryption does not interfere with the pervasive memory
encryption. Data transfer during an LPM operation uses the following general flow:
1. On the source server, the Mover Server Partition (MSP) provides the hypervisor with a
buffer.
2. The hypervisor of the source system copies the partition memory into the buffer.
3. The MSP transmits the data over the network.
4. The data is received by the MSP on the target server and copied in to the related buffer.
5. The hypervisor of the target system copies the data from the buffer into the memory
space of the target partition.
To facilitate LPM data compression and encryption, the hypervisor on the source system
presents the LPM buffer to the on-chip NX unit as part of process in step 2. The reverse
decryption and decompress operation is applied on the target server as part of the process
in step 4.
The pervasive memory encryption logic of the MCU decrypts the memory data before it is
compressed and encrypted by the NX unit on the source server. The logic also encrypts
the data before it is written to memory but after it is decrypted and decompressed by the
NX unit of the target server.
The hypervisor code logical memory blocks are mirrored on distinct DDIMMs. The mirroring is
done at the logical memory block level, not at the DDIMM level, so there is no specific
DDIMM that hosts the hypervisor memory blocks, which enables more usable memory. To enable
the AMM feature, the server must have enough free memory to accommodate the mirrored
memory blocks.
You can check whether the AMM option is enabled and change its status by using the HMC.
The relevant information and controls are in the Memory Mirroring section of the
General Settings window of the selected Power E1050 server (Figure 2-12).
Figure 2-12 Memory Mirroring section in the General Settings window on the HMC enhanced GUI
After a failure occurs on one of the DDIMMs that contain hypervisor data, all the server
operations remain active and the enterprise Baseboard Management Controller (eBMC)
service processor isolates the failing DDIMM. The system stays in the partially mirrored
state until the failing DDIMM is replaced.
Memory that is used to hold the contents of platform dumps is not mirrored, and AMM does
not mirror partition data. AMM mirrors only the hypervisor code and its components to protect
this data against a DDIMM failure. With AMM, uncorrectable errors in data that are owned by
a partition or application are handled by the existing Special Uncorrectable Error (SUE)
handling methods in the hardware, firmware, and OS.
SUE handling prevents an uncorrectable error in memory or cache from immediately causing
the system to stop. Rather, the system tags the data and determines whether it will be used
again. If the error is irrelevant, it does not force a checkstop. If the data is used, termination
can be limited to the program, kernel, or hypervisor that owns the data, or to a freeze of the
I/O adapters that are controlled by an I/O hub controller if data must be transferred to an I/O
device.
Each PEC supports up to three PCI host bridges (PHBs) that directly connect to PCIe slots or
devices. Both PEC0 and PEC1 can be configured as follows:
One x16 Gen4 PHB or one x8 Gen5 PHB
One x8 Gen5 and one x8 Gen4 PHB
One x8 Gen5 PHB and two x4 Gen4 PHBs
The usage or configurations of the PECs are shown in the notation of the ports. There are two
notations, the E-Bus notation and the PHB notation, which describe the split of the PEC.
Table 2-9 gives an overview.
PEC configuration | E-Bus notation | PHB notation
One x8 and two x4 at DCM0 | E0A, E0B, and E0C | PHB0, PHB1, and PHB2
One x8 and two x4 at DCM1 | E1A, E1B, and E1C | PHB3, PHB4, and PHB5
Figure 2-13 Internal connections of the DCMs to the PCIe slots (P0-C0 - P0-C11), NVMe bays (P1-C0 - P1-C8), eBMC, and USB ports
On the left (front) side, you find 10 NVMe bays. Six NVMe bays are connected to DCM0 and
four NVMe bays are connected to DCM3. To make all 10 NVMe bays available, all four
processor sockets must be populated. Most NVMe bays are connected by using a x8 PHB, and
some by using a x4 PHB. Because the NVMe devices use only four lanes (x4), this difference
is not relevant from a performance point of view.
On the right (rear) side, you find 11 PCIe slots of different types. Six slots use up to 16
lanes (x16) and can operate either in PCIe Gen4 x16 mode or in Gen5 x8 mode. The
remaining five slots are connected with eight lanes: two of them are Gen5 x8, and three of
them are Gen4 x8. In a 2-socket processor configuration, seven slots are available for use
(P0-C1 and P0-C6 to P0-C11). To make all the slots available, at least three processor
sockets must be populated.
The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as
many PCIe lanes. PCIe Gen5 slots can support up to twice the bandwidth of a PCIe Gen4
slot, and PCIe Gen4 slots can support up to twice the bandwidth of a PCIe Gen3 slot,
assuming an equivalent number of PCIe lanes.
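As an approximate illustration of these ratios (assuming 128b/130b encoding and ignoring protocol overhead), the peak unidirectional bandwidths are roughly:
\[
\text{Gen4 x16}: 16\,\text{GT/s} \times 16 \times \tfrac{128}{130} \div 8 \approx 31.5\,\text{GBps}, \qquad
\text{Gen5 x8}: 32\,\text{GT/s} \times 8 \times \tfrac{128}{130} \div 8 \approx 31.5\,\text{GBps}
\]
\[
\text{Gen4 x8} \approx 15.8\,\text{GBps}, \qquad
\text{Gen3 x16}: 8\,\text{GT/s} \times 16 \times \tfrac{128}{130} \div 8 \approx 15.8\,\text{GBps}
\]
So a Gen5 x8 slot delivers roughly the same bandwidth as a Gen4 x16 slot.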
Note: Although some slots provide a x8 connection only, all slots have an x16 connector.
All PCIe slots support hot-plug adapter installation and maintenance and enhanced error
handling (EEH). PCIe EEH-enabled adapters respond to a special data packet that is
generated from the affected PCIe slot hardware by calling system firmware, which examines
the affected bus, allows the device driver to reset it, and continues without a system restart.
For Linux, EEH support extends to most devices, although some third-party PCI devices
might not provide native EEH support.
The server PCIe slots are allocated DMA space by using the following algorithm:
All slots are allocated a 2 GB default DMA window.
All I/O adapter slots (except the embedded Universal Serial Bus (USB)) are allocated
Dynamic DMA Window (DDW) capability based on installed platform memory. DDW
capability is calculated assuming 4 K I/O mappings:
– The slots are allocated 64 GB of DDW capability.
– Slots can be enabled with Huge Dynamic DMA Window (HDDW) capability by using
the I/O Adapter Enlarged Capacity setting in the ASMI.
– HDDW-enabled slots are allocated enough DDW capability to map all installed platform
memory by using 64 K I/O mappings.
– Slots that are HDDW-enabled are allocated the larger of the calculated DDW capability
or HDDW capability.
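As an illustrative calculation only (the firmware can implement this differently), an HDDW-enabled slot in a server with 16 TB of installed memory needs enough DDW capability for roughly the following number of 64 KB I/O mappings:
\[
\frac{16\,\text{TB}}{64\,\text{KB}} = \frac{2^{44}}{2^{16}} = 2^{28} \approx 268\ \text{million mappings}
\]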
The Power E1050 server is smarter about energy efficiency when cooling the PCIe adapter
environment. It senses which IBM PCIe adapters are installed in their PCIe slots, and if an
adapter requires higher levels of cooling, the server automatically speeds up the fans to
increase airflow across the PCIe adapters. Faster fans increase the sound level of the
server. Higher wattage PCIe adapters include the PCIe3 serial-attached SCSI (SAS) adapters
and solid-state drive (SSD)/flash PCIe adapters (#EJ10, #EJ14, and #EJ0J).
USB ports
The first DCM (DCM0) also hosts the USB controller that is connected by using four PHBs,
although the USB controller uses only one lane. DCM0 provides four USB 3.0 ports, with two
in the front and two in the back. The two front ports provide up to 1.5 A of USB current,
mainly to support the external USB DVD drive (Feature Code #EUA5). The two rear USB ports
provide up to 0.9 A.
Note: The USB controller is placed on the trusted platform module (TPM) card for space
reasons.
Some customers require that USB ports must be deactivated for security reasons. You can
achieve this task by using the ASMI menu. For more information, see 2.5.1, “Managing the
system by using the ASMI GUI” on page 73.
Table 2-10 PCIe slot locations and capabilities for the Power E1050 server
Location code | Description | Processor module | OpenCAPI | Cable card for I/O drawer | I/O adapter enlarged capacity enablement order
The following characteristics are not part of the table because they apply for all slots:
All slots are SR-IOV capable.
All slots have an x16 connector, although some provide only a x8 connection.
All PCIe adapters are installed in an I/O Blind-Swap Cassette (BSC) as the basis for
concurrent maintenance. For more information, see 2.3.2, “I/O Blind-Swap Cassettes” on
page 69.
All I/O cassettes can hold Half Length Full Height (HLFH) and Half Length Half Height
(HLHH) PCIe adapters.
Figure 2-14 Rear view of a Power E1050 server with PCIe slot location codes (P0-C0 - P0-C11)
Note: Slot P0-C0 is not a PCIe slot; instead, it holds a special I/O Cassette for the eBMC
Service Processor Card.
Caution: To hot plug a BSC, first go to the HMC or AIX diag to start a hot-plug action that
removes the power from the slot. Do not pull a cassette while the slot is still under power.
Slot C0 is a special I/O cassette for the eBMC Service Processor card. The eBMC card is not
concurrently maintainable. For more information, see 2.3.5, “System ports” on page 71.
Figure 2-15 Power E1050 Blind-Swap Cassette
The wiring strategy and backplane materials are chosen to ensure Gen4 signaling to all
NVMe drives. All NVMe connectors are PCIe Gen4 connectors. For more information about
the internal connection of the NVMe bays to the processor chips, see Figure 2-13 on page 66.
Each NVMe interface is a Gen4 x4 PCIe bus. The NVMe drives can be in an OS-controlled
RAID array. A hardware RAID is not supported on the NVMe drives. The NVMe thermal
design supports 18 W for 15-mm NVMe drives and 12 W for 7-mm NVMe drives.
For more information about the available NVMe drives and how to plug the drives for best
availability, see 3.5, “Internal storage” on page 92.
Feature Code #EJ2A is an IBM designed PCIe Gen4 x16 cable card. It is the only supported
cable card to attach fanout modules of an I/O Expansion Drawer in the Power E1050 server.
Previous cards from a Power E950 server cannot be used. Feature Code #EJ2A supports
copper and optical cables for the attachment of a fanout module.
Note: The IBM e-config configurator adds 3-meter copper cables (Feature Code #ECCS)
to the configuration if no cables are manually specified. If you want optical cables,
make sure to configure them explicitly.
Table 2-11 lists the PCIe slot order for the attachment of an I/O Expansion Drawer, the
maximum number of I/O Expansion Drawers and Fanout modules, and the maximum number
of available slots (dependent on the populated processor sockets).
Table 2-11 I/O Expansion Drawer capabilities depend on the number of populated processor slots
Processor sockets populated | Expansion adapter slots order | Maximum number of I/O Expansion Drawers | Maximum number of fanout modules | Total PCIe slots
For more information about the #EMX0 I/O Expansion Drawer, see 3.9.1, “PCIe Gen3 I/O
expansion drawer” on page 99.
The two eBMC Ethernet ports are connected by using four PCIe lanes each, although the
eBMC Ethernet controllers need only one lane. The connection is provided by the DCM0, one
from each Power10 chip. For more information, see Figure 2-13 on page 66.
The eBMC module with its two eBMC USB ports also is connected to the DCM0 at chip 0 by
using a x4 PHB, although the eBMC module uses only one lane.
For more information about how to do a firmware update by using the eBMC USB ports, see
Installing the server firmware on the service processor or eBMC through a USB port.
To enter the ASMI GUI, go to an HMC, select the server, and then select Operations →
Launch Advanced System Management. A window opens and shows the name of the
system, MTM and serial number, and the IP address of the service processor (eBMC). Click
OK, and the ASMI window opens.
If the eBMC is not in a private network but in a network that you can reach, you can connect
directly by entering https://<eBMC IP> in your web browser.
The default user when you log in the first time is admin, and the default password is also
admin, but it is invalidated. After the first login, you must immediately change the admin
password, which is also true after performing a factory reset of the system. This policy
ensures that the
eBMC is not left in a state with a well-known password. The password needs to be a strong
one, and not, for example, abcd1234. For more information about the password rules, see
Setting the password.
The new ASMI for eBMC-managed servers has some major differences and some valuable
new features:
System firmware updates
It is possible to install a firmware update for the server by using the ASMI GUI, even if the
system is managed by an HMC. In this case, the firmware update is always disruptive. To
install a concurrent firmware update, you must use the HMC.
Download of dumps
Dumps can be downloaded by using the HMC, but if necessary, you also can download
them from the ASMI menu.
It is also possible to initiate a dump from the ASMI by selecting Logs → Dumps, selecting
the dump type, and clicking Initiate dump. The possible dump types are:
– Baseboard management controller (BMC) dump (nondisruptive)
– Resource dump
– System dump (disruptive)
Network Time Protocol (NTP) server support
Lightweight directory access protocol (LDAP) for user management
Note: The host console can also be accessed by using an SSH client over port 2200
and logging in with the admin user.
User management
In the eBMC, it is possible to create your own users. This feature also can be used to
create an individual user for the HMC to access the server. There are two types of
privileges for a user: Administrator or ReadOnly. As the name indicates, with ReadOnly
privileges you cannot modify anything (except the password of that user), and a user with
that privilege cannot be used for HMC access to the server.
IBM Security® through Access Control Files (ACFs)
In FSP-managed servers, IBM Support generates a password by using the serial number
and the date to get "root access" to the service processor by using the celogin user. In
eBMC-managed systems, IBM Support generates an ACF instead. This file must be uploaded
to the server to get access. This procedure is needed, for example, if you lose the admin
password and want to reset it.
Jumper reset
It is possible to reset everything on the server by using a jumper. This reset is a factory
reset that resets everything, including LPAR definitions, eBMC settings, and the NVRAM.
It is also possible to display the details of a component, which is helpful to see details such as
the size of a DIMM or the part numbers if something must be exchanged.
Sensors
The ASMI has many sensors for the server, which you can access by selecting Hardware
status → Sensors. The loading of the sensors takes some time, and during that time you see
a progress bar on the top of the window.
Note: Even after the progress bar finishes, it might take some extra time until the
sensors appear.
Network settings
The default network setting for the two eBMC ports is DHCP. Therefore, when you connect a
port to a private HMC network where the HMC is connected to a DHCP server, the new
system should get its IP address from the HMC during the startup of the firmware. Then, the
new system automatically appears in the HMC and can be configured. DHCP is the IBM
recommended way to attach a server to the HMC.
If you do not use DHCP and want to use a static IP address, you can set the IP address in the
ASMI GUI. However, because there are no default IP addresses that are the same for every
server, you first must discover the configured IP address.
To discover the configured IP address, use the operator panel and complete the following
steps:
1. Use the Increment or Decrement buttons to scroll to function 02.
2. Press Enter until the value changes from N to M, which activates access to function 30.
3. Scroll to function 30 and press Enter. Function 30** appears.
4. Scroll to 3000 and press Enter, which shows you the IP address of the eth0 port.
5. If you scroll to 3001 and press Enter, you see the IP address of eth1.
6. After you discover the IP address, scroll again to function 02 and set the value back from
M to N.
For more information about function 30 in the operator panel, see IBM Documentation.
Now that you have discovered the IP address, you can configure any computer that has a web
browser with an IP address in the same subnet (class C) and connect that computer to the
correct Ethernet port of the Power E1050 server.
After connecting the cable, you can use a web browser to access the ASMI by using the URL
https://<IP address>, and then you can configure the network ports. To configure the
network ports, select Settings → Network and select the correct adapter to configure.
Figure 2-19 shows an example of changing eth1. Before you can configure a static IP
address, turn off DHCP. It is possible to configure several static IP addresses on one physical
Ethernet port.
You cannot configure the Virtualization Management Interface (VMI) address in the ASMI
network settings. The VMI address is another IP address that is configured on the physical
eBMC Ethernet port of the server to manage the virtualization of the server. The VMI address
can be configured only in the HMC.
Policies
In the Security and access → Policies menu, you can turn on and off security-related
functions, for example, whether you can manage your server by using Intelligent Platform
Management Interface (IPMI).
Some customers require that the USB ports of the server should be disabled. You can
accomplish this task by using policies. To do so, clear Host USB enablement, as shown in
Figure 2-20.
Figure 2-20 How to turn off the USB ports of the server
Before you can get data from the server or run systems management tasks by using Redfish,
you must authenticate against the server. After you authenticate by using a user ID and
password, you receive a token from the server that you use for later requests.
With the token, you can get data from the server. First, request the data of the Redfish root by
using /redfish/v1. You get data with additional branches in the Redfish tree, for example,
Chassis (uppercase C). To dig deeper, use the newly discovered odata.id
/redfish/v1/Chassis, as shown in Example 2-2.
Under Chassis (uppercase C), there is another chassis with a lowercase c. Use the tree with
both (/redfish/v1/Chassis/chassis). After querying the chassis resource, you can see in
Example 2-2 on page 80, for example, that there are PCIeSlots and Sensors, among other
resources of the server.
In Example 2-3, you can see what is under Sensors. There, you can find the same sensors as
in the ASMI GUI (see Figure 2-18 on page 77). In the output, you find, for example, the
sensor total_power. When you ask for details about that sensor, as shown in Example 2-3,
you can see that the server consumed 1,426 watts at the time of running the command.
It is also possible to run operations on the server by using the POST method with the Redfish
API interface. In Example 2-5, you can see the curl commands that can start or stop the
server.
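The following minimal sketch with curl reproduces the flow that is described above. It assumes the standard DMTF Redfish session service and the resource names that are common on eBMC (OpenBMC-based) firmware; verify the exact URIs against the output of your own server.
BMC=<eBMC IP>
# Create a session and capture the X-Auth-Token header for later requests.
TOKEN=$(curl -k -s -D - -X POST https://$BMC/redfish/v1/SessionService/Sessions \
  -H "Content-Type: application/json" \
  -d '{"UserName": "admin", "Password": "<password>"}' | awk '/X-Auth-Token/ {print $2}' | tr -d '\r')
# Walk the Redfish tree: service root, Chassis collection, and chassis sensors.
curl -k -s -H "X-Auth-Token: $TOKEN" https://$BMC/redfish/v1
curl -k -s -H "X-Auth-Token: $TOKEN" https://$BMC/redfish/v1/Chassis
curl -k -s -H "X-Auth-Token: $TOKEN" https://$BMC/redfish/v1/Chassis/chassis/Sensors
# Start the server (use "ForceOff" or "GracefulShutdown" as the ResetType to stop it).
curl -k -s -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -X POST https://$BMC/redfish/v1/Systems/system/Actions/ComputerSystem.Reset \
  -d '{"ResetType": "On"}'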
For more information about Redfish, see Managing the system by using DMTF Redfish APIs.
For more information about how to work with Redfish in IBM Power servers, see Managing
Power Systems servers by using DMTF Redfish APIs.
If you want to use IPMI, you first must enable it. To do so, select Security and access →
Policies. Then, enable the policy Network IPMI (out-of-band IPMI).
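After the policy is enabled, the eBMC can be reached out of band with a standard IPMI client. The following is a minimal sketch with the open source ipmitool; the IP address, user, and password are placeholders.
# Query the current chassis power state over the IPMI lanplus interface.
ipmitool -I lanplus -H <eBMC IP> -U admin -P <password> chassis power status
# Power the server on (or use "off" or "cycle").
ipmitool -I lanplus -H <eBMC IP> -U admin -P <password> chassis power on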
The Power E1050 supports 24 - 96 processor cores. A minimum of two and a maximum of
four processor modules are required for each system. The modules can be added to a system
later through a Miscellaneous Equipment Specification (MES) order, but they require
scheduled downtime to install. All processor modules in one server must be at the same
gigahertz frequency (that is, they must be the same processor module feature number).
Table 3-1 lists the processor card Feature Codes that are available at initial order for Power
E1050 servers: the related module type, the number of functional cores, the typical frequency
range in maximum performance mode (MPM), and the socket options.
Note: IBM intends to support SAP HANA on the Power E1050 after initial general
availability (GA). SAP HANA on Power E1050 will be certified with four sockets (4S),
24 cores, and 16 TB of memory. All processor core options, including 12-core and 18-core,
also will be offered after certification.
For the Healthcare solution edition, the Processor Activation (24) for Healthcare Solution
#EHC8 (#EHCA) activation feature is available.
Both Elastic and Shared Utility Capacity options are available on all Power E1050 servers
through the Virtual Capacity (4586-COD) machine type and model (MTM). For more
information, see the IBM Entitled Systems Support (IBM ESS) website.
With Elastic Capacity on Power E1050 servers, you can deploy pay-for-use consumption of
processors by the day across a collection of Power E1050 and Power E950 servers.
Shared Utility Capacity on Power E1050 servers provides enhanced multisystem resource
sharing and by-the-minute tracking and consumption of compute resources across a
collection of systems within a Power Enterprise Pools 2.0 (PEP2). Shared Utility Capacity on
Power E1050 servers has the flexibility to tailor initial system configurations with the right mix
of purchased and pay-for-use consumption of processor.
Table 3-2 lists the Feature Codes for processor activation with PEP2.
#EPRW 1 core Base Processor Activation (Pools 2.0) for #EPEU Linux only
#EPRX 1 core Base Processor Activation (Pools 2.0) for #EPEV Linux only
#EPRZ 1 core Base Processor Activation (Pools 2.0) for #EPGW Linux only
Table 3-3 provides a compilation of all processor-related Feature Codes for processor
conversion for the Power E1050 servers.
#EPSA 1 core Base Processor Activation (Pools 2.0) for #EPEU (conv. from Static)
#EPSB 1 core Base Processor Activation (Pools 2.0) for #EPEV (conv. from Static)
#EPSD 1 core Base Processor Activation (Pools 2.0) for #EPGW (conv. from Static)
#EPSE 1 core Base Processor Activation (Pools 2.0) for #EPEU (conv. from Static) Linux only
#EPSF 1 core Base Processor Activation (Pools 2.0) for #EPEV (conv. from Static) Linux only
#EPSH 1 core Base Processor Activation (Pools 2.0) for #EPGW (conv.from Static) Linux only
#ERQ0 1 core Base Processor Activation (Pools 2.0) for EPEU (from prev.)
#ERQ1 1 core Base Processor Activation (Pools 2.0) for #EPEV (from prev.)
#ERQ2 1 core Base Processor Activation (Pools 2.0) for #EPGW (from prev.)
#ERQ4 1 core Base Processor Activation (Pools 2.0) for #EPEU (from prev.) Linux only
#ERQ5 1 core Base Processor Activation (Pools 2.0) for #EPEV (from prev.) Linux only
#ERQ6 1 core Base Processor Activation (Pools 2.0) for #EPGW (from prev.) Linux only
#EPSU 256 GB Base Memory Activation for Pools 2.0 - Linux only
#EPSQ 512 GB Base Memory Activation for Pools 2.0 (from Static)
#EPSS 100 GB Base Memory Activation for Pools 2.0 (from Static)
#ERQN 256 GB Base Linux only Memory Activation (Pools 2.0) (from prev.)
#EPST 512 GB Base Memory Activation for Pools 2.0 (from Linux)
The E1050 9043-MRX server supports, as options, Active Memory Expansion (AME) with
Feature Code #EMBM and Active Memory Mirroring (AMM) for the hypervisor with Feature
Code #EM81.
Note: A Titanium-certified (best in class) power supply has an efficiency of 90% - 95% on
load.
Every 1% efficiency loss at the PSU requires more than 1% more power to accomplish the
same output. Efficient power supplies help data centers to achieve their sustainability
goals.
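For example, assuming a constant output load \(P_{out}\), a drop from 94% to 93% efficiency increases the required input power by slightly more than 1%:
\[
\frac{P_{out}/0.93}{P_{out}/0.94} - 1 = \frac{0.94}{0.93} - 1 \approx 1.1\%
\]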
The following sections describe the supported adapters and provide tables of orderable and
supported feature numbers. The tables indicate OS support (AIX and Linux) for each of the
adapters.
Note: The maximum number of adapters in each case might require the server to have an
external PCIe expansion drawer.
The Order type table column in the following subsections is defined as follows:
Initial Denotes the orderability of a feature only with the purchase of a new
system.
MES Denotes the orderability of a feature only as part of an MES upgrade
purchase for an existing system.
Both Denotes the orderability of a feature as part of both new and MES
upgrade purchases.
Supported Denotes that the feature is not orderable with a system, but is supported;
that is, the feature can be migrated from existing systems, but cannot
be ordered new.
Feature Code | Description | OS support | Order type
#EC21 | PCIe3 LP 2-port 25/10 Gb NIC&ROCE SR/Cu Adapter | AIX and Linux | Supported
#EC2U | PCIe3 2-Port 25/10 Gb NIC&ROCE SR/Cu Adapter^a | AIX and Linux^b | Both
#EC66 | PCIe4 LP 2-port 100 Gb ROCE EN adapter^c | AIX and Linux^b | Both
#EN0S | PCIe2 4-port (10 Gb+1 GbE) SR+RJ45 Adapter | AIX and Linux | Supported
#EN0U | PCIe2 4-port (10 Gb+1 GbE) Copper SFP+RJ45 Adapter | AIX and Linux | Supported
#EN0W | PCIe2 2-port 10/1 GbE BaseT RJ45 Adapter | AIX and Linux | Both
a. Requires one or two appropriate transceivers to provide 10 Gbps SFP+ (#EB46), 25 Gbps
SFP28 (#EB47), or 1 Gbps RJ45 (#EB48) connectivity as required.
b. Linux support requires Red Hat Enterprise Linux (RHEL) 8.4 or later, Red Hat Enterprise Linux
for SAP 8.4 or later, SUSE Linux Enterprise Server 15 Service Pack 3 or later, SUSE Linux
Enterprise Server for SAP with SUSE Linux Enterprise Server 15 Service Pack 3 or later, or
Red Hat OpenShift Container Platform 4.9 or later. All require Mellanox OFED 5.5 drivers or
later.
c. To deliver the full performance of both ports, each 100 Gbps Ethernet adapter must be
connected to a PCIe slot with 16 lanes (x16) of PCIe Gen4 connectivity.
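The x16 requirement in footnote c follows from approximate arithmetic (ignoring protocol overhead): the two 100 Gbps ports together need about 25 GBps, which is more than eight PCIe Gen4 lanes can deliver but fits within 16 lanes:
\[
2 \times 100\,\text{Gbps} = 25\,\text{GBps}, \qquad \text{Gen4 x8} \approx 15.8\,\text{GBps} < 25\,\text{GBps} < \text{Gen4 x16} \approx 31.5\,\text{GBps}
\]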
All supported FC adapters have LC connections. If you are attaching a switch or a device with
an SC type fiber connector, then an LC-SC 50-Micron Fiber Converter Cable (#2456) or an
LC-SC 62.5-Micron Fiber Converter Cable (#2459) is required.
Table 3-7 lists the FC adapters that are supported within the Power E1050 servers.
Table 3-7 Fibre Channel adapters for the Power E1050 servers
Feature Description OS support Order type
Code
#EN1A PCIe3 32 Gb 2-port Fibre Channel Adapter AIX and Linux Both
#EN1C PCIe3 16 Gb 4-port Fibre Channel Adapter AIX and Linux Both
#EN1E PCIe3 16 Gb 4-port Fibre Channel Adapter AIX and Linux Both
#EN1G PCIe3 2-port 16 Gb Fibre Channel Adapter AIX and Linux Both
#EN1J PCIe4 32 Gb 2-port Optical Fibre Channel Adapter AIX and Linux Both
#EN2A PCIe3 16 Gb 2-port Fibre Channel Adapter AIX and Linux Both
Table 3-8 lists the SAS adapters that are supported within the Power E1050 servers and with
the PCIe expansion drawer (EMX0) connected.
#EJ0J PCIe3 RAID SAS Adapter Quad-Port 6 Gb x8 AIX and Linux Both
#EJ14 PCIe3 12 GB Cache RAID PLUS SAS Adapter Quad-port 6 Gb x8 AIX and Linux Both
Table 3-9 lists the USB adapter that is supported within the Power E1050 with the PCIe
expansion drawer (EMX0) connected to it.
#EC6K PCIe2 LP 2-port USB 3.0 Adapter AIX and Linux Both
The 4769 PCIe Cryptographic Coprocessor includes acceleration for AES, DES, TDES,
HMAC, Cipher-based Message Authentication Code (CMAC), MD5, and multiple Secure Hash
Algorithm (SHA) hashing methods; modular-exponentiation hardware for RSA and Elliptic
Curve Cryptography (ECC); and full-duplex direct memory access (DMA) communications.
The 4769 PCIe Cryptographic Coprocessor is verified by NIST at FIPS 140-2 Level 4, which
is the highest level of certification currently achievable for commercial cryptographic devices.
Table 3-10 summarizes the cryptographic co-processor and accelerator adapters that are
supported in the Power10 processor-based Enterprise Midrange servers.
Table 3-10 Crypto adapter features for the Power E1050 servers
Feature Description OS support Order type
Code
#EJ35a PCIe3 Crypto Coprocessor no BSC 4769 (E1050 AIX and Linux Both
chassis only) Direct only
#EJ37a PCIe3 Crypto Coprocessor BSC-Gen3 4769 (PCIe AIX and Linux Both
expansion drawer only) Direct only
a. Feature Codes #EJ35 and #EJ37 are both Feature Codes representing the same physical
card. #EJ35 indicates no BSC. #EJ37 indicates a Gen 3 BSC.
Every Power E1050 configuration supports single-port NVMe mode only. Dual-port NVMe
mode is not supported. Only the Power E1050 4-socket configuration supports up to 10
NVMe disk drives. Configurations with three and two sockets support up to six NVMe disk
drives. NVMe disk drives that are used as mirrors for the operating system (OS) and for dual
Virtual I/O Server (VIOS) redundancy must be plugged according to the rules to ensure as
many separate hardware paths as possible for the dual and mirrored pairs.
Table 3-11 shows the NVMe location codes inside the server.
Location codes: P1-C0, P1-C1, P1-C2, P1-C3, P1-C4, P1-C5, P1-C6, P1-C7, P1-C8, P1-C9
All NVMe disk drives are driven directly from the system backplane, so there is no need to
have a PCIe card or cables that are dedicated to this purpose. The 7-mm NVMe disk drives
from the IBM Power E950 are also supported on the Power E1050, but you must order the
NVMe carrier conversion kit (#EC7X) to hold these drives.
If you have an odd number of NVMe disk drives, the disks must be installed by following the
same order as in Table 3-12 on page 92. For example, if the server configuration provides
three NVMe disk drives, the suggested order is NVMe3, NVMe4, and NVMe8.
Figure 3-1 shows the NVMe bays according to the NVMe location codes.
Feature Code | Description | Min | Max | OS support
#ECSJ | Mainstream 800 GB solid-state drive (SSD) NVMe U.2 module | 0 | 10 | AIX and Linux
#EC5K | Mainstream 1.6 TB SSD NVMe U.2 module | 0 | 10 | AIX and Linux
#EC5L | Mainstream 3.2 TB SSD NVMe U.2 module | 0 | 10 | AIX and Linux
#EC5V | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#EC5X | Mainstream 800 GB SSD PCIe3 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#EC7Q | 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux | 0 | 10 | AIX and Linux
#EC7T | 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux | 0 | 10 | AIX and Linux
#ES1E | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#ES1G | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#ES3B | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#ES3D | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
#ES3F | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux | 0 | 10 | AIX and Linux
Table 3-14 on page 95 shows the drive features for the 7226 Model 1U3.
Note: Existing 32 GB (#EU08) and 1.5 TB (#EU15) removable disk drive cartridges are still
supported on the RDX docking stations.
PCIe3 RAID SAS Tape/DVD Adapter Quad-port 6 Gb x8 (#EJ10) supports Tape/DVD with the
following cable options:
SAS AE1 Cable 4 m - HD Narrow 6 Gb Adapter to Enclosure (#ECBY)
SAS YE1 Cable 3 m - HD Narrow 6 Gb Adapter to Enclosure (#ECBZ)
Note: Any of the existing 7216-1U2 and 7214-1U2 multimedia drawers also are supported.
The NVMe plug rules that are shown in Table 3-17 are recommended for the Power E1050 to
provide the most redundancy in hardware for OS mirroring.
For OS NVMe mirror pairs, only an OS-controlled RAID 0, 1 is supported. There is no support
for hardware mirroring with the NVMe backplane.
Note: It is recommended (but not required) that the two mirrored NVMe drives of a pair
have the same capacity.
Note: In actual operation, there are many running partitions in the system, so the mirror
pair selections depend on the drives that are available or allocated to the partition.
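Because the mirroring is OS-controlled, it is configured with the normal OS tools. The following is a minimal AIX sketch, assuming the two NVMe devices of a pair appear as hdisk0 (the current rootvg disk) and hdisk1; on Linux, an equivalent setup typically uses software RAID 1 (for example, mdadm).
# Add the second NVMe device to rootvg and mirror all logical volumes onto it.
extendvg rootvg hdisk1
mirrorvg rootvg hdisk1
# Re-create the boot image on both disks and update the normal boot list.
bosboot -ad /dev/hdisk0
bosboot -ad /dev/hdisk1
bootlist -m normal hdisk0 hdisk1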
Table 3-18 shows the internal storage option that is installed in the Power E1050 server.
Table 3-19 lists the available NVMe drive Feature Codes for the Power E1050 server.
Feature Code | Description | Min | Max | OS support
#EC5V | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#EC7Q | 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux (7 mm) | 0 | 10 | AIX and Linux
#EC7T | 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#ES1E | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#ES1G | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#ES3B | Enterprise 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#ES3D | Enterprise 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
#ES3F | Enterprise 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux (15 mm) | 0 | 10 | AIX and Linux
Attention: A minimum quantity of one SSD NVMe drive must be ordered with an AIX or
Linux OS if SAN Boot (#0837) or Remote Load Source (#EHR2) is not ordered:
If SAN Boot (#0837) is ordered, then an adapter (FC or FCoE) that supports FC
protocols to attach the system to a SAN must be ordered or present on the system
instead.
If Remote Load Source (#EHR2) is ordered, then at least one HDD or SSD drive must
be present in the EXP24SX (#ESLS) drawer (or in an existing EXP12SX (#ESLL) drawer).
There is no SAS backplane that is supported on the Power E1050 server. SAS drives can be
placed only in the IBM EXP24SX SAS Storage Enclosure, which is connected to the system by
using SAS adapters.
The default slot priority gives the 15-mm NVMe drive options a higher priority than the
#EC7X conversion kit.
If #EC7X is ordered without drives, customers must self-install their existing 7-mm NVMe
devices into the conversion kit.
If you need more disks than are available with the internal disk bays, you can attach external
disk subsystems that can be attached to the Power E1050 server, such as:
EXP24SX SAS Storage Enclosure
IBM System Storage
Note: The existing EXP12SX SAS Storage Enclosure (#ESLL) is still supported. Earlier
storage enclosures like the EXP12S SAS Drawer (#5886) and EXP24 SCSI Disk Drawer
(#5786) are not supported on the Power E1050.
The PCIe Gen3 I/O Expansion Drawer has two redundant, hot-plug power supplies. Each
power supply has its own separately ordered power cord. The two power cords plug in to a
power supply conduit that connects to the power supply. The single-phase AC power supply is
rated at 1030 W and can use 100 - 120 V or 200 - 240 V. If using 100 - 120 V, then the
maximum is 950 W. It is a best practice that the power supply connects to a power distribution
unit (PDU) in the rack. Power Systems PDUs are designed for a 200 - 240 V electrical source.
The drawer has fixed rails that can accommodate racks with depths 27.5" (69.9 cm) - 30.5"
(77.5 cm).
A BSC is used to house the full-height adapters that go into these slots. The BSC is the same
BSC that is used with the previous generation server's 12X attached I/O drawers (#5802,
#5803, #5877, and #5873). The drawer includes a full set of BSCs, even if the BSCs are
empty.
Concurrent repair, and adding or removing PCIe adapters is done by HMC-guided menus or
by OS support utilities.
Figure 3-4 Rear view of the PCIe Gen3 I/O expansion drawer
Figure 3-5 Rear view of a PCIe Gen3 I/O expansion drawer with PCIe slots location codes
Table 3-20 PCIe slot locations for the PCIe Gen3 I/O expansion drawer with two fanout modules
Slot Location code Description
All slots support full-length, full-height adapters or short (LP) adapters with a full-height
tailstock in a single-wide, Gen3 BSC.
Slots C1 and C4 in each PCIe3 6-slot fanout module are x16 PCIe3 buses, and slots C2,
C3, C5, and C6 are x8 PCIe buses.
All slots support enhanced error handling (EEH).
All PCIe slots are hot-swappable and support concurrent maintenance.
Table 3-21 summarizes the maximum number of I/O drawers that are supported and the total
number of PCI slots that are available.
Table 3-21 Maximum number of I/O drawers that are supported and total number of PCI slots
Server | Maximum number of I/O expansion drawers | Maximum number of I/O fanout modules | Maximum PCIe slots
Power E1050 (3- and 4-socket) | 4 | 8 | 51
The cable adapter can be placed in any slot in the system node. However, if an I/O expansion
drawer is present, the PCIe x16 to CXP Converter Adapter must be given the highest priority.
The maximum number of adapters that is supported is eight per system.
Note: The 3.0-meter copper cable pair connects a PCIe3 fanout module (#EMXH) in the
PCIe Gen3 I/O Expansion Drawer to a PCIe Optical Converter Adapter (#EJ2A) in the
system unit. There are two identical copper cables in the cable pair, each with two CXP
connectors.
The output of the adapter is a CXP interface that also can be used for this copper cable
pair.
These cables are not redundant; however, the loss of one cable only reduces the I/O
bandwidth (that is, the number of lanes that are available to the I/O module) by 50%.
Cable lengths: Use the 3.0-m cables for intra-rack installations. Use the 10.0-m cables for
inter-rack installations.
Limitation: You cannot mix copper and optical cables on the same PCIe Gen3 I/O drawer.
Both fanout modules either both use copper cables or both use optical cables.
A minimum of one PCIe x16 to CXP Converter adapter for PCIe3 Expansion Drawer is
required to connect to the PCIe3 6-slot fanout module in the I/O expansion drawer. The fanout
module has two CXP ports. The top CXP port of the fanout module is cabled to the top CXP
port of the PCIe x16 to CXP Converter adapter. The bottom CXP port of the fanout module is
cabled to the bottom CXP port of the same PCIe x16 to CXP Converter adapter.
Figure 3-6 shows the connector locations for the PCIe Gen3 I/O Expansion Drawer.
Figure 3-6 Connector locations for the PCIe Gen3 I/O expansion drawer
PCIe Gen3 I/O expansion drawer system power control network cabling
There is no separate system power control network (SPCN) cabling to control and monitor
the status of power and cooling within the I/O drawer. The SPCN capabilities are integrated
into the optical cables.
Note: Feature conversions are available for earlier versions of the optical cables (#ECC7 -
#ECCX, and #ECC8 - #ECCY) and fan-out modules (#EMXG - #EMXH).
Table 3-23 provides an overview of all the PCIe adapters for the I/O expansion drawer that are
connected to the Power E1050 server. Available means that the adapter is available and is
orderable. Supported means that the adapter is supported on the Power E1050 server during
a model conversion, that is, the adapter works, but additional adapters cannot be ordered on
the new system.
Table 3-23 Available and supported I/O adapters for the I/O expansion drawer
Type | Feature Code | Adapter | Available or Supported
RIO #EJ2A PCIe3 x16 to CXP Converter Adapter (support AOC) Available
#EN0U PCIe2 4-port (10 Gb+1 GbE) Copper SFP+RJ45 Adapter Supported
The EXP24SX drawer is a storage expansion enclosure with twenty-four 2.5-inch SFF SAS
bays. It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA of space in a 19-inch rack.
The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.
The enclosure has adjustable depth rails and can accommodate rack depths from 59.5 - 75
cm (23.4 - 29.5 inches). Slot filler panels are provided for empty bays when initially shipped
from IBM.
With AIX, Linux, or VIOS, the EXP24SX can be ordered with four sets of six bays (mode 4),
two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). It is possible to change the
mode setting in the field by using software commands along with a documented procedure.
Note: For the EXP24SX drawer, a maximum of twenty-four 2.5-inch SSDs or 2.5-inch
HDDs are supported in the #ESLS 24 SAS bays. HDDs and SSDs cannot be mixed in the
same mode-1 drawer. HDDs and SSDs can be mixed in a mode-2 or mode-4 drawer, but
they cannot be mixed within a logical split of the drawer. For example, in a mode-2 drawer
with two sets of 12 bays, one set can hold SSDs and one set can hold HDDs, but you
cannot mix SSDs and HDDs in the same set of 12 bays.
Important: When changing modes, a skilled, technically qualified person must follow the
special documented procedures. Improperly changing modes can destroy RAID sets,
which prevent access to data, or allow other partitions to access another partition’s data.
Figure 3-9 Front view of the ESLS storage enclosure with mode groups and drive locations
Four mini-SAS HD ports on the EXP24SX are attached to PCIe Gen3 SAS adapters. The
following PCIe3 SAS adapters support the EXP24SX:
PCIe3 RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0J)
PCIe3 12 GB Cache RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0L)
PCIe3 LP RAID SAS Adapter Quad-port 6 Gb x8 (#EJ0M)
PCIe3 12 GB Cache RAID Plus SAS Adapter Quad-port 6 Gb x8 (#EJ14)
The attachment between the EXP24SX drawer and the PCIe Gen 3 SAS adapter is through
SAS YO12 or X12 cables. The PCIe Gen 3 SAS adapters support 6 Gb throughput. The
EXP24SX drawer can support up to 12 Gb throughput if future SAS adapters support that
capability. The cable options are:
3.0M SAS X12 Cable (Two Adapter to Enclosure) (#ECDJ)
4.5M SAS X12 Active Optical Cable (Two Adapter to Enclosure) (#ECDK)
10M SAS X12 Active Optical Cable (Two Adapter to Enclosure) (#ECDL)
1.5M SAS YO12 Cable (Adapter to Enclosure) (#ECDT)
3.0M SAS YO12 Cable (Adapter to Enclosure) (#ECDU)
4.5M SAS YO12 Active Optical Cable (Adapter to Enclosure) (#ECDV)
10M SAS YO12 Active Optical Cable (Adapter to Enclosure) (#ECDW)
Figure 3-10 shows the connector locations for the EXP24SX storage enclosure.
Figure 3-10 Rear view of the EXP24SX with location codes and different split modes
For more information about SAS cabling and cabling configurations, see SAS cabling for the
ESLS storage enclosures.
For more information about the various offerings, see Data Storage Solutions.
Order information: It is a best practice that the Power E1050 server be ordered with an
IBM 42U enterprise rack #ECR0 (7965-S42). This rack provides a complete and
higher-quality environment for IBM Manufacturing system assembly and testing, and is
delivered as a complete package.
If a system is installed in a rack or cabinet that is not from IBM, ensure that the rack meets the
requirements that are described in 3.10.6, “Original equipment manufacturer racks” on
page 121.
Responsibility: The customer is responsible for ensuring the installation of the drawer in
the preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and
compatible with the drawer requirements for power, cooling, cable management, weight,
and rail security.
Vertical PDUs: All PDUs that are installed in a rack that contains a Power E1050 server
must be installed horizontally to allow for cable routing in the sides of the rack.
Compared to the 7965-94Y Slim Rack, the Enterprise Slim Rack provides extra strength and
shipping and installation flexibility.
The 7965-S42 rack includes space for up to four PDUs in side pockets. Extra PDUs beyond
four are mounted horizontally and each uses 1U of rack space.
The Enterprise Slim Rack comes with options for the installed front door:
Basic Black/Flat (#ECRM)
High-End appearance (#ECRF)
OEM Black (#ECRE)
All options include perforated steel, which provides ventilation, physical security, and visibility
of indicator lights in the installed equipment within. All options come with a lock and
mechanism included that is identical to the lock on the rear doors. Only one front door must
be included for each rack ordered. The basic door (#ECRM) and OEM door (#ECRE) can be
hinged on the left or right side.
Orientation: #ECRF must not be flipped because the IBM logo would be upside down.
At the rear of the rack, either a perforated steel rear door (#ECRG) or a Rear Door Heat
Exchanger (RDHX) can be installed. The only supported RDHX is IBM machine type
1164-95X, which can remove up to 30,000 watts (102,000 BTU) of heat per hour by using
chilled water. The no additional charge Feature Code #ECR2 is included with the Enterprise
Slim Rack as an indicator when ordering the RDHX.
The basic door (#ECRG) can be hinged on the left or right side, and includes a lock and
mechanism identical to the lock on the front door. Either the basic rear door (#ECRG) or the
RDHX indicator (#ECR2) must be included with the order of a new Enterprise Slim Rack.
Due to the depth of the Power E1050 server model, the 5-inch rear rack extension (#ECRK) is
required for the Enterprise Slim Rack to accommodate this system. This extension expands
the space that is available for cable management and allows the rear door to close safely.
Lifting considerations
Three to four service personnel are required to manually remove or insert a system unit into a
rack, given its dimensions, weight, and content. To avoid the need for this many people to
assemble at a client site for a service action, a lift tool can be useful. Similarly, if the client has
chosen to install this customer setup (CSU) system, similar lifting considerations apply.
The Power E1050 server has a maximum weight of 70.3 kg (155 lb). However, by temporarily
removing the power supplies, fans, and RAID assembly, the weight is easily reduced to a
maximum of 55 kg (121 lb).
IBM Manufacturing integrates only the newer PDUs with the Power E1050 server.
IBM Manufacturing does not support integrating earlier PDUs, such as #7188, #7109, or
#7196. Clients can choose to use older IBM PDUs in their racks, but must install those earlier
PDUs at their site.
Table 3-25 summarizes the high-function PDU FCs for 7965-S42 followed by a descriptive list.
Table 3-25 High-function PDUs that are available with IBM Enterprise Slim Rack (7965-S42)
PDUs | 1-phase or 3-phase wye (depending on country wiring standards) | 3-phase 208 V (depending on country wiring standards)
Power sockets: The Power E1050 server takes IEC 60320 C19/C20 mains power and not
C13. Ensure that the correct power cords and PDUs are ordered or available in the rack.
The PDU receives power through a UTG0247 power-line connector. Each PDU requires one
PDU-to-wall power cord. Various power cord features are available for various countries and
applications by varying the PDU-to-wall power cord, which must be ordered separately.
Each power cord provides the unique design characteristics for the specific power
requirements. To match new power requirements and save previous investments, these
power cords can be requested with an initial order of the rack or with a later upgrade of the
rack features.
Table 3-26 shows the available wall power cord options for the PDU features, which must be
ordered separately.
Table 3-26 PDU-to-wall power cord options for the PDU features
Feature Code | Wall plug | Rated voltage (V AC) | Phase | Rated amperage | Geography
#6492 | IEC 309, 2P+G, 60 A | 200 - 208, 240 | 1 | 48 amps | US, Canada, Latin America (LA), and Japan
#6654 | NEMA L6-30 | 200 - 208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
To better enable electrical redundancy, the Power E1050 server has four power supplies that
must be connected to separate PDUs, which are not included in the base order.
For maximum availability, a best practice is to connect power cords from the same system to
two separate PDUs in the rack, and to connect each PDU to independent power sources.
For more information about the power requirements and power cords for the 7965-94Y rack,
see IBM Documentation.
Order information: The racking approach for the initial order must be 7965-S42 or
#ECR0. If an extra rack is required for I/O expansion drawers, an MES to a system or an
#0553 must be ordered.
The IBM System Storage 7226 Multi-Media Enclosure supports LTO Ultrium and DAT160
Tape technology, DVD-RAM, and RDX removable storage requirements on the following IBM
systems:
IBM Power6 processor-based systems
IBM Power7 processor-based systems
IBM Power8 processor-based systems
IBM Power9 processor-based systems
#5763: DVD Front USB Port Sled with DVD-RAM USB Drive (Available)
Removable RDX drives are in a rugged cartridge that inserts into an RDX removable (USB)
disk docking station (#1103 or #EU03). RDX drives are compatible with docking stations,
which are installed internally in Power8, Power9, and Power10 processor-based servers,
where applicable.
The IBM System Storage 7226 Multi-Media Enclosure offers a customer-replaceable unit
(CRU) maintenance service to help make the installation or replacement of new drives
efficient. Other 7226 components also are designed for CRU maintenance.
The IBM System Storage 7226 Multi-Media Enclosure is compatible with most Power8,
Power9, and Power10 processor-based systems that offer current level AIX and Linux OSs.
For a complete list of host software versions and release levels that support the IBM System
Storage 7226 Multi-Media Enclosure, see IBM System Storage Interoperation Center (SSIC).
Note: Any of the existing 7216-1U2, 7216-1U3, and 7214-1U2 multimedia drawers are
also supported.
The Model TF5 is a follow-on product to the Model TF4 and offers the following features:
A slim, sleek, and lightweight monitor design that occupies only 1U (1.75 in.) in a 19-inch
standard rack
An 18.5-inch (409.8 mm x 230.4 mm) flat panel TFT monitor with truly accurate images and
virtually no distortion
The ability to mount the IBM Travel Keyboard in the 7316-TF5 rack keyboard tray
Support for the IBM 1x8 Rack Console Switch (#4283), an IBM Keyboard/Video/Mouse (KVM)
switch
The #4283 is a 1x8 Console Switch that fits in the 1U space behind the TF5. It is a
CAT5-based switch. It contains eight analog rack interface (ARI) ports for connecting PS/2
or USB console switch cables. It supports chaining of servers that use an IBM Conversion
Options switch cable (#4269). This feature provides four cables that connect a KVM switch
to a system, or can be used in a daisy-chain scenario to connect up to 128 systems to a
single KVM switch. It also supports server-side USB attachments.
IBM Documentation provides the general rack specifications, including the following
information:
The rack or cabinet must meet the EIA Standard EIA-310-D for 19-inch racks, which was
published August 24, 1992. The EIA-310-D standard specifies internal dimensions, for
example, the width of the rack opening (width of the chassis), the width of the module
mounting flanges, and the mounting hole spacing.
The front rack opening must be a minimum of 450 mm (17.72 in.) wide, and the
rail-mounting holes must be 465 mm plus or minus 1.6 mm (18.3 in. plus or minus 0.06 in.)
apart on center (horizontal width between vertical columns of holes on the two
front-mounting flanges and on the two rear-mounting flanges).
Figure 3-12 is a top view showing the rack specification dimensions.
The vertical distance between mounting holes must consist of sets of three holes that are
spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.7 mm
(0.5 in.) on center, which makes each three-hole set of vertical hole spacing 44.45 mm
(1.75 in.) apart on center.
The following rack hole sizes are supported for racks where IBM hardware is mounted:
– 7.1 mm (0.28 in.) plus or minus 0.1 mm (round)
– 9.5 mm (0.37 in.) plus or minus 0.1 mm (square)
The rack or cabinet must be capable of supporting an average load of 20 kg (44 lb.) of product
weight per EIA unit. For example, a four-EIA-unit drawer has a maximum drawer weight of 80 kg
(176 lb.).
Chapter 4. Resiliency
The reliability of systems starts with components, devices, and subsystems that are highly
reliable. During the design and development process, subsystems go through rigorous
verification and integration testing processes. During system manufacturing, systems go
through a thorough testing process to help ensure the highest level of product quality.
At a system level, availability is a measure of how infrequently failures interrupt workloads.
The longer the interval between interruptions, the more available a system is.
Serviceability is about how efficiently failures are identified and dealt with, and how
application outages are minimized during repair.
The Power10 E1050 comes with the following reliability, availability, and serviceability (RAS)
characteristics:
Enterprise baseboard management controller (BMC) service processor for system
management and service
Open Memory Interface (OMI) and Differential Dual Inline Memory Modules (DDIMMS)
RAS
Power10 processor RAS
I/O subsystem RAS
Serviceability
Additionally, the following capabilities are part of the Power E1050 RAS:
Redundant and hot-plug cooling
Redundant and hot-plug power
Redundant voltage regulators
Time of day battery concurrent maintenance
Feature | Power10 scale-out servers | Power E1050 | Power E1080
Dynamic Memory Row repair and spare dynamic RAM (DRAM) capability | 2U DDIMM: No spare DRAM. Yes: Dynamic Row Repair. | Yes: Base. 4U DDIMM with 2 spare DRAMs per rank. Yes: Dynamic Row Repair. | Yes: Base. 4U DDIMM with 2 spare DRAMs per rank. Yes: Dynamic Row Repair.
Active Memory Mirroring (AMM) for Hypervisor | Yes: Base. New to scale-out. | Yes: Base. | Yes: Base.
Diagnostic monitoring of recoverable errors from the processor chipset is performed on the
system processor itself, and the unrecoverable diagnostic monitoring of the processor chipset
is performed by the service processor. The service processor runs on its own power
boundary and does not require resources from a system processor to be operational to
perform its tasks.
The service processor supports surveillance of the connection to the Hardware Management
Console (HMC) and to the system firmware (hypervisor). It also provides several remote
power control options, environmental monitoring, reset, restart, remote maintenance, and
diagnostic functions, including console mirroring. The BMC service processor's menus (ASMI)
can be accessed concurrently during system operation, allowing nondisruptive abilities to
change system default parameters, view and download error logs, and check system health.
Redfish, an industry-standard API for server management, enables IBM Power servers to be
managed individually or in a large data center. Standard functions such as inventory, event
logs, sensors, dumps, and certificate management are all supported by Redfish. In addition,
new user management features support multiple users and privileges on the BMC through
Redfish or ASMI. User management through lightweight directory access protocol (LDAP)
also is supported. The Redfish events service provides a means for notification of specific
critical events such that actions can be taken to correct issues. The Redfish telemetry service
provides access to a wide variety of data (such as power consumption, and ambient, core,
DIMM, and I/O temperatures) that can be streamed at periodic intervals.
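As an illustration only, the following minimal Python sketch shows the kind of inventory query that a Redfish client might run against a BMC. The BMC address, credentials, and certificate handling are assumptions for the example and are not taken from this paper:

import requests  # third-party HTTP client library

# Hypothetical BMC address and credentials; replace with real values for your environment.
BMC = "https://bmc.example.com"
AUTH = ("admin", "password")

# Read the standard Redfish service root, then walk the Systems collection.
# verify=False is only for lab use with self-signed BMC certificates.
root = requests.get(f"{BMC}/redfish/v1", auth=AUTH, verify=False).json()
systems = requests.get(f"{BMC}{root['Systems']['@odata.id']}", auth=AUTH, verify=False).json()

# Print basic inventory data for each system resource that the BMC exposes.
for member in systems["Members"]:
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("SerialNumber"), system.get("PowerState"))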
The service processor monitors the operation of the firmware during the boot process and
also monitors the hypervisor for termination. The hypervisor monitors the service processor
and reports a service reference code when it detects surveillance loss. In the PowerVM
environment, it performs a reset/reload if it detects the loss of the service processor.
This new memory subsystem design delivers solid RAS. Unlike the processor RAS
characteristics, the E1050 memory RAS varies significantly from that of the IBM Power E950.
The Power E1050 supports the same 4U DDIMM height as the IBM Power E1080.
Table 4-2 compares memory DIMMs, and highlights the differences between the Power E950
DIMM and the Power E1050 DDIMM. It also provides the RAS impacts of the DDIMMs, which
are applicable to the Power E1080 servers.
Table 4-2 Power E950 DIMMs versus Power E1050 DDIMMs RAS comparison
Item | Power E950 memory | Power E1050 memory | RAS impact
DIMM form factor | Riser card plus industry-standard DIMMs | 4U DDIMM | Power E1050 4U DDIMM: Single field-replaceable unit (FRU) with fewer components to replace. Power E950 DIMM: A separate FRU is used for a memory buffer on a riser card and the industry-standard DIMMs.
Symbol correction | Single symbol correction | Dual symbol correction | Power E1050 4U DDIMM: A data pin failure (1 symbol) that lines up with a single cell failure on another DRAM is still correctable. Power E950 DIMM: A data pin failure (1 symbol) that lines up with a single cell failure on another DRAM is uncorrectable.
DRAM row repair | Static | Dynamic | Power E1050 4U DDIMM: Detect, fix, and restore at run time without a system outage. Power E950 DIMM: Detect at run time, but a fix and restore requires a system restart.
The Power E1050 processor module is a dual-chip module (DCM) that differs from that of the
Power E950, which has a single-chip module (SCM). Each DCM provides up to 24 processor cores,
which is up to 96 cores for a 4-socket (4S) Power E1050. In comparison, a 4S Power E950 supports
up to 48 cores. The internal processor buses are twice as fast, with the Power E1050 running at
32 Gbps.
Despite the increased cores and the faster high-speed processor bus interfaces, the RAS
capabilities are equivalent, with features like Processor Instruction Retry (PIR), L2/L3 cache
ECC protection with cache line delete, and the CRC fabric bus retry that is a characteristic of
Power9 and Power10 processors. As with the Power E950, when an internal fabric bus lane
encounters a hard failure in a Power E1050, the lane can be dynamically spared out.
Unlike the Power E950, the Power E1050 location codes start from index 0, as with all Power10
systems. However, slot c0 is not a general-purpose PCIe slot because it is reserved for the
eBMC Service Processor card.
Another difference between the Power E950 and the Power E1050 is that all the Power E1050
slots are directly connected to a Power10 processor. In the Power E950, some slots are
connected to the Power9 processor through I/O switches.
All 11 PCIe slots are available in 3-socket or 4-socket DCM configurations. In the 2-socket
DCM configuration, only seven PCIe slots are functional.
DASD options
The Power E1050 provides 10 internal Non-volatile Memory Express (NVMe) drives at Gen4
speeds, and the drives are concurrently maintainable. The NVMe drives are connected to
DCM0 and DCM3. In a 2-socket DCM configuration, only six of the drives are available.
To access all 10 internal NVMe drives, you must have a 4S DCM configuration.
Unlike the Power E950, the Power E1050 has no internal serial-attached SCSI (SAS) drives.
You can use an external drawer to provide SAS drives.
The internal NVMe drives support OS-controlled RAID 0 and RAID 1 arrays, but no hardware
RAID. For best redundancy, use an OS mirror and a dual Virtual I/O Server (VIOS) mirror. To
ensure as much separation as possible in the hardware path between mirror pairs, the
following NVMe configuration is recommended:
Mirrored OS: NVMe3 and NVMe4 pairs, or NVMe8 and NVMe9 pairs
Mirrored dual VIOS:
– Dual VIOS: NVMe3 for VIOS1, NVMe4 for VIOS2.
– Mirrored dual VIOS: NVMe9 mirrors NVMe3, and NVMe8 mirrors NVMe4.
4.5 Serviceability
The purpose of serviceability is to efficiently repair the system while attempting to minimize or
eliminate any impact to system operation. Serviceability includes system installation,
Miscellaneous Equipment Specification (MES) (system upgrades/downgrades), and system
maintenance or repair. Depending on the system and warranty contract, service may be
performed by the client, an IBM representative, or an authorized warranty service provider.
The serviceability features that are delivered in this system help provide a highly efficient
service environment by incorporating the following attributes:
Designed for IBM System Services Representative (IBM SSR) setup, install, and service.
Error Detection and Fault Isolation (ED/FI).
FFDC.
Light path service indicators.
Service and FRU labels that are available on the system.
FFDC information, error data analysis, and fault isolation are necessary to implement the
advanced serviceability techniques that enable efficient service of the systems and to help
determine the failing items.
In the rare absence of FFDC and Error Data Analysis, diagnostics are required to re-create
the failure and determine the failing items.
4.5.4 Diagnostics
The general diagnostic objectives are to detect and identify problems so that they can be
resolved quickly. Elements of the IBM diagnostics strategy include:
Provides a common error code format equivalent to a system reference code with a
PowerVM, system reference number, checkpoint, or firmware error code.
Provides fault detection and problem isolation procedures. Supports remote connection,
which can be used by the IBM Remote Support Center or IBM Designated Service.
Provides interactive intelligence within the diagnostics with detailed online failure
information while connected to the IBM back-end system.
4.5.9 QR labels
QR labels are placed on the system to provide access to key service functions through a
mobile device. When the QR label is scanned, it directs you to a landing page for Power10
processor-based systems, which contains the service functions of interest for each machine
type and model (MTM) while you are physically at the server. These functions include
installation and repair instructions, reference code lookup, and other items.
The system can call home through the OS to report platform-recoverable errors and errors
that are associated with PCI adapters or devices.
In the HMC-managed environment, a Call Home service request is initiated from the HMC,
and the pertinent failure data with service parts information and part locations is sent to an
IBM service organization. Customer contact information and specific system-related data,
such as the MTM and serial number, along with error log data that is related to the failure, are
sent to IBM Service.
The goal is to briefly define what RAS is and look at how reliability and availability are
measured.
A 50-year MTBF might suggest that a system runs 50 years between failures, but what it
actually means is that, across a large population of identical systems, on average 1 system in
50 fails each year.
A failed power supply in a system with redundant power supplies must be replaced. However,
by itself, the failure of a single power supply should not cause a system outage, and it can be
repaired concurrently with no downtime.
Other components in a system might fail and cause a system-wide outage where concurrent
repair is not possible. Therefore, it is typical to talk about different MTBF numbers:
MTBF – Results in repair actions.
MTBF – Requires concurrent repair.
MTBF – Requires a non-concurrent repair.
MTBF – Results in an unplanned application outage.
MTBF – Results in an unplanned system outage.
For example, consider a system that always runs exactly one week between failures and each
time it fails it is down for 10 minutes. Over the 168 hours in a week, the system is down (10/60)
hours and up 168 hrs – (10/60) hrs. As a percentage of the hours in the week, the system is
((168 - (1/6)) / 168) * 100% = 99.9% available.
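The weekly example can be checked with a few lines of Python; the numbers are taken directly from the example above:

# Availability of a system that fails once per week and is down 10 minutes per failure.
hours_per_week = 168
downtime_hours = 10 / 60  # 10 minutes expressed in hours

availability = (hours_per_week - downtime_hours) / hours_per_week
print(f"{availability:.4%}")  # prints 99.9008%, that is, roughly 99.9% available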
When talking about modern server hardware availability, short weekly failures like the one in
the example are not the norm. Rather, the failure rates are much lower, and the MTBF is often
measured in terms of years, perhaps more years than a system will be kept in service.
Therefore, when an MTBF of 10 years, for example, is quoted, it is not expected that on
average each system will run 10 years between failures. Rather, it is more reasonable to
expect that on average in a year that 1 server out of 10 will fail. If a population of 10 servers
always had exactly one failure a year, a statement of 99.999% availability across that
population of servers would mean that the one server that failed would be down about 53
minutes when it failed.
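The 53-minute figure can be reproduced with a similar back-of-the-envelope calculation; the population size and the single yearly failure are taken from the example above:

# Ten servers with exactly one failure per year across the population,
# and a 99.999% availability target measured across all server-hours.
servers = 10
hours_per_year = 365 * 24                    # 8,760 hours per server per year
population_hours = servers * hours_per_year  # 87,600 server-hours per year

allowed_downtime_minutes = (1 - 0.99999) * population_hours * 60
print(round(allowed_downtime_minutes, 1))    # about 52.6 minutes for the one server that fails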
In theory, five 9s of availability can be achieved by having a system design that fails
frequently, multiple times a year, but whose failures are limited to small periods of time.
Conversely, five 9s of availability might mean a server design with a large MTBF, but where a
server takes a fairly long time to recover from the rare outage.
Figure 4-3 shows that five 9s of availability can be achieved with systems that fail frequently
for minuscule amounts of time, or infrequently with much larger downtime per failure.
Note: PowerVM Enterprise Edition License Entitlement is included with each Power10
processor-based mid-range server. PowerVM Enterprise Edition is available as a hardware
feature (#EPVV); supports up to 20 partitions per core, VIOS, multiple shared processor
pools (MSPPs); and also offers LPM.
Combined with features in the Power10 processor-based mid-range servers, the Power
Hypervisor delivers functions that enable other system technologies, including logical
partitioning technology, virtualized processors, IEEE virtual local area network
(VLAN)-compatible virtual switches, virtual Small Computer Serial Interface (SCSI) adapters,
virtual Fibre Channel (FC) adapters, and virtual consoles.
The Power Hypervisor is a basic component of the system’s firmware and offers the following
functions:
Provides an abstraction between the physical hardware resources and the LPARs that use
them.
Enforces partition integrity by providing a security layer between LPARs.
Controls the dispatch of virtual processors to physical processors.
Saves and restores all processor state information during a logical processor context
switch.
Controls hardware I/O interrupt management facilities for LPARs.
The Power Hypervisor is always active, regardless of the system configuration or whether it is
connected to the managed console. It requires memory to support the resource assignment
of the LPARs on the server. The amount of memory that is required by the Power Hypervisor
firmware varies according to several factors:
Memory usage for hardware page tables (HPTs)
Memory usage to support I/O devices
Memory usage for virtualization
The amount of memory for the HPT is based on the maximum memory size of the partition
and the HPT ratio. The default HPT ratio is 1/128th (for AIX, VIOS, and Linux partitions) of the
maximum memory size of the partition. AIX, VIOS, and Linux use larger page sizes (16 and
64 KB) instead of using 4 KB pages. The use of larger page sizes reduces the overall number
of pages that must be tracked; therefore, the overall size of the HPT can be reduced. For
example, the HPT is 2 GB for an AIX partition with a maximum memory size of 256 GB.
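The sizing rule can be restated as a short calculation. This is only a sketch of the arithmetic in the preceding paragraph, not an IBM-provided tool:

# Hashed page table (HPT) size as a fraction of the partition's maximum memory.
def hpt_size_gb(max_memory_gb: float, ratio: int = 128) -> float:
    """Return the HPT size in GB for a maximum memory size and a 1/ratio HPT ratio."""
    return max_memory_gb / ratio

print(hpt_size_gb(256))  # 2.0 GB for a partition with a 256 GB maximum at the default 1/128
for ratio in (32, 64, 128, 256, 512):
    print(f"1/{ratio}: {hpt_size_gb(256, ratio)} GB")  # 1/512 gives the smallest HPT (0.5 GB)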
When defining a partition, the maximum memory size that is specified is based on the amount
of memory that can be dynamically added to the logical partition by using dynamic LPAR
(DLPAR) operations without changing the configuration and restarting the partition.
In addition to setting the maximum memory size, the HPT ratio can be configured. The
hpt_ratio parameter of the chsyscfg Hardware Management Console (HMC) command can
be used to define the HPT ratio that is used for a partition profile. The valid values are 1:32,
1:64, 1:128, 1:256, and 1:512.
Specifying a smaller absolute ratio (1/512 is the smallest value) decreases the overall
memory that is assigned to the HPT. Testing is required when changing the HPT ratio
because a smaller HPT might incur more CPU consumption because the OS might need to
reload the entries in the HPT more frequently. Most customers choose to use the IBM
provided default values for the HPT ratios.
The Power Hypervisor must set aside save areas for the register contents of the maximum
number of virtual processors that are configured. The greater the number of physical
hardware devices and virtual devices, and the greater the amount of virtualization, the more
hypervisor memory is required. For efficient memory consumption, the desired and maximum
values for various attributes (processors, memory, and virtual adapters) must be based on
business needs, and not set to values that are significantly higher than actual requirements.
The Power Hypervisor provides the following types of virtual I/O adapters:
Virtual SCSI
The Power Hypervisor provides a virtual SCSI mechanism for the virtualization of storage
devices. The storage virtualization is accomplished by using two paired adapters: a virtual
SCSI server adapter and a virtual SCSI customer adapter.
Virtual Ethernet
The Power Hypervisor provides a virtual Ethernet switch function that allows partitions fast
and secure communication on the same server without any need for physical
interconnection or connectivity outside of the server if a Layer 2 bridge to a physical
Ethernet adapter is set in one VIOS partition, also known as Shared Ethernet Adapter
(SEA).
Logical partitions
Logical partitions (LPARs) and virtualization increase the usage of system resources and add
a level of configuration possibilities.
Logical partitioning is the ability to make a server run as though it were two or more
independent servers. When you logically partition a server, you divide the resources on the
server into subsets, which are called LPARs. You can install software on an LPAR, and the
LPAR runs as an independent logical server with the resources that you allocated to the
LPAR.
An LPAR also is referred to in some documentation as a virtual machine (VM), which makes it
comparable to what other hypervisors offer. However, LPARs provide a higher level of security
and isolation and other features that are described in this chapter.
Processors, memory, and I/O devices can be assigned to LPARs. AIX, IBM i, Linux, and VIOS
can run on LPARs. VIOS provides virtual I/O resources to other LPARs with general-purpose
OSs.
LPARs share a few system attributes, such as the system serial number, system model, and
processor FCs. All other system attributes can vary from one LPAR to another.
Micro-Partitioning
When you use the Micro-Partitioning technology, you can allocate fractions of processors to
an LPAR. An LPAR that uses fractions of processors is also known as a shared processor
partition or micropartition. Micropartitions run over a set of processors that is called a shared
processor pool (SPP), and virtual processors are used to enable the OS to manage the fractions
of processing power that are assigned to the LPAR.
Processing mode
When you create an LPAR, you can assign entire processors for dedicated use, or you can
assign partial processing units from an SPP. This setting defines the processing mode of the
LPAR.
Dedicated mode
In dedicated mode, physical processors are assigned as a whole to partitions. The SMT
feature in the Power10 processor core allows the core to run instructions from two, four, or
eight independent software threads simultaneously.
Shared mode
In shared mode, LPARs use virtual processors to access fractions of physical processors.
Shared partitions can define any number of virtual processors (the maximum number is 20
times the number of processing units that are assigned to the partition). The Power
Hypervisor dispatches virtual processors to physical processors according to the partition’s
processing units entitlement. One processing unit represents one physical processor’s
processing capacity. All partitions receive a total CPU time equal to their processing unit’s
entitlement. The logical processors are defined on top of virtual processors. Therefore, even
with a virtual processor, the concept of a logical processor exists, and the number of logical
processors depends on whether SMT is turned on or off.
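The relationship between processing units, virtual processors, and logical processors can be illustrated with a small sketch; the partition values below are invented for the example:

# Hypothetical shared-processor partition.
entitled_processing_units = 2.0  # guaranteed capacity, equivalent to two physical cores
virtual_processors = 8           # virtual processors that the Power Hypervisor dispatches
smt_threads = 8                  # SMT8 mode on a Power10 core

# A shared partition can define at most 20 virtual processors per processing unit.
max_virtual_processors = int(20 * entitled_processing_units)

# The OS sees one logical processor per SMT thread of each virtual processor.
logical_processors = virtual_processors * smt_threads

print(max_virtual_processors, logical_processors)  # 40 and 64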
Micropartitions are created and then identified as members of the default processor pool or a
user-defined SPP. The virtual processors that exist within the set of micropartitions are
monitored by the Power Hypervisor. Processor capacity is managed according to
user-defined attributes.
If specific micropartitions in an SPP do not use their processing capacity entitlement, the
unused capacity is ceded, and other uncapped micropartitions within the same SPP can use
the extra capacity according to their uncapped weighting. In this way, the entitled pool
capacity of an SPP is distributed to the set of micropartitions within that SPP.
All IBM Power servers that support the MSPP capability have a minimum of one (the default)
SPP and up to a maximum of 64 SPPs.
This capability helps customers reduce total cost of ownership (TCO) when the cost of
software or database licenses depends on the number of assigned processor cores.
The VIOS eliminates the requirement that every partition owns a dedicated network adapter,
disk adapter, and disk drive. The VIOS supports OpenSSH for secure remote logins. It also
provides a firewall for limiting access by ports, network services, and IP addresses.
By using the SEA, several customer partitions can share one physical adapter. You also can
connect internal and external VLANs by using a physical adapter. The SEA service can be
hosted only in the VIOS (not in a general-purpose AIX or Linux partition) and acts as a Layer
2 network bridge to securely transport network traffic between virtual Ethernet networks
(internal) and one or more (Etherchannel) physical network adapters (external). These virtual
Ethernet network adapters are defined by the Power Hypervisor on the VIOS.
Virtual SCSI
Virtual SCSI is used to view a virtualized implementation of the SCSI protocol. Virtual SCSI is
based on a client/server relationship. The VIOS LPAR owns the physical I/O resources and
acts as a server or in SCSI terms a target device. The client LPARs access the virtual SCSI
backing storage devices that are provided by the VIOS as clients.
The virtual I/O adapters (a virtual SCSI server adapter and a virtual SCSI client adapter) are
configured by using an HMC. The virtual SCSI server (target) adapter is responsible for
running any SCSI commands that it receives, and it is owned by the VIOS partition. The
virtual SCSI client adapter allows a client partition to access physical SCSI and SAN-attached
devices and LUNs that are mapped to be used by the client partitions. The provisioning of
virtual disk resources is provided by the VIOS.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) is a technology that allows multiple LPARs to access one or
more external physical storage devices through the same physical FC adapter. This adapter
is attached to a VIOS partition that acts only as a pass-through that manages the data
transfer through the Power Hypervisor.
Each partition features one or more virtual FC adapters, each with their own pair of unique
worldwide port names. This configuration enables you to connect each partition to
independent physical storage on a SAN. Unlike virtual SCSI, only the client partitions see the
disk.
For more information and requirements for NPIV, see IBM PowerVM Virtualization Managing
and Monitoring, SG24-7590.
LPM provides systems management flexibility and improves system availability by avoiding
the following situations:
Planned outages for hardware upgrade or firmware maintenance.
Unplanned downtime. With preventive failure management, if a server indicates a potential
failure, you can move its LPARs to another server before the failure occurs.
HMC 10.1.1020.0 and VIOS 3.1.3.21 or later provide the following enhancements to the LPM
feature:
Automatically choose the fastest network for LPM memory transfer.
Allow LPM when a virtual optical device is assigned to a partition.
A portion of available memory can be proactively partitioned such that a duplicate set can be
used on non-correctable memory errors. This partition can be implemented at the granularity
of DIMMs or logical memory blocks.
HMC 10R1 provides an enhancement to the Remote Restart feature that enables remote
restart when a virtual optical device is assigned to a partition.
On IBM Power servers, partitions can be configured to run in several modes, including the
following modes:
Power8
This native mode for Power8 processors implements version 2.07 of the IBM Power
instruction set architecture (ISA). For more information, see Processor compatibility mode
definitions.
Power9
This native mode for Power9 processors implements version 3.0 of the IBM Power ISA.
For more information, see Processor compatibility mode definitions.
Power10
This native mode for Power10 processors implements version 3.1 of the IBM Power ISA.
For more information, see Processor compatibility mode definitions.
Processor compatibility mode is important when LPM is planned between different
generations of servers. An LPAR that might be migrated to a machine with a processor from
another generation must be activated in a specific compatibility mode.
SR-IOV is a PCI standard architecture that enables PCIe adapters to become self-virtualizing. It
enables adapter consolidation through sharing, much like logical partitioning enables server
consolidation. With an adapter capable of SR-IOV, you can assign virtual slices of a single
physical adapter to multiple partitions through logical ports, which is done without a VIOS.
IBM PowerVC can manage AIX and Linux-based VMs that are running under PowerVM
virtualization and that are connected to an HMC or use NovaLink. This release supports the
scale-out and enterprise IBM Power servers that are built on IBM Power8, IBM Power9, and
Power10 processor-based technology.
Note: The Power E1050 server is supported by PowerVC 2.0.3 or later. If an additional fix
is needed, see IBM Fix Central.
IBM PowerVC is an addition to the PowerVM set of enterprise virtualization technologies that
provide virtualization management. It is based on open standards and integrates server
management with storage and network management.
Because IBM PowerVC is based on the OpenStack initiative, IBM Power can be managed by
tools that are compatible with OpenStack standards. When a system is controlled by
IBM PowerVC, it can be managed in one of three ways:
By a system administrator by using the IBM PowerVC GUI
By a system administrator that uses scripts that contain the IBM PowerVC
Representational State Transfer (REST) application programming interfaces (APIs), as
shown in the sketch after this list
By higher-level tools that call IBM PowerVC by using standard OpenStack APIs
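As a sketch of the scripted approach, the following Python fragment shows how a script might obtain a token from an OpenStack-compatible identity endpoint before calling REST APIs. The host name, port, project, and credentials are assumptions for illustration, and the exact endpoints and versions for a given PowerVC release should be taken from the IBM PowerVC REST API documentation:

import requests  # third-party HTTP client library

# Hypothetical PowerVC management host and credentials.
HOST = "https://powervc.example.com:5000"
PAYLOAD = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "admin",
                                  "domain": {"name": "Default"},
                                  "password": "secret"}},
        },
        "scope": {"project": {"name": "ibm-default",
                              "domain": {"name": "Default"}}},
    }
}

# Request a token from the OpenStack Keystone v3 identity service.
response = requests.post(f"{HOST}/v3/auth/tokens", json=PAYLOAD, verify=False)
token = response.headers["X-Subject-Token"]

# The token is then sent in the X-Auth-Token header on later REST calls,
# for example to list virtual machines through the compute service.
print("Token obtained:", token[:12], "...")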
The following PowerVC offerings are positioned within the available solutions for IBM Power
cloud:
IBM PowerVC: Advanced Virtualization Management
IBM PowerVC for Private Cloud: Basic Cloud
For more information about PowerVC, see IBM PowerVC Version 2.0 Introduction and
Configuration, SG24-8477.
It also means changing the culture and experiences to meet the changing needs of the
business and the market. This reimagining of business in the digital age is digital
transformation.
For IBM, digital transformation takes a customer-centric and digital-centric approach to all
aspects of a business, from business models to customer experiences, processes, and
operations. It uses artificial intelligence (AI), automation, hybrid cloud, and other digital
technologies to leverage data and drive intelligent workflows, faster and smarter
decision-making, and a real-time response to market disruptions. Ultimately, it changes
customer expectations and creates business opportunities.
To date, one of the main technologies that is recognized as enabling digital transformation
and the path to application modernization is Red Hat OpenShift. There are advantages to
associating Red Hat OpenShift with the IBM Power processor-based platform.
Red Hat OpenShift is a container orchestration platform that is based on Kubernetes that
helps develop containerized applications with open-source technology. It facilitates
management and deployments in hybrid and multicloud environments by using full-stack
automated operations.
Containers are key elements of the IT transformation and journey toward modernization.
Containers are software executable units in which the application code is packaged, along
with its libraries and dependencies, in a common way so that it can run anywhere: on a
desktop, on any type of server, or in the cloud. To do this task, containers take advantage
of a form of OS virtualization in which OS functions are used to isolate processes and to
control the amount of CPU, memory, and disk that those processes can access.
Containers are small, fast, and portable because unlike a VM, they do not need to include a
guest OS in every instance, and can instead simply leverage the functions and resources of
the host OS.
One way to better understand a container is to understand how it differs from a traditional VM.
In traditional virtualization (both on-premises and in the cloud), a hypervisor is used to
virtualize the physical hardware. Each VM contains a guest OS, a virtual copy of the hardware
that the OS requires to run, with an application and its associated libraries and dependencies.
Instead of virtualizing the underlying hardware, containers virtualize the OS (usually Linux) so
that each individual container includes only the application and its libraries and
dependencies. The absence of the guest OS is the reason why containers are so light, fast,
and portable.
With IBM Power, you can achieve a high container ratio per core by using multiple CPU
threads, and you can co-locate cloud-native applications with AIX-based applications that use
API connections to business-critical data, which provides higher bandwidth and lower latency
than other technologies. Only IBM Power offers this flexible and efficient usage of resources
to manage peaks and to support both traditional and modern workloads, optionally with
Capacity on Demand (CoD) or shared processor pools.
The Red Hat Ansible product complements the solution with best-in-class automation.
The ability to automate by using Ansible returns valuable time to the system administrators.
Red Hat Ansible Automation Platform for IBM Power is fully enabled, so enterprises can
automate several tasks within AIX and Linux that include deploying applications. Ansible also
can be combined with HMC, PowerVC, and Power Virtual Server to provision infrastructure
anywhere you need, including cloud solutions from other IBM Business Partners or third-party
providers based on IBM Power processor-based servers.
A first task after the initial installation or setup of a new LPAR is to ensure that the correct
patches are installed. Also, extra software (whether it is open-source software, independent
software vendor (ISV) software, or perhaps the business’ own enterprise software) must be
installed. Ansible features a set of capabilities to roll out new software, which makes it popular
in Continuous Delivery/Continuous Integration (CD/CI) environments. Orchestration and
integration of automation with security products represent other ways in which Ansible can be
applied within the data center.
Despite the wide adoption of AIX in many different business sectors by different types of
customers, Ansible can help introduce IBM Power processor-based technology to customers
who believe that AIX skills are a rare commodity that is difficult to find in the marketplace but
want to take advantage of all the features of the hardware platform. The Ansible experience is
identical across IBM Power or x86 processor-based technology, and the same steps can be
repeated in IBM Cloud.
AIX skilled customers also can benefit from the extreme automation solutions that are
provided by Ansible.
The IBM Power stack engineering teams work closely to deliver an enterprise server platform,
which results in an IT architecture with industry-leading performance, scalability, and security
(see Figure 5-3).
Every layer in the IBM Power stack is optimized to make the Power10 processor-based
technology the platform of choice for mission-critical enterprise workloads. This stack
includes the Ansible Automation Platform, which is described next.
The various Ansible collections for IBM Power processor-based technology, which (at the time
of writing) were downloaded more than 25,000 times by customers, are now included in the
Red Hat Ansible Automation Platform. As a result, these modules are covered by the Red Hat
24x7 enterprise support team, which collaborates with the respective IBM Power
processor-based technology development teams.
Our OS teams develop modules that are sent to the Ansible open-source community (named
Ansible Galaxy). Every developer can post any object that is a candidate for a collection
in the open Ansible Galaxy community, and it can possibly be certified to be supported by IBM
with a subscription to Red Hat Ansible Automation Platform (see Figure 5-4).
The collection includes modules and sample playbooks that help to automate tasks. You can
find it at this web page.
For more information about this collection, see this web page.
Many organizations also are adapting their business models, and many organizations have
thousands of people connecting from home computers outside the control of an IT
department. Users, data, and resources are scattered worldwide, making it difficult to connect
them quickly and securely. Without a traditional local infrastructure for security, employees'
homes are more vulnerable to compromise, putting the business at risk.
Many companies are operating with a set of security solutions and tools that are not fully
integrated. As a result, security teams spend more time on manual tasks. They
lack the context and information that are needed to effectively reduce their organization's
attack surface. Rising data breaches and rising global regulations have made securing
networks difficult.
Applications, users, and devices need fast and secure access to data, so much so that an
entire industry of security tools and architectures was created to protect them.
Although enforcing a data encryption policy is an effective way to minimize the risk and cost
of a data breach, few enterprises worldwide have an encryption strategy that is applied
consistently across the entire organization, largely because encryption adds complexity and
cost and can negatively affect performance, which can mean missed service-level agreements
(SLAs) for the business.
The rapidly evolving cyberthreat landscape requires focus on cyber-resilience. Persistent and
end-to-end security is the only way to reduce exposure to threats.
The Power10 processor-based servers are enhanced to simplify and integrate security
management across the stack, which reduces the likelihood of administrator errors.
In the Power10 processor-based scale-out servers, all data is protected by a greatly simplified
end-to-end encryption that extends across the hybrid cloud without detectable performance
impact and prepares for future cyberthreats.
Quantum-safe cryptography refers to the efforts to identify algorithms that are resistant to
attacks by classical and quantum computers in preparation for the time when large-scale
quantum computers are built.
The co-processor holds a security-enabled subsystem module and batteries for backup
power. The hardened encapsulated subsystem contains two sets of two 32-bit PowerPC
476FP reduced-instruction-set-computer (RISC) processors running in lockstep with
cross-checking to detect soft errors in the hardware.
IBM offers an embedded subsystem control program and a cryptographic API that implement
the IBM Common Cryptographic Architecture (CCA) Support Program, which can be accessed
from the internet at no charge to the user.
Security features are beneficial only if they can be easily and accurately managed. Power10
processor-based scale-out servers benefit from the integrated security management
capabilities that are offered by IBM PowerSC, and the IBM Power software portfolio for
managing security and compliance on IBM Power processor-based platforms (AIX and Linux
on IBM Power).
PowerSC sits on top of the Power solution stack, and it provides features such as
compliance automation to help with various industry standards, real-time file integrity
monitoring, reporting to support security audits, patch management, and trusted logging.
By providing all these capabilities through a web-based user interface (UI), PowerSC
simplifies management of security and compliance.
The terms Secure Boot and Trusted Boot have specific connotations. The terms are used as
distinct yet complementary concepts.
Secure Boot
The Secure Boot feature protects system integrity by using digital signatures to perform a
hardware-protected verification of all firmware components. It also distinguishes between the
host system trust domain and the Flexible Service Processor (FSP) trust domain by
controlling service processor and service interface access to sensitive system memory
regions.
Trusted Boot
The Trusted Boot feature creates cryptographically strong and protected platform
measurements that prove that particular firmware components have run on the system. You
can assess the measurements by using trusted protocols to determine the state of the system
and use that information for security decisions.
The baseboard management controller (BMC) chip is connected to the two network interface
cards through the Network Controller Sideband Interface (NCSI) (to support the connection to
HMCs) and also has a PCIe x1 connection that connects to the backplane. This connection
is used by PowerVM for partition management traffic, but it cannot be used for guest LPAR
traffic. A guest LPAR needs its own physical or virtual network interface PCIe card (or cards)
for an external connection.
Hardware assistance is necessary to prevent tampering with the stack. The IBM Power platform
added four instructions (hashst, hashchk, hashstp, and hashchkp) to handle return-oriented
programming (ROP) attacks in IBM Power ISA 3.1B.
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
IBM PowerAI: Deep Learning Unleashed on IBM Power Systems Servers, SG24-8409
IBM Power E1080 Technical Overview and Introduction, REDP-5649
IBM Power S1014, S1022s, S1022, and S1024 Technical Overview and Introduction,
REDP-5675
IBM Power System AC922 Technical Overview and Introduction, REDP-5494
IBM Power System E950: Technical Overview and Introduction, REDP-5509
IBM Power System E980: Technical Overview and Introduction, REDP-5510
IBM Power System L922 Technical Overview and Introduction, REDP-5496
IBM Power System S822LC for High Performance Computing Introduction and Technical
Overview, REDP-5405
IBM Power Systems H922 and H924 Technical Overview and Introduction, REDP-5498
IBM Power Systems LC921 and LC922: Technical Overview and Introduction,
REDP-5495
IBM Power Systems S922, S914, and S924 Technical Overview and Introduction
Featuring PCIe Gen 4 Technology, REDP-5595
IBM PowerVC Version 2.0 Introduction and Configuration, SG24-8477
IBM PowerVM Best Practices, SG24-8062
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
SAP HANA Data Management and Performance on IBM Power Systems, REDP-5570
You can search for, view, download, or order these documents and other Redbooks
publications, Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
REDP-5684-00
ISBN 073846077x
Printed in U.S.A.