IBM Flex System p24L, p260 and p460 Compute Nodes: IBM Redbooks Product Guide
The IBM Flex System™ p260 and p460 Compute Nodes are servers based on IBM® POWER®
architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to
provide a high-density, high-performance compute node environment, using advanced processing
technology. The nodes support IBM AIX, IBM i, or Linux operating environments and are designed to run
a wide variety of workloads. The p260 and p24L are standard compute nodes with two POWER7 or
POWER7+ processors, and the p460 is a double-wide compute node with four POWER7 or POWER7+
processors.
The compute nodes offer numerous features to boost performance, improve scalability, and reduce
costs:
The IBM POWER7 and POWER7+ processors, which improve productivity by offering superior
system performance with AltiVec floating point and integer SIMD instruction set acceleration.
Integrated PowerVM technology, providing superior virtualization performance and flexibility.
Choice of processors, including an 8-core POWER7+ processor operating at 4.1 GHz with 80 MB of
L3 cache (10 MB per core).
Up to four processors, 32 cores, and 128 threads to maximize the concurrent execution of
applications.
Three levels of integrated cache including 10 MB (POWER7+) or 4 MB (POWER7) of L3 cache per
core.
Up to 16 (p24L, p260) or 32 (p460) DDR3 ECC memory RDIMMs that provide a memory capacity of
up to 256 GB (p260) or 512 GB (p460).
Support for Active Memory Expansion, which allows the effective maximum memory capacity to be
much larger than the true physical memory through innovative compression techniques.
The use of solid-state drives (SSDs) instead of traditional spinning drives (HDDs), which can
significantly improve I/O performance. An SSD can support up to 100 times more I/O operations per
second (IOPS) than a typical HDD.
Up to eight (p24L, p260) or 16 (p460) 10Gb Ethernet ports per compute node to maximize networking
resources in a virtualized environment.
Two (p24L, p260) or four (p460) P7IOC high-performance I/O bus controllers to maximize
throughput and bandwidth.
Support for high-bandwidth I/O adapters, up to two in each p24L or p260 Compute Node or up to four
in each p460 Compute Node. Support for 10 Gb Ethernet, 8 Gb Fibre Channel, and QDR InfiniBand.
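Active Memory Expansion, listed above, works by compressing part of a partition's memory so the operating system sees more memory than is physically allocated. A rough capacity model can be sketched as follows; the expansion factor used here is an assumption for illustration, since the achievable factor is workload-dependent (on AIX, the amepat planning tool estimates it for a real workload):

```python
def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
    """Effective memory seen by the OS under Active Memory Expansion.

    expansion_factor is the configured AME factor (for example, 1.5 means
    the partition behaves as if it had 1.5x its physical allocation).
    The real benefit depends on how compressible the workload's data is.
    """
    return physical_gb * expansion_factor

# Hypothetical example: a partition with 64 GB of physical memory and an
# assumed 1.5x expansion factor appears to the OS to have 96 GB.
print(effective_memory_gb(64, 1.5))  # → 96.0
```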
The p24L, p260 and p460 provide many features to simplify serviceability and increase system uptime:
ECC and Chipkill memory technology provide error detection and correction, enabling recovery
from memory failures that would otherwise be uncorrectable.
Tool-less cover removal provides easy access to upgrades and serviceable parts, such as drives,
memory, and adapter cards.
A light path diagnostics panel and individual light path LEDs quickly lead the technician to failed (or
failing) components. This simplifies servicing, speeds up problem resolution, and helps improve
system availability.
Predictive Failure Analysis (PFA) detects when system components (for example, processors,
memory, and hard disk drives) operate outside of standard thresholds and generates proactive alerts
in advance of possible failure, thereby increasing uptime.
Powerful systems management features simplify management of the p260 and p460.
Energy efficiency
The compute nodes offer the following energy-efficiency features to save energy, reduce operational
costs, increase energy availability, and contribute to a greener environment:
The component-sharing design of the IBM Flex System chassis provides ultimate power and cooling
savings.
Support for IBM EnergyScale to dynamically optimize processor performance versus power
consumption and system workload.
Active Energy Manager provides advanced power management features with actual real-time energy
monitoring, reporting, and capping features.
SSDs consume as much as 80% less power than traditional spinning 2.5-inch HDDs.
The servers use hexagonal ventilation holes, a part of IBM Calibrated Vectored Cooling™ technology.
Hexagonal holes can be grouped more densely than round holes, providing more efficient airflow
through the system.
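The airflow claim above comes down to packing geometry: circular holes of equal size can open at most about 90.7% of a panel's area even in an ideal close-packed layout, while hexagonal holes tessellate the plane, so the open fraction is limited only by the web material left between holes. A small sketch of the ideal packing fractions (pure geometry, not IBM's actual hole dimensions):

```python
import math

# Maximum open-area fraction for circular holes of equal size:
square_grid = math.pi / 4                 # circles on a square grid, ~0.785
hex_grid = math.pi / (2 * math.sqrt(3))   # close-packed circles, ~0.907

# Hexagonal holes tessellate, so as web thickness shrinks the open
# fraction approaches 1.0 - hence denser grouping and better airflow.
print(f"round holes, square grid: {square_grid:.3f}")
print(f"round holes, hex grid:    {hex_grid:.3f}")
```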
Figure 2. Front view of the IBM Flex System p260 Compute Node
Figure 3. Inside view of the IBM Flex System p260 Compute Node
Figure 4. Front view of the IBM Flex System p460 Compute Node
Figure 5. Inside view of the IBM Flex System p460 Compute Node
POWER7 processors: Each processor is a single-chip module (SCM) that contains either eight
cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each
processor has 4 MB of L3 cache per core and an integrated memory controller with four memory
channels, each operating at 6.4 Gbps. There is one GX++ I/O bus connection per processor.
SMT4 mode is supported, enabling four instruction threads to run simultaneously per core.
Fabricated in 45 nm technology.
POWER7+ processors: Each processor is a single-chip module (SCM) that contains either eight
cores (up to 4.1 GHz or 3.6 GHz and 80 MB L3 cache), four cores (4.0 GHz and 40 MB L3
cache), or two cores (4.0 GHz and 20 MB L3 cache). Each processor has 10 MB of L3 cache per
core and an integrated memory controller with four memory channels, each operating at
6.4 Gbps. There is one GX++ I/O bus connection per processor. SMT4 mode is supported,
enabling four instruction threads to run simultaneously per core. Fabricated in 32 nm technology.
Disk drive bays: Two 2.5-inch non-hot-swap drive bays supporting 2.5-inch SAS HDDs or 1.8-inch
SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are
installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs.
PCI expansion slots:
p24L: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
p260: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
p460: Four I/O connectors for adapters. PCIe 2.0 x16 interface.
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic
server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager,
IBM Systems Director, and Active Energy Manager.
Video None. Remote management via Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: IBM AIX, IBM i, and Linux. See "Supported operating systems" for details.
Service and support: Optional service upgrades are available through IBM ServicePacs®: 4-hour or
2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical
support for IBM hardware and selected IBM and OEM software.
Dimensions:
p24L: Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.).
p260: Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.).
p460: Width 437 mm (17.2 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.).
Chassis support
The p24L, p260 and p460 are supported in the IBM Flex System Enterprise Chassis. Up to fourteen p24L
or p260 compute nodes or up to seven p460 compute nodes (or a combination of the three) can be
installed in the chassis in 10U of rack space. The actual number of compute nodes that can be installed in
a chassis depends on the power supplies installed and the power redundancy policy used.
In the table:
Green = No restriction to the number of compute nodes that are installable
Yellow = Some bays must be left empty in the chassis
Table 3. Maximum number of compute nodes installable based on power supplies installed and
power redundancy policy used
                2100 W power supplies installed         2500 W power supplies installed
Compute node    N+1,N=5   N+1,N=4   N+1,N=3   N+N,N=3   N+1,N=5   N+1,N=4   N+1,N=3   N+N,N=3
                (6 PSUs)  (5 PSUs)  (4 PSUs)  (6 PSUs)  (6 PSUs)  (5 PSUs)  (4 PSUs)  (6 PSUs)
p24L            14        12        9         10        14        14        12        13
p260            14        12        9         10        14        14        12        13
p460            7         6         4         5         7         7         6         6
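As a worked example of the density figures above: a standard 42U rack holds four 10U Enterprise Chassis, so per-rack node counts follow directly from the unrestricted columns of the table. The rack height and the assumption that all 42U is available for chassis are illustrative, not a configuration rule:

```python
RACK_U = 42
CHASSIS_U = 10
chassis_per_rack = RACK_U // CHASSIS_U   # 4 chassis fit in a 42U rack

# Per-chassis maxima from the unrestricted (no-bay-left-empty) columns.
nodes_per_chassis = {"p24L": 14, "p260": 14, "p460": 7}

for node, per_chassis in nodes_per_chassis.items():
    print(node, per_chassis * chassis_per_rack)
# p24L and p260: 56 nodes per rack; p460: 28 nodes per rack
```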
The IBM Flex System p24L, p260 and p460 Compute Nodes can be ordered as part of a PureFlex
System solution. Two PureFlex System offerings are available; each configuration includes the
following components:
An IBM Flex System Compute Node, either the p260, p460, or x240
An IBM Flex System Enterprise Chassis (7893-92X)
An IBM Flex System Manager (7955-01M)
An IBM Storwize V7000 Disk System (2076-124)
Two IBM System Network 1455 Rack Switches G8264R Model 64C (with PureFlex System
Enterprise only, with the p460)
Two IBM 2498 SAN24B-4 Express Model B24 (with PureFlex System Enterprise only, with the p460)
An IBM 42U rack (7953-94X)
Notes:
The p260 and p24L cannot be used in an initial PureFlex System Multi-chassis configuration.
The IBM Flex System p460 cannot be used in an initial PureFlex System Single-chassis
configuration.
An initial PureFlex System Multi-chassis requires at least two compute nodes.
Additional compute nodes, chassis, and IBM 42U racks can be ordered after the basic requirements for
the PureFlex System solution are met. These additional orders will be indicated by feature number EFD4
(Expansion Option) or EFD5 (Custom Expansion).
EPRD 8-core 4.0 GHz POWER7+ Processor Module (two 4-core processors)
EPRB 16-core 3.6 GHz POWER7+ Processor Module (two 8-core processors)
EPRA 16-core 4.1 GHz POWER7+ Processor Module (two 8-core processors)
EPRC 4-core 4.0 GHz POWER7+ Processor Module (two 2-core processors)
EPR1 8-core 3.3 GHz POWER7 Processor Module (two 4-core processors)
EPR3 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors)
EPR5 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors)
EPR2 16-core 3.3 GHz POWER7 Processor Module (four 4-core processors)
EPR4 32-core 3.2 GHz POWER7 Processor Module (four 8-core processors)
EPR6 32-core 3.55 GHz POWER7 Processor Module (four 8-core processors)
EPRK 16-core 4.0 GHz POWER7+ Processor Module (four 4-core processors)
EPRH 32-core 3.6 GHz POWER7+ Processor Module (four 8-core processors)
EPRJ 32-core 4.1 GHz POWER7+ Processor Module (four 8-core processors)
EPR7 12-core 3.72 GHz POWER7 Processor Module (two 6-core processors)
EPR8 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors)
EPR9 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors)
The compute nodes support low profile (LP) or very low profile (VLP) DDR3 memory RDIMMs. If LP
memory is used, 2.5-inch HDDs are not supported in the system due to physical space restrictions.
However, 1.8-inch SSDs are still supported. If VLP memory is used, either 2.5-inch HDDs or 1.8-inch
SSDs are supported.
The p260 and p24L support up to 16 DIMMs. The p460 supports up to 32 DIMMs. Each processor has
four memory channels, and there are two DIMMs per channel. All supported DIMMs operate at 1066 MHz.
The following table lists memory features available for the compute nodes. DIMMs are ordered and can
be installed two at a time, but to maximize memory performance, install them in sets of eight (one for
each of the memory channels).
8196 2x 4 GB DDR3 RDIMM 1066 MHz VLP Yes Yes Yes Yes Yes Yes Yes
EEMD 2x 8 GB DDR3 RDIMM 1066 MHz VLP Yes Yes Yes Yes Yes Yes Yes
EEME* 2x 16 GB DDR3 RDIMM 1066 MHz LP Yes Yes Yes Yes Yes Yes Yes
EEMF* 2x 32 GB DDR3 RDIMM 1066 MHz LP Yes Yes Yes Yes Yes Yes Yes
* If 2.5-inch HDDs are installed, low-profile DIMM features (EM04, 8145, EEME, and EEMF) cannot be used.
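The stated memory maxima (256 GB for the p24L and p260, 512 GB for the p460) follow directly from the slot counts when 16 GB DIMMs are used in every slot. This is a sketch of that arithmetic only; larger DIMMs listed in the table may be subject to additional configuration limits not reproduced here:

```python
def max_memory_gb(dimm_slots: int, dimm_size_gb: int) -> int:
    """Maximum memory with every slot filled with identical DIMMs."""
    return dimm_slots * dimm_size_gb

# 16 slots (p24L/p260) and 32 slots (p460) with 16 GB DIMMs reproduce
# the maxima stated in the product description.
print(max_memory_gb(16, 16))  # → 256
print(max_memory_gb(32, 16))  # → 512
```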
The choice of drive also determines what cover is used for the compute node, because the drives are
attached to the cover. The following table lists the options.
Optical drives
The server does not support an internal optical drive option. However, you can connect an external USB
optical drive, such as part number 73P4515 or 73P4516.
The following figures show the location of the I/O adapters in the p24L, p260 and p460.
Figure 6. Location of the I/O adapter slots in the IBM Flex System p24L and p260 Compute Nodes
Figure 7. Location of the I/O adapter slots in the IBM Flex System p460 Compute Node
All I/O adapters are the same shape and can be used in any available slot. A compatible switch or
pass-thru module must be installed in the corresponding switch bay of the chassis.
Figure 8. Location of the switch bays in the IBM Flex System Enterprise Chassis
The following figure shows how two-port adapters are connected to switches installed in the chassis.
Figure 9. Logical layout of the interconnects between I/O adapters and I/O modules
10 Gb Ethernet
EC24 IBM Flex System CN4058 8-port 10Gb Converged Adapter Yes Yes Yes Yes
EC26 IBM Flex System EN4132 2-port 10Gb RoCE Adapter No Yes Yes Yes
1762 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Yes Yes Yes Yes
1 Gb Ethernet
1763 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter Yes Yes Yes Yes
InfiniBand
1761 IBM Flex System IB6132 2-port QDR InfiniBand Adapter No Yes No Yes
When adapters are installed in slots, ensure that compatible switches are installed in the corresponding
bays of the chassis:
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (#ESW2)
IBM Flex System Fabric EN4093R 10Gb Scalable Switch (#ESW7)
IBM Flex System Fabric EN4093 10Gb Scalable Switch (#3593)
IBM Flex System Fabric SI4093 System Interconnect Module (#ESWA)
IBM Flex System EN4091 10Gb Ethernet Pass-thru (#3700)
IBM Flex System EN2092 1Gb Ethernet Scalable Switch (#3598)
Fibre Channel
1764 IBM Flex System FC3172 2-port 8Gb FC Adapter No Yes No Yes
EC23 IBM Flex System FC5052 2-port 16Gb FC Adapter No Yes No Yes
EC2E IBM Flex System FC5054 4-port 16Gb FC Adapter No Yes No Yes
Power supplies
Server power is derived from the power supplies installed in the chassis. There are no server options
regarding power supplies.
The compute nodes support the following maximum numbers of virtual servers (logical partitions, or LPARs):
p24L: Up to 160 virtual servers
p260: Up to 160 virtual servers
p460: Up to 320 virtual servers
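These maxima are consistent with PowerVM micro-partitioning at 10 partitions per core, the density implied by the 0.1 processing-unit minimum entitlement of that firmware generation. The sketch below assumes the maximum core counts and that every core is available for micro-partitions:

```python
LPARS_PER_CORE = 10   # implied by the 0.1 processing-unit minimum entitlement

# Maximum core counts per node type (two or four 8-core processors).
cores = {"p24L": 16, "p260": 16, "p460": 32}

for node, n in cores.items():
    print(node, n * LPARS_PER_CORE)
# p24L and p260: 160 virtual servers; p460: 320 virtual servers
```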
Role-based access control (RBAC)
RBAC brings an added level of security and flexibility to the administration of the Virtual I/O Server
(VIOS), a part of PowerVM. With RBAC, you can create a set of authorizations for the user
management commands. You can assign these authorizations to a role named UserManagement,
and this role can be given to any other user. A user with this role can then manage the users on the
system but has no further access. With RBAC, the VIOS can split management functions that
presently can be performed only by the padmin user, providing better security by giving users only
the access they need, along with easier management and auditing of system functions.
Suspend/resume
Using suspend/resume, you can suspend a partition for a long term (greater than 5 to 10 seconds),
saving its state (memory, NVRAM, and VSP state) to persistent storage. Suspending a partition frees
the server resources it was using; resuming restores that state to server resources and continues
operation of the partition and its applications, either on the same server or on another server.
Shared storage pools
VIOS allows the creation of storage pools that can be accessed by VIOS partitions that are deployed
across multiple Power Systems servers. Therefore, an assigned allocation of storage capacity can be
efficiently managed and shared. Up to four systems can participate in a Shared Storage Pool
configuration. This can improve efficiency, agility, scalability, flexibility, and availability.
The Storage Mobility feature allows data to be moved to new storage devices within Shared Storage
Pools, while the virtual servers remain completely active and available. The VM Storage
Snapshots/Rollback feature allows multiple point-in-time snapshots of individual virtual server
storage, and these copies can be used to quickly roll back a virtual server to a particular snapshot
image. The VM Storage Snapshots/Rollback functionality can be used to capture a VM image for
cloning purposes or before applying maintenance.
Thin provisioning
VIOS supports highly efficient storage provisioning, whereby virtualized workloads in VMs can have
storage resources from a shared storage pool dynamically added or released, as required.
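Thin provisioning can be illustrated with a toy pool model: virtual servers are promised (provisioned) more capacity than is physically backed, and physical space is consumed only as data is actually written. The class and numbers below are illustrative only, not VIOS interfaces:

```python
class ThinPool:
    """Toy model of a thin-provisioned storage pool."""

    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.provisioned_gb = 0   # capacity promised to clients
        self.used_gb = 0          # physical capacity actually written

    def provision(self, gb: int) -> None:
        # Thin provisioning: promises may exceed physical capacity.
        self.provisioned_gb += gb

    def write(self, gb: int) -> None:
        # Physical space is consumed only on write, not on provisioning.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool out of physical space")
        self.used_gb += gb

pool = ThinPool(physical_gb=100)
pool.provision(80)
pool.provision(80)           # 160 GB promised against 100 GB physical
pool.write(30)
print(pool.provisioned_gb, pool.used_gb)  # → 160 30
```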
VIOS grouping
Multiple VIOS partitions can utilize a common shared storage pool to more efficiently utilize limited
storage resources and simplify the management and integration of storage subsystems.
Network node balancing for redundant Shared Ethernet Adapters (SEAs)
This is a useful function when multiple VLANs are being supported in a dual VIOS environment. The
implementation is based on a more granular treatment of trunking, where there are different trunks
defined for the SEAs on each VIOS. Each trunk serves different VLANs, and each VIOS can be the
primary for a different trunk. This occurs with just one SEA definition on each VIOS.
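The load-sharing idea described above can be sketched as a simple assignment: the VLANs are split across trunks, and each of the two VIOS partitions is primary for a different trunk, so both carry traffic in normal operation. This is a schematic of the assignment only, not the actual SEA configuration syntax:

```python
def assign_trunks(vlans):
    """Alternate VLANs between the two VIOS partitions as primary.

    Each VIOS is primary for its own trunk (its list of VLANs) and
    acts as backup for the other, so both carry traffic normally.
    """
    trunks = {"VIOS1": [], "VIOS2": []}
    for i, vlan in enumerate(sorted(vlans)):
        primary = "VIOS1" if i % 2 == 0 else "VIOS2"
        trunks[primary].append(vlan)
    return trunks

print(assign_trunks([10, 20, 30, 40]))
# → {'VIOS1': [10, 30], 'VIOS2': [20, 40]}
```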
The light path diagnostics panel is visible when you remove the server from the chassis. The panel is
located on the top right-hand side of the compute node, as shown in the following figure.
To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and
press the power button. The power button doubles as the light path diagnostics remind button when the
server is removed from the chassis.
The meanings of the LEDs in the light path diagnostics panel are listed in the following table.
ETE A fault has been detected with the expansion unit (p260 only).
Typically, an administrator has already obtained this information from the IBM Flex System Manager or
Chassis Management Module before removing the node, but having the LEDs helps with repairs and
troubleshooting if onsite assistance is needed.
AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (AIX 5L V5.3 Service
Extension is required)
IBM i 6.1 with i 6.1.1 machine code, or later (requires VIOS)
IBM i 7.1 TR4, or later (requires VIOS)
VIOS 2.2.1.4, or later
SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
Red Hat Enterprise Linux 5.7 for POWER, or later
Red Hat Enterprise Linux 6.2 for POWER, or later
The p260 model 23A and p460 model 43X support the following operating systems:
AIX V7.1 with the 7100-02 Technology Level with Service Pack 3 or later
AIX V6.1 with the 6100-08 Technology Level with Service Pack 3 or later
Note: Support by some of these operating system versions will be post general availability. See the IBM
ServerProven® website for the latest information about the specific versions and service levels supported
and any other prerequisites:
http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml
Physical specifications
Dimensions and weight of the p24L and p260:
Supported environment
The IBM Flex System p24L, p260 and p460 compute nodes and the IBM Flex System Enterprise Chassis
comply with ASHRAE Class A3 specifications.
IBM ServicePac offerings are country-specific. That is, each country might have its own service types,
service levels, response times, and terms and conditions. Not all types of ServicePac are available in
every country. For more information about the IBM ServicePac offerings available in your country,
visit the IBM ServicePac Product Selector at
https://www-304.ibm.com/sales/gss/download/spst/servicepac.
IBM onsite repair (IOR): A service technician will come to the server's location for equipment repair.
24x7x2 hour: A service technician is scheduled to arrive at your customer's location within two hours
after remote problem determination is completed. We provide 24-hour service, every day, including
IBM holidays.
24x7x4 hour: A service technician is scheduled to arrive at your customer's location within four hours
after remote problem determination is completed. We provide 24-hour service, every day, including
IBM holidays.
9x5x4 hour: A service technician is scheduled to arrive at your customer's location within four business
hours after remote problem determination is completed. We provide service from 8:00 a.m. to 5:00 p.m.
in the customer's local time zone, Monday through Friday, excluding IBM holidays. If it is determined
after 1:00 p.m. that on-site service is required, the customer can expect the service technician to arrive
the morning of the following business day. For noncritical service requests, a service technician will
arrive by the end of the following business day.
9x5 next business day: A service technician is scheduled to arrive at your customer's location on the
business day after we receive your call, following remote problem determination. We provide service
from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM
holidays.
In general, these are the types of IBM ServicePac warranty and maintenance service upgrades:
One, two, three, four, or five years of 9x5 or 24x7 service coverage
Onsite repair with response times from next business day down to four or two hours
One or two years of warranty extension
ASHRAE Class A3
U.S.: FCC - Verified to comply with Part 15 of the FCC Rules Class A
Canada: ICES-004, issue 3 Class A
EMEA: EN55022: 2006 + A1:2007 Class A
EMEA: EN55024: 1998 + A1:2001 + A2:2003
Australia and New Zealand: CISPR 22, Class A
U.S.: (UL Mark) UL 60950-1 1st Edition
CAN: (cUL Mark) CAN/CSA22.2 No.60950-1 1st Edition
Europe: EN 60950-1:2006+A11:2009
CB: IEC60950-1, 2nd Edition
Russia: (GOST Mark) IEC60950-1
IBM Flex System p260 and p460 Compute Node product pages
http://www.ibm.com/systems/flex/compute-nodes/power/bto/p24l
http://www.ibm.com/systems/flex/compute-nodes/power/bto/p260-p460
IBM Flex System Information Center
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
IBM Flex System p260 and p460 Compute Node Installation and Service Guide
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.7895.doc/printable_doc.html
IBM Redbooks Product Guides for IBM Flex System servers and options
http://www.redbooks.ibm.com/portals/puresystems?Open&page=pgbycat
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on
their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common
law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered
or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at
http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States, other
countries, or both:
Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries,
or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.