
IBM IT Infrastructure

IBM Power L1024


Create agility with a flexible and protected
hybrid cloud infrastructure

Highlights

• Protect data from core to cloud with memory encryption at the processor level and four times more crypto engines in every core compared to Power9
• Streamline insights and automation with four Matrix Math Accelerators per core for faster AI inferencing
• Deliver two times better memory reliability and availability than industry-standard DIMMs with Active Memory Mirroring

The core applications, data stores and processes that run your business simply cannot go down, no matter what. With accelerated digital adoption, the demands on these applications are increasing, along with the related security risks. To stay ahead of the curve, your IT system needs to be modernized to meet the challenges of today. This requires an infrastructure platform that efficiently scales to meet new demands, protects your applications and data with pervasive and layered defenses, and enables you to transform data into insights quickly.

The IBM® Power® L1024 is a 2-socket, 4U Power10 processor-based server optimized for Linux®-based workloads such as SAP HANA. With more than double the cores compared to IBM Power9® processor-based servers, workloads can be consolidated on fewer systems, reducing software licensing, electrical and cooling costs. With the Power L1024 server, you only pay for what you need while retaining the ability to share resources across your systems, including previous generations. Data is protected end to end with memory encryption on the processor, while downtime is minimized thanks to the industry-leading reliability and availability of Active Memory Mirroring.
Protect data from core to cloud with memory encryption at the processor level and four times more crypto engines in every core compared to Power9

With data residing in increasingly distributed environments, you can no longer set a perimeter around it. This reinforces the need for layered security across your IT stack. The Power10 family of servers introduces a new layer of defense with transparent memory encryption: data remains encrypted in transit between memory and the processor. Because this capability is enabled at the silicon level, it requires no additional management setup and has no performance impact. Power10 also includes four times more crypto engines in every core compared to Power9 processor-based servers to accelerate encryption performance across the stack. These innovations, along with new in-core defenses against return-oriented programming attacks and support for post-quantum and fully homomorphic encryption, make one of the most secure server platforms even better.

Streamline insights and automation with four Matrix Math Accelerators per core for faster AI inferencing
As more AI models are deployed in production, the challenges around AI infrastructure increase. A typical AI deployment involves sending data from an operational platform to a separate GPU system, which adds latency and can increase security risk by leaving more data in flight on the network. Power10 addresses this challenge with in-core AI inferencing and machine learning: the Matrix Math Accelerators (MMAs) in Power10 cores provide the computational strength to tackle demanding AI inferencing and machine learning at multiple levels of precision and data bandwidth.
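The numerical idea behind multi-precision inferencing can be sketched in plain Python. This is an illustrative toy, not Power10 or MMA code: on Power10, optimized math libraries dispatch matrix multiplies to the MMA units transparently, so application code is unchanged. The sketch shows the int8-quantize/integer-accumulate scheme commonly used for low-precision inferencing.

```python
# Illustrative toy only: NOT Power10/MMA code. MMA units execute matrix
# multiplies at several precisions (for example fp32, bf16, and int8 with
# int32 accumulation); this sketch shows the int8 scheme numerically.

def matmul(a, b):
    """Plain matrix multiply; with integer inputs it accumulates in integers."""
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def scale_of(m):
    """Symmetric quantization scale so that x ~= scale * q with q in [-127, 127]."""
    return max(abs(x) for row in m for x in row) / 127.0

def quantize(m, scale):
    """Map floats to clamped int8-range integers."""
    return [[max(-127, min(127, round(x / scale))) for x in row] for row in m]

a = [[0.5, -1.2, 2.0], [1.5, 0.3, -0.7]]    # activations (fp32 stand-in)
b = [[1.0, 0.2], [-0.4, 2.2], [0.9, -1.1]]  # weights

sa, sb = scale_of(a), scale_of(b)
q = matmul(quantize(a, sa), quantize(b, sb))        # integer multiply-accumulate
approx = [[v * sa * sb for v in row] for row in q]  # rescale back to float

ref = matmul(a, b)  # full-precision reference
err = max(abs(x - y) for r1, r2 in zip(approx, ref) for x, y in zip(r1, r2))
print(f"max abs error vs full precision: {err:.4f}")
```

Accumulating the int8 products in wide integers before rescaling is what keeps the quantized result close to the full-precision reference; hardware units that do this in-core avoid the round trip to an external accelerator described above.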

Deliver two times better memory reliability and availability than industry-standard DIMMs with Active Memory Mirroring
Power L1024 makes the most reliable server platform in its class even better with advanced recovery, diagnostic capabilities, and Open Memory Interface (OMI)-attached advanced memory DDIMMs. The continuous operation of today's in-memory systems depends on memory reliability because of their large memory footprint. Power10 DDIMMs deliver two times better memory reliability and availability than industry-standard DIMMs¹, with the option to increase uptime and improve availability even further by implementing Active Memory Mirroring.



Conclusion
IBM Power L1024 delivers on key enterprise needs, allowing organizations to respond faster to business demands with world-record performance and scalability for core enterprise workloads and a frictionless hybrid cloud experience. Power L1024 also helps businesses protect their data from core to cloud with accelerated encryption and new in-core defenses against return-oriented programming attacks. MMAs in Power10 cores allow IT teams to streamline insights and automation with in-core AI inferencing and machine learning, while OMI-attached memory DDIMMs maximize reliability and availability.

For more information

To learn more about IBM Power L1024 and Linux on Power, please contact your IBM representative or IBM Business Partner, or visit ibm.com/it-infrastructure/power/os/linux.

IBM Power L1024
MTM: 9786-42H

Processor module offerings: 12, 16 and 24 Power10 cores
Processor interconnect: 4x2B at 32 Gbps
Memory channels per system: 16 OMI channels
Memory bandwidth per system (peak): 818 GB/s with 16, 32 and 64 GB DDIMMs
DIMMs per system: 32 DDIMMs
Memory capacity per system (max): 8 TB
Acceleration ports: 6 ports at 25 Gbps
PCIe lanes per system (max): 128 PCIe G4 lanes at 16 Gbps
PCIe slots per system: 4 PCIe G4 x16 or G5 x8 slots; 4 PCIe G5 x8 slots; 2 PCIe G4 x8 slots
Slots for internal storage controller: General purpose
Internal storage: 16 NVMe U.2
I/O expansion drawers (max): 2
Service processor: Enterprise BMC (eBMC)
RAS: Active Memory Mirroring support
Security: Transparent memory encryption (TME)

Notes
1. Based on IBM's internal analysis of the IBM product failure rate of DDIMMs versus industry-standard DIMMs.

© Copyright IBM Corporation 2024

IBM Corporation
New Orchard Road
Armonk, NY 10504

Produced in the United States of America
March 2024

IBM, the IBM logo, IBM Power, and POWER9 are trademarks or registered trademarks of International Business Machines Corporation, in the United States and/or other countries. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on ibm.com/trademark.

The registered trademark Linux is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT ANY WARRANTY,
EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

IBM products are warranted according to the terms and conditions of the agreements under
which they are provided.
Power Systems

Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM Power S1014 (9105-41B)

IBM

GI11-9900-02
Note
Before using this information and the product it supports, read the information in “Safety notices”
on page v, “Notices” on page 37, the IBM Systems Safety Notices manuals, G229-1110 and
G229-9054, and the IBM Environmental Notices and User Guide, Z125–5823.

This edition applies to IBM Power Systems servers that contain the POWER10 processor and to all associated models.
© Copyright International Business Machines Corporation 2022, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents

Safety notices........................................................................................................v

Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and
IBM Power S1014 (9105-41B)............................................................................ 1
Installing a rack-based server..................................................................................................................... 1
Prerequisite for installing the rack-mounted server..............................................................................1
Completing inventory for your server.................................................................................................... 2
Determining and marking the location in the rack................................................................................ 2
Attaching the mounting hardware to the rack....................................................................................... 4
Installing the system into the rack.........................................................................................................7
Installing the cable-management arm.................................................................................................. 8
Cabling the server and setting up a console.......................................................................................... 9
Cabling the server and connecting expansion units............................................................................15
Completing the server setup................................................................................................................16
Installing a stand-alone server..................................................................................................................19
Prerequisite for installing the stand-alone server...............................................................................19
Moving the server to the installation site.............................................................................................20
Completing inventory for your stand-alone server..............................................................................20
Cabling the server and setting up a console........................................................................................20
Completing the server setup................................................................................................................27
Setting up a preinstalled server.................................................................................................................30
Prerequisite for installing the preinstalled server............................................................................... 30
Completing inventory for your preinstalled server..............................................................................30
Removing the shipping bracket and connecting power cords and power distribution unit (PDU)
for your preinstalled server.............................................................................................................31
Setting up a console............................................................................................................................. 31
Routing cables through the cable-management arm and connecting expansion units.....................33
Completing the server setup................................................................................................................33

Notices................................................................................................................37
Accessibility features for IBM Power servers........................................................................................... 38
Privacy policy considerations ................................................................................................................... 39
Trademarks................................................................................................................................................ 39
Electronic emission notices.......................................................................................................................39
Class A Notices..................................................................................................................................... 40
Class B Notices..................................................................................................................................... 43
Terms and conditions.................................................................................................................................45

Safety notices
Safety notices may be printed throughout this guide:
• DANGER notices call attention to a situation that is potentially lethal or extremely hazardous to people.
• CAUTION notices call attention to a situation that is potentially hazardous to people because of some
existing condition.
• Attention notices call attention to the possibility of damage to a program, device, system, or data.

World Trade safety information


Several countries require the safety information contained in product publications to be presented in
their national languages. If this requirement applies to your country, safety information documentation is
included in the publications package (such as in printed documentation, on DVD, or as part of the product)
shipped with the product. The documentation contains the safety information in your national language
with references to the U.S. English source. Before using a U.S. English publication to install, operate, or
service this product, you must first become familiar with the related safety information documentation.
You should also refer to the safety information documentation any time you do not clearly understand any
safety information in the U.S. English publications.
Replacement or additional copies of safety information documentation can be obtained by calling the IBM
Hotline at 1-800-300-8751.

German safety information


The product is not suitable for use at visual display unit workstations within the meaning of § 2 of the German Bildschirmarbeitsverordnung (Ordinance on Work with Visual Display Units).

Laser safety information


IBM® servers can use I/O cards or features that are fiber-optic based and that utilize lasers or LEDs.
Laser compliance
IBM servers may be installed inside or outside of an IT equipment rack.
DANGER: When working on or around the system, observe the following precautions:
Electrical voltage and current from power, telephone, and communication cables are hazardous. To
avoid a shock hazard: If IBM supplied the power cord(s), connect power to this unit only with the
IBM provided power cord. Do not use the IBM provided power cord for any other product. Do not
open or service any power supply assembly. Do not connect or disconnect any cables or perform
installation, maintenance, or reconfiguration of this product during an electrical storm.

• The product might be equipped with multiple power cords. To remove all hazardous
voltages, disconnect all power cords. For AC power, disconnect all power cords from their AC power
source. For racks with a DC power distribution panel (PDP), disconnect the customer’s DC power source
to the PDP.
• When connecting power to the product ensure all power cables are properly connected. For racks with
AC power, connect all power cords to a properly wired and grounded electrical outlet. Ensure that the
outlet supplies proper voltage and phase rotation according to the system rating plate. For racks with a
DC power distribution panel (PDP), connect the customer’s DC power source to the PDP. Ensure that the
proper polarity is used when attaching the DC power and DC power return wiring.
• Connect any equipment that will be attached to this product to properly wired outlets.



• When possible, use one hand only to connect or disconnect signal cables.
• Never turn on any equipment when there is evidence of fire, water, or structural damage.
• Do not attempt to switch on power to the machine until all possible unsafe conditions are corrected.
• When performing a machine inspection: Assume that an electrical safety hazard is present. Perform
all continuity, grounding, and power checks specified during the subsystem installation procedures to
ensure that the machine meets safety requirements. Do not attempt to switch power to the machine
until all possible unsafe conditions are corrected. Before you open the device covers, unless instructed
otherwise in the installation and configuration procedures: Disconnect the attached AC power cords,
turn off the applicable circuit breakers located in the rack power distribution panel (PDP), and
disconnect any telecommunications systems, networks, and modems.
• Connect and disconnect cables as described in the following procedures when installing, moving, or
opening covers on this product or attached devices.
To Disconnect: 1) Turn off everything (unless instructed otherwise). 2) For AC power, remove the power
cords from the outlets. 3) For racks with a DC power distribution panel (PDP), turn off the circuit
breakers located in the PDP and remove the power from the Customer's DC power source. 4) Remove
the signal cables from the connectors. 5) Remove all cables from the devices.
To Connect: 1) Turn off everything (unless instructed otherwise). 2) Attach all cables to the devices. 3)
Attach the signal cables to the connectors. 4) For AC power, attach the power cords to the outlets. 5)
For racks with a DC power distribution panel (PDP), restore the power from the Customer's DC power
source and turn on the circuit breakers located in the PDP. 6) Turn on the devices.

• Sharp edges, corners and joints may be present in and around the system. Use
care when handling equipment to avoid cuts, scrapes and pinching. (D005)
(R001 part 1 of 2):
DANGER: Observe the following precautions when working on or around your IT rack system:
• Heavy equipment–personal injury or equipment damage might result if mishandled.
• Always lower the leveling pads on the rack cabinet.
• Always install stabilizer brackets on the rack cabinet if provided, unless the earthquake option is
to be installed.
• To avoid hazardous conditions due to uneven mechanical loading, always install the heaviest
devices in the bottom of the rack cabinet. Always install servers and optional devices starting
from the bottom of the rack cabinet.
• Rack-mounted devices are not to be used as shelves or work spaces. Do not place objects on top
of rack-mounted devices. In addition, do not lean on rack mounted devices and do not use them
to stabilize your body position (for example, when working from a ladder).

• Stability hazard:
– The rack may tip over causing serious personal injury.
– Before extending the rack to the installation position, read the installation instructions.
– Do not put any load on the slide-rail mounted equipment mounted in the installation position.
– Do not leave the slide-rail mounted equipment in the installation position.
• Each rack cabinet might have more than one power cord.
– For AC powered racks, be sure to disconnect all power cords in the rack cabinet when directed
to disconnect power during servicing.

– For racks with a DC power distribution panel (PDP), turn off the circuit breaker that controls
the power to the system unit(s), or disconnect the customer’s DC power source, when directed
to disconnect power during servicing.
• Connect all devices installed in a rack cabinet to power devices installed in the same rack
cabinet. Do not plug a power cord from a device installed in one rack cabinet into a power device
installed in a different rack cabinet.
• An electrical outlet that is not correctly wired could place hazardous voltage on the metal parts
of the system or the devices that attach to the system. It is the responsibility of the customer to
ensure that the outlet is correctly wired and grounded to prevent an electrical shock. (R001 part
1 of 2)
(R001 part 2 of 2):
CAUTION:
• Do not install a unit in a rack where the internal rack ambient temperatures will exceed the
manufacturer's recommended ambient temperature for all your rack-mounted devices.
• Do not install a unit in a rack where the air flow is compromised. Ensure that air flow is not
blocked or reduced on any side, front, or back of a unit used for air flow through the unit.
• Consideration should be given to the connection of the equipment to the supply circuit so that
overloading of the circuits does not compromise the supply wiring or overcurrent protection.
To provide the correct power connection to a rack, refer to the rating labels located on the
equipment in the rack to determine the total power requirement of the supply circuit.
• (For sliding drawers.) Do not pull out or install any drawer or feature if the rack stabilizer brackets
are not attached to the rack or if the rack is not bolted to the floor. Do not pull out more than one
drawer at a time. The rack might become unstable if you pull out more than one drawer at a time.

• (For fixed drawers.) This drawer is a fixed drawer and must not be moved for servicing unless
specified by the manufacturer. Attempting to move the drawer partially or completely out of the
rack might cause the rack to become unstable or cause the drawer to fall out of the rack. (R001
part 2 of 2)
CAUTION: Removing components from the upper positions in the rack cabinet improves rack
stability during relocation. Follow these general guidelines whenever you relocate a populated rack
cabinet within a room or building.
• Reduce the weight of the rack cabinet by removing equipment starting at the top of the rack
cabinet. When possible, restore the rack cabinet to the configuration of the rack cabinet as you
received it. If this configuration is not known, you must observe the following precautions:
– Remove all devices in the 32U position (compliance ID RACK-001) or 22U position (compliance ID RR001) and above.
– Ensure that the heaviest devices are installed in the bottom of the rack cabinet.



– Ensure that there are little-to-no empty U-levels between devices installed in the rack cabinet below the 32U (compliance ID RACK-001) or 22U (compliance ID RR001) level, unless the received configuration specifically allowed it.
• If the rack cabinet you are relocating is part of a suite of rack cabinets, detach the rack cabinet
from the suite.
• If the rack cabinet you are relocating was supplied with removable outriggers they must be
reinstalled before the cabinet is relocated.
• Inspect the route that you plan to take to eliminate potential hazards.
• Verify that the route that you choose can support the weight of the loaded rack cabinet. Refer to
the documentation that comes with your rack cabinet for the weight of a loaded rack cabinet.
• Verify that all door openings are at least 760 x 2083 mm (30 x 82 in.).
• Ensure that all devices, shelves, drawers, doors, and cables are secure.
• Ensure that the four leveling pads are raised to their highest position.
• Ensure that there is no stabilizer bracket installed on the rack cabinet during movement.
• Do not use a ramp inclined at more than 10 degrees.
• When the rack cabinet is in the new location, complete the following steps:
– Lower the four leveling pads.
– Install stabilizer brackets on the rack cabinet or in an earthquake environment bolt the rack to
the floor.
– If you removed any devices from the rack cabinet, repopulate the rack cabinet from the lowest
position to the highest position.
• If a long-distance relocation is required, restore the rack cabinet to the configuration of the rack
cabinet as you received it. Pack the rack cabinet in the original packaging material, or equivalent.
Also lower the leveling pads to raise the casters off of the pallet and bolt the rack cabinet to the
pallet.
(R002)
(L001)

DANGER: Hazardous voltage, current, or energy levels are present inside any component that has
this label attached. Do not open any cover or barrier that contains this label. (L001)
(L002)

DANGER: Rack-mounted devices are not to be used as shelves or work spaces. Do not place
objects on top of rack-mounted devices. In addition, do not lean on rack-mounted devices and do
not use them to stabilize your body position (for example, when working from a ladder). Stability
hazard:
• The rack may tip over causing serious personal injury.

• Before extending the rack to the installation position, read the installation instructions.
• Do not put any load on the slide-rail mounted equipment mounted in the installation position.
• Do not leave the slide-rail mounted equipment in the installation position.
(L002)
(L003)

DANGER: Multiple power cords. The product might be equipped with multiple AC power cords
or multiple DC power cables. To remove all hazardous voltages, disconnect all power cords and
power cables. (L003)
(L007)

CAUTION: A hot surface nearby. (L007)

(L008)

CAUTION: Hazardous moving parts nearby. (L008)

(L018)

CAUTION: High levels of acoustical noise are (or could be under certain circumstances) present. Use approved hearing protection and/or provide mitigation or limit exposure. (L018)
(L031)

CAUTION:

Enclosure Integrity.
• Access covers are intended only for occasional removal.
• Follow documented procedures when opening during live or temporary service.
• When service is complete, promptly reinstall all covers, lids, and/or doors for correct operation.
(L031)
All lasers are certified in the U.S. to conform to the requirements of DHHS 21 CFR Subchapter J for class
1 laser products. Outside the U.S., they are certified to be in compliance with IEC 60825 as a class 1 laser
product. Consult the label on each part for laser certification numbers and approval information.
CAUTION: This product might contain one or more of the following devices: CD-ROM drive, DVD-
ROM drive, DVD-RAM drive, or laser module, which are Class 1 laser products. Note the following
information:
• Do not remove the covers. Removing the covers of the laser product could result in exposure to
hazardous laser radiation. There are no serviceable parts inside the device.
• Use of the controls or adjustments or performance of procedures other than those specified
herein might result in hazardous radiation exposure.
(C026)
CAUTION: Data processing environments can contain equipment transmitting on system links with
laser modules that operate at greater than Class 1 power levels. For this reason, never look into
the end of an optical fiber cable or open receptacle. Although shining light into one end and looking
into the other end of a disconnected optical fiber to verify the continuity of optic fibers may not
injure the eye, this procedure is potentially dangerous. Therefore, verifying the continuity of optical
fibers by shining light into one end and looking at the other end is not recommended. To verify
continuity of a fiber optic cable, use an optical light source and power meter. (C027)
CAUTION: This product contains a Class 1M laser. Do not view directly with optical instruments.
(C028)
CAUTION: Some laser products contain an embedded Class 3A or Class 3B laser diode. Note the
following information:
• Laser radiation when open.
• Do not stare into the beam, do not view directly with optical instruments, and avoid direct
exposure to the beam. (C030)
CAUTION: The battery contains lithium. To avoid possible explosion, do not burn or charge the
battery.
Do Not:
• Throw or immerse into water
• Heat to more than 100 degrees C (212 degrees F)
• Repair or disassemble

Exchange only with the IBM-approved part. Recycle or discard the battery as instructed by
local regulations. In the United States, IBM has a process for the collection of this battery. For
information, call 1-800-426-4333. Have the IBM part number for the battery unit available when
you call. (C003)
CAUTION: Regarding IBM provided VENDOR LIFT TOOL:
• Operation of LIFT TOOL by authorized personnel only.
• LIFT TOOL intended for use to assist, lift, install, remove units (load) up into rack elevations. It is
not to be used loaded transporting over major ramps nor as a replacement for such designated
tools like pallet jacks, walkies, fork trucks and such related relocation practices. When this is not
practicable, specially trained persons or services must be used (for instance, riggers or movers).
• Read and completely understand the contents of LIFT TOOL operator's manual before using.
Failure to read, understand, obey safety rules, and follow instructions may result in property
damage and/or personal injury. If there are questions, contact the vendor's service and support.
Local paper manual must remain with machine in provided storage sleeve area. Latest revision
manual available on vendor's web site.
• Test verify stabilizer brake function before each use. Do not over-force moving or rolling the LIFT
TOOL with stabilizer brake engaged.
• Do not raise, lower or slide platform load shelf unless stabilizer (brake pedal jack) is fully
engaged. Keep stabilizer brake engaged when not in use or motion.
• Do not move LIFT TOOL while platform is raised, except for minor positioning.
• Do not exceed rated load capacity. See LOAD CAPACITY CHART regarding maximum loads at
center versus edge of extended platform.
• Only raise load if properly centered on platform. Do not place more than 200 lb (91 kg) on edge
of sliding platform shelf also considering the load's center of mass/gravity (CoG).
• Do not corner load the platforms, tilt riser, angled unit install wedge or other such accessory
options. Secure such platforms -- riser tilt, wedge, etc options to main lift shelf or forks in all four
(4x or all other provisioned mounting) locations with provided hardware only, prior to use. Load
objects are designed to slide on/off smooth platforms without appreciable force, so take care not
to push or lean. Keep riser tilt [adjustable angling platform] option flat at all times except for final
minor angle adjustment when needed.
• Do not stand under overhanging load.
• Do not use on uneven surface, incline or decline (major ramps).
• Do not stack loads.
• Do not operate while under the influence of drugs or alcohol.
• Do not support ladder against LIFT TOOL (unless the specific allowance is provided for one
following qualified procedures for working at elevations with this TOOL).
• Tipping hazard. Do not push or lean against load with raised platform.
• Do not use as a personnel lifting platform or step. No riders.
• Do not stand on any part of lift. Not a step.
• Do not climb on mast.
• Do not operate a damaged or malfunctioning LIFT TOOL machine.
• Crush and pinch point hazard below platform. Only lower load in areas clear of personnel and
obstructions. Keep hands and feet clear during operation.
• No Forks. Never lift or move bare LIFT TOOL MACHINE with pallet truck, jack or fork lift.
• Mast extends higher than platform. Be aware of ceiling height, cable trays, sprinklers, lights, and
other overhead objects.
• Do not leave LIFT TOOL machine unattended with an elevated load.
• Watch and keep hands, fingers, and clothing clear when equipment is in motion.

• Turn Winch with hand power only. If winch handle cannot be cranked easily with one hand, it
is probably over-loaded. Do not continue to turn winch past top or bottom of platform travel.
Excessive unwinding will detach handle and damage cable. Always hold handle when lowering,
unwinding. Always assure self that winch is holding load before releasing winch handle.
• A winch accident could cause serious injury. Not for moving humans. Make certain clicking sound
is heard as the equipment is being raised. Be sure winch is locked in position before releasing
handle. Read instruction page before operating this winch. Never allow winch to unwind freely.
Freewheeling will cause uneven cable wrapping around winch drum, damage cable, and may
cause serious injury.
• This TOOL must be maintained correctly for IBM Service personnel to use it. IBM shall inspect
condition and verify maintenance history before operation. Personnel reserve the right not to use
TOOL if inadequate. (C048)
CAUTION: This equipment is not suitable for use in locations where children are likely to be
present. (C052)

Power and cabling information for NEBS (Network Equipment-Building System)


GR-1089-CORE
The following comments apply to the IBM servers that have been designated as conforming to NEBS
(Network Equipment-Building System) GR-1089-CORE:
The equipment is suitable for installation in the following:
• Network telecommunications facilities
• Locations where the NEC (National Electrical Code) applies
The intra-building ports of this equipment are suitable for connection to intra-building or unexposed
wiring or cabling only. The intra-building ports of this equipment must not be metallically connected to the
interfaces that connect to the OSP (outside plant) or its wiring. These interfaces are designed for use as
intrabuilding interfaces only (Type 2 or Type 4 ports as described in GR-1089-CORE) and require isolation
from the exposed OSP cabling. The addition of primary protectors is not sufficient protection to connect
these interfaces metallically to OSP wiring.
Note: All Ethernet cables must be shielded and grounded at both ends.
The AC-powered system does not require the use of an external surge protection device (SPD).
The DC-powered system employs an isolated DC return (DC-I) design. The DC battery return terminal
shall not be connected to the chassis or frame ground.
The DC-powered system is intended to be installed in a common bonding network (CBN) as described in
GR-1089-CORE.

Safety notices xiii


Installing the IBM Power S1024 (9105-42A), IBM
Power L1024 (9786-42H), and IBM Power S1014
(9105-41B)
Use this information to learn about installing the IBM Power S1024 (9105-42A), IBM Power L1024
(9786-42H), and IBM Power S1014 (9105-41B).

Installing a rack-based server


Use this information to learn about installing a rack-based server.

Prerequisite for installing the rack-mounted server


Use the information to understand the prerequisites that are required for installing the server.

About this task


Important: If you are installing an ENZ0 PCIe4 expansion drawer below the following IBM systems, ensure
that you leave at least 1 EIA unit of open space between the system and the drawer, and install a single
EIA unit rack filler in that space. This allows for proper servicing of the drawer.
1. NED24 NVMe expansion drawer
2. 9105-22A
3. 9105-22B
4. 9105-41B
5. 9105-42A
6. 9786-22H
7. 9786-42H
8. 9043-MRX
This ensures that the ENZ0 PCIe4 expansion drawer's cable management arm has enough clearance for
service procedures.
You might need to read the following documents before you begin to install the server:
• The latest version of this document is maintained online. See Installing the IBM
Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM Power S1014 (9105-41B) (http://
www.ibm.com/support/knowledgecenter/POWER10/p10jae/p10jae_roadmap.htm).
• To plan your server installation, see Planning for the system (http://www.ibm.com/support/
knowledgecenter/POWER10/p10jae/p10jae_kickoff.htm).
• To download HMC updates and fixes, see the Hardware Management Console Support and downloads
website (https://www14.software.ibm.com/webapp/set2/sas/f/hmcl/home.html).
Consider the following prerequisites before you install the server:

Procedure
1. Ensure that you have the following items before you start your installation:
• Phillips screwdriver
• Flat-head screwdriver
• Rack with four units of space

© Copyright IBM Corp. 2022, 2023 1


2. Ensure that you have one of the following consoles:
• HMC at version 10 release 2.0, or later.
• Graphic monitor with keyboard and mouse.
• Teletype (tty) monitor with keyboard.

Completing inventory for your server


Use this information to complete inventory for your server.

About this task


To complete the inventory, complete the following steps:

Procedure
1. Verify that you received all the boxes you ordered.
2. Unpack the server components as needed.
3. Complete a parts inventory before you install each server component by following these steps:
a. Locate the inventory list for your server.
b. Ensure that you received all the parts that you ordered.
Note: Your order information is included with your product. You can also obtain the order
information from your marketing representative or the IBM Business Partner.

Determining and marking the location in the rack


You might need to determine where to install the system unit into the rack.

About this task


To determine where to install the system unit into a rack, complete the following steps:

Procedure
1. Read the Rack safety notices (http://www.ibm.com/support/knowledgecenter/POWER10/p10hbf/
p10hbf_racksafety.htm).
2. Determine where to place the system unit in the rack. As you plan for installing the system unit in a
rack, consider the following information:
• Organize larger and heavier units into the lower part of the rack.
• Plan to install system units into the lower part of the rack first.
• Record the Electronic Industries Alliance (EIA) locations in your plan.
Note: This server is four EIA units high. An EIA unit is 44.45 mm (1.75 in.) in height. The rack
contains three mounting holes for each EIA unit of height. This system unit, therefore, is 177.8 mm (7
in.) high and covers 12 mounting holes in the rack.
3. If necessary, remove the filler panels to allow access to the inside of the rack enclosure where you
plan to place the unit, as shown in Figure 1 on page 3.

Figure 1. Removing the filler panels
4. Determine where to place the system in the rack. Record the EIA location.
Note: An EIA unit on your rack consists of a grouping of three holes.
5. Facing the front of the rack and working from the right side of the rack, use tape, a marker, or pencil
to mark the lowest two holes of the lowest EIA unit. Next, mark the lowest hole on the EIA unit
directly above this EIA unit.
6. Repeat step “5” on page 3 for the corresponding holes located on the left side of the rack.
7. Go to the rear of the rack.
8. On the right side, find the EIA unit that corresponds to the bottom EIA unit marked on the front of the
rack.
9. Mark the bottom hole in the EIA unit and the top hole in the EIA unit.
10. Mark the corresponding holes on the left side of the rack.
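The arithmetic in the note in step 2 can be sanity-checked with a few lines of Python. This is an illustrative sketch only; the constants (44.45 mm per EIA unit, three holes per unit, four units for this server) come from the note itself.

```python
# Sanity-check the rack-space arithmetic from the note in step 2.
EIA_UNIT_MM = 44.45        # height of one EIA unit (1.75 in.)
HOLES_PER_EIA_UNIT = 3     # mounting holes per EIA unit of rack height
SERVER_EIA_UNITS = 4       # this server is four EIA units high

height_mm = SERVER_EIA_UNITS * EIA_UNIT_MM
holes_covered = SERVER_EIA_UNITS * HOLES_PER_EIA_UNIT

print(height_mm)       # -> 177.8
print(holes_covered)   # -> 12
```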

Attaching the mounting hardware to the rack
You might need to attach the mounting hardware to the rack. Use the procedure to complete this task.
The information is intended to promote safety and reliable operation, and includes illustrations of the
related hardware components and shows how these components relate to each other.

About this task


Attention: To avoid rail failure and potential danger to yourself and to the unit, ensure that you
have the correct rails and fittings for your rack. If your rack has square support flange holes or
screw-thread support flange holes, ensure that the rails and fittings match the support flange
holes that are used on your rack. Do not install mismatched hardware by using washers or spacers.
If you don’t have the correct rails and fittings for your rack, contact your reseller.
To install the rack-mounting hardware into the rack, complete the following steps:

Procedure
1. Standing at the front of the rack, align the pins on the end of the left rail (1) with the rear of the rack.

Figure 2. Aligning the end of the left rail to the rear of the rack
2. Push the rails into the rear rack flanges until they click into place (2).

Figure 3. Pushing the rails into the rear rack flanges until they click into place
3. Swivel the rail retention bracket out (3) and pull the front of the rail toward the front of the rack, until
the pins are aligned with the correct holes in the rack (4).

Figure 4. Swiveling the retention bracket and aligning the pins
4. Swivel the rail retention bracket so that it locks onto the rack flange (5).

Figure 5. Locking the rail retention bracket onto the rack flange
5. Repeat these steps for the right rail.

Installing the system into the rack


Use the procedure to install the system into the rack.

About this task


Attention:
• Attach an electrostatic discharge (ESD) wrist strap to the front ESD jack, to the rear ESD jack,
or to an unpainted metal surface of your hardware to prevent the electrostatic discharge from
damaging your hardware.
• When you use an ESD wrist strap, follow all electrical safety procedures. An ESD wrist strap
is used for static control. It does not increase or decrease your risk of receiving electric shock
when using or working on electrical equipment.
• If you do not have an ESD wrist strap, just prior to removing the product from ESD packaging and
installing or replacing hardware, touch an unpainted metal surface of the system for a minimum
of 5 seconds.
CAUTION: This system requires three people to install the system into the rack.

To install the system into the rack, complete the following steps:

Procedure
1. Remove the shipping cover on the rear and the front of the system, if present.
2. Extend the outer slide rail forward until it stops, and then extend the inner slide rail until it clicks into place.
Carefully lift the server and tilt it into position over the slide rails so that the rear nail heads on the
server line up with the rear slots on the slide rails. Slide the server down until the rear nail heads slip
into the two rear slots. Then, slowly lower the front of the server, until the other nail heads slip into
the other slots on the slide rails. Ensure that the front latch slides over the nail heads until it clicks into
place.
3. Push the release buttons on both rails and push the server all the way into the rack until it clicks into
place.
4. Secure the system to the rack by installing two screws through the threaded holes.

Installing the cable-management arm


The cable-management arm is used to efficiently route the cables so that you have proper access to the
rear of the system. Use the procedure to install the cable-management arm.

About this task


To install the cable-management arm, complete the following steps:

Procedure
1. The cable-management arm can be installed on either side of the server. This procedure illustrates
installing it on the right side, as you face the server from the rear. If you want to install the cable
management arm on the other side of the rack, you can press the button on the extension tab (1) so
that it swivels in the opposite direction (2).

Figure 6. Swiveling the cable management arm extension tab
2. Insert the inner cable management arm tab into the inner mounting bracket until the outer mounting
bracket clicks into place.
Note: To avoid damage when the system is placed in the service position, ensure that the middle pin is
between each arm.
3. To route the cables through the cable management arm, press the latches on the cable management
arm to open the baskets, route the cables through the arm, and then re-latch the baskets until they are
fully seated.

Cabling the server and setting up a console


Your console, monitor, or interface choices are guided by whether you create logical partitions, which
operating system you install in your primary partition, and whether you install a Virtual I/O Server (VIOS)
in one of your logical partitions.

Determining which console to use


Your console, monitor, or interface choices are guided by whether you create logical partitions, which
operating system you install in your primary partition, and whether you install a Virtual I/O Server (VIOS)
in one of your logical partitions.
See the following table for the cabling setup instructions for the available console types.

Table 1. Available console types

ASCII terminal
• Operating system: AIX®, Linux®, or VIOS
• Logical partitions: Yes for VIOS; no for AIX and Linux
• Cable required: Ethernet (or cross-over cable)
• Cabling setup instructions: Logging on to the ASMI GUI (http://www.ibm.com/support/
knowledgecenter/POWER10/p10eih/p10eih_gui_loggingon.htm)

Hardware Management Console (HMC)
• Operating system: AIX, IBM i, Linux, or VIOS
• Logical partitions: Yes
• Cable required: Ethernet (or cross-over cable)
• Cabling setup instructions: “Cabling the server to the HMC” on page 11

Operations Console
• Operating system: IBM i (use your Operations Console to manage existing IBM i partitions)
• Logical partitions: Yes
• Cable required: Ethernet cable for LAN connection
• Cabling setup instructions: “Cabling the server and accessing Operations Console” on page 12

Accessing the eBMC so that you can manage the system


IBM® Power Systems servers use an enterprise baseboard management controller (eBMC) for system
service management, monitoring, maintenance, and control. The eBMC also provides access to the
system event log files (SEL). The eBMC is a specialized service processor that monitors the physical
state of the system by using sensors. A system administrator or service representative can communicate
with the eBMC through an independent connection.

About this task


Note: To manage POWER10 processor-based systems, the HMC must be at version 10 release 1.0,
service pack 1020, or later.

To access the eBMC by using your HMC, complete the following steps:

Procedure
1. Identify the port on the HMC that is enabled as a DHCP server and connect the new system to the
managed system network.
2. Connect each end of the power cables to the power supplies on the rear of the system, and connect
the other ends to a power source.
3. The HMC discovers the system and assigns it a default name. The name is the DHCP IP address you
are using, without the decimals. The server displays the Pending Authentication state.
4. You are prompted to set the HMC Access password that your HMC will use to authenticate and manage
the system. This is the same password that you will use to access the ASMI as admin. To set the
system password, select the server, then select Actions > Update System Password.
Note: The HMC Access password is also the eBMC ASMI admin password.
5. Click Finish.
6. Select System Actions > VMI configuration. Select the network interface, then select Modify.
Note: You can choose either T0 or T1. If you previously connected to T0, configure Eth0. If you
previously connected to T1 on the HMC network, configure Eth1.
7. Select DHCP and click OK.
8. Use the HMC to power on the system.
a. In the navigation area, select Resources > All Systems.
b. In the content pane, select the managed system.
c. In the navigation area, select System Actions > Operations > Power On.
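Step 3's naming rule (the default name is the DHCP IP address without the decimals) can be sketched as follows. This is an illustration only, not an IBM tool, and the example address is hypothetical.

```python
def default_system_name(dhcp_ip: str) -> str:
    """Default name the HMC assigns: the DHCP IP address with the decimals removed."""
    return dhcp_ip.replace(".", "")

# Hypothetical example address:
print(default_system_name("169.254.176.9"))  # -> 1692541769
```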

Accessing the eBMC without an HMC


To access the eBMC without using the HMC, complete the steps in this procedure.

About this task


To access the eBMC without using an HMC, complete the following steps:

Procedure
1. Connect an Ethernet cable between the ETH0 port on the rear of the system to a PC equipped with an
Ethernet port.
2. If you haven't already done so, connect the power cables to the power supplies. The panel displays
01 N.
3. Press the up arrow key to select 02 and press Enter.
4. Press Enter again. A < (less than symbol) appears next to N. Press the Up Arrow key. The N changes
to an M.
5. Press Enter.
6. Press Enter twice. 02 displays on the control panel.
7. Press the Up Arrow key until it returns 30 and press Enter.
8. Press Enter again. The panel now displays 3000. Press Enter.
9. Record the information that displays. You will need this information for a later step.
10. Move to your Ethernet-equipped device. Open your device's network configuration panel and assign
an IP that is the same as what you recorded in the previous step, but subtract 1. For instance, if you
recorded 169.254.176.9, then assign your laptop 169.254.176.8. Use subnet mask 255.255.0.0 on
the device; this matches the BMC's default.
11. Use your device to verify that you can connect using the address you used in the previous step, and
then attach a web browser to that IP and open ASMI.
12. Use the ASMI interface to set a new admin password. The initial login is admin/admin.
13. Set a new password. Ensure that you enter an acceptable password before proceeding to the next
step.
14. Configure ETH1 as a static IP. To configure ETH1 as a static IP, complete the following steps:
Note: You will need one available IP address for ETH1 on the BMC.
a. On the BMC, select Settings > Network > Eth1.
b. Select Add Static IPv4 Address.
c. Enter your IP address, gateway, and subnet information.
d. Click Add.
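The address arithmetic from step 10 (assign the recorded BMC address minus one, on the same /16 subnet) can be scripted as a quick helper. This is a sketch for illustration, not an IBM tool; the example address is the one used in the text.

```python
import ipaddress

def pc_address_for(bmc_ip: str) -> str:
    """Return the address one below the recorded BMC address, per step 10."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv4Address(bmc_ip)) - 1))

print(pc_address_for("169.254.176.9"))  # -> 169.254.176.8
# Pair this address with the BMC's default subnet mask, 255.255.0.0.
```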

Cabling the server with an ASCII terminal


Access the ASMI by using the eBMC interface.

About this task


Learn more about accessing the ASMI by using the eBMC interface.
To access the ASMI by using the eBMC interface, complete the following steps:

Procedure
1. For more information about launching the ASMI using the eBMC interface, see Launching the host
console (http://www.ibm.com/support/knowledgecenter/POWER10/p10eih/p10eih_gui_sol.htm).
2. Once you have performed the steps to launch the host console, return to these procedures.
3. Continue with “Completing the server setup” on page 16.

Cabling the server to the HMC


The Hardware Management Console (HMC) controls managed systems, including the management of
logical partitions, the creation of a virtual environment, and the use of capacity on demand. Using service
applications, the HMC can also communicate with managed systems to detect, consolidate, and forward
information to IBM service for analysis.

Before you begin


If you have not installed and configured your HMC, do so now. For instructions, see
Installation and configuration tasks (http://www.ibm.com/support/knowledgecenter/POWER10/p10hai/
p10hai_taskflow.htm).
To manage POWER10 processor-based systems, the HMC must be at version 10 release 2.0, or later. To
view the HMC version and release, complete the following steps:
1. In the navigation area, click Updates.
2. In the work area, view and record the information that appears in the HMC Code Level section,
including the HMC version, release, Service Pack, build level, and base versions.
To cable the server to the HMC, complete the following steps:

Procedure
1. If you want to directly attach your HMC to the managed system, connect ETH0 on the HMC to the
HMC0 port on the managed system.
2. To learn how to connect an HMC to a private network so that it can manage more than one
managed system, see HMC network connections (http://www.ibm.com/support/knowledgecenter/
POWER9/p10hai/p10hai_netconhmc.htm).
Notes:

• You can also have multiple systems that are attached to a switch that is then connected to the HMC.
For instructions, see HMC network connections (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_netconhmc.htm).
• If you are using a switch, ensure that the speed in the switch is set to Autodetection. If the
server is directly attached to the HMC, ensure the Ethernet adapter speed on the HMC is set to
Autodetection. For information about how to set media speeds, see Setting the media speed (http://
www.ibm.com/support/knowledgecenter/POWER9/p10hai/p10hai_lanmediaspeed_enh.htm).
3. If you are connecting a second HMC to your managed server, connect it to the Ethernet port that is
labeled HMC2 on the managed server.
4. Continue with “Cabling the server and connecting expansion units” on page 15.

Cabling the server and accessing Operations Console


You can use Operations Console to manage a server that is running the IBM i operating system even if you
do not have logical partitions.

Before you begin


You can access the Operations Console via a LAN connection to IBM i by using IBM i Access Client
Solutions (http://www-01.ibm.com/support/docview.wss?uid=isg3T1026805).
Note:
For more information about supported operating systems for IBM i Access for Windows, see IBM i Access
for Windows - Supported Operating Systems.
To cable the server and to access the Operations Console, complete the following steps:
1. Ensure that your server is powered off.
2. Obtain a static IP address that is assigned to the LAN console adapter on the server so that the
console can use it. Note the Internet Protocol (IP) address, subnet mask, and default gateway.
Optionally, select a unique host name and register the host name and the IP address in your site's
Domain Name System (DNS).
Note: This IP address is used by the Operations Console stack on the IBM i interface and is different
from the IP address that is used to connect a normal Telnet session. The IP address must not be in use
by another server. Ping the IP address on a PC connected to a network to verify that no other device is
using the IP address. You should not receive replies.
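The ping check in step 2 can be scripted, for example with Python's subprocess module. This sketch assumes a Unix-style `ping` on the PATH (on Windows the flags are `-n` and `-w`). A free address should produce no replies, so a False result is the desired outcome for the console IP you plan to assign.

```python
import subprocess

def ip_responds(ip: str, count: int = 3, timeout_s: int = 2) -> bool:
    """Return True if the address answers ping, meaning it is already in use."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Before assigning the console IP, ip_responds("<your candidate IP>")
# should return False (no other device is using it).
```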
To set up the Operations Console, complete the following steps:
1. Install IBM i Access Client Solutions (ACS) (http://www-01.ibm.com/support/docview.wss?
uid=isg3T1026805) on a network-connected personal computer.
Note: To run IBM i Access Client Solutions (ACS) on a workstation, you must install Java. ACS is a Java-
based program and Java is required to run ACS. For information about ACS Java requirements, see
IBM i Access - ACS Getting Started (https://www.ibm.com/support/pages/ibm-i-access-acs-getting-
started#3.0).
Note: It is recommended that you log onto the PC as the local administrator. This ensures that you
have all the privileges that you need to modify the PC and to start a console session. Also, ensure that
you are running the latest version of ACS. For more information, see IBM i Access - Client Solutions
5733XJ1 (https://www.ibm.com/support/pages/ibm-i-access-client-solutions-5733xj1).
2. Cable the PC to a server. Plug a Cat 5e or Cat 6 (recommended) Ethernet cable to the PC and into
a valid Ethernet adapter port. To determine the server adapter port that you must use, refer to the
following table:
Note: The T1 resource is required for console connectivity on any adapter. The T1 resource is either
the top or far-right port depending on how you are viewing the system.

Table 2. Server Operations Console LAN slots

Server                                                Operations Console - LAN slot
9105-41B                                              C7, C8, C9, C10, C11
9105-22A, 9105-22B, 9105-42A, 9786-22H, or 9786-42H   C0, C1, C2, C3, C4, C7, C8, C9, C10, C11

Note: Make the initial connection with the PC that is directly cabled to the server. The PC and
server can be re-cabled to the network after the initial connection is made and a static IP address
has been assigned to the Operations Console port. A cross-over cable is not needed. For more
information, see Adapter requirements (http://www.ibm.com/support/knowledgecenter/POWER10/
p10hbx/hardwarereq_adapter.htm).
3. Configure the PC network. To configure the PC network, complete the following steps:
a. Open Windows Control Panel and access the adapter settings. If you are using Windows 10,
select Control Panel > Network and Internet > Network and Sharing Center > Change Adapter
Settings.
b. Disable any additional adapters other than the Local Area Connection.
c. Right click the adapter and select Properties.
d. Click Internet Protocol Version 4 (TCP/IPv4) and select Properties.
Note: If you are returning the device to the network after you set up the Operations Console, record
the IP information that is displayed.
e. Select Obtain an IP address automatically. This ensures that the PC receives an IP address in the
169.254.x.x range.
4. To disable the PC firewall, complete the following steps.
Note: All PC firewalls must be disabled for the initial connection.
a. In the Windows control panel, click Firewall settings and disable the firewall.
b. In the Windows control panel, click Security center. Check for a firewall and, if present, disable it.
c. Scan all tasks that are running on the PC for any other software firewalls and disable the firewall.
5. Power on the server by completing the following steps:
a. Set the manual initial program load (IPL) by completing the following steps:
i) Locate the server's control panel.
ii) Press the Up arrow key until you see 02, and press Enter.
iii) Press Enter again. A < (less than symbol) appears next to N.
iv) Press the Up Arrow key. The N changes to an M.
v) Press Enter.
vi) Press Enter twice. A 02 is displayed on the control panel.
b. After you have the server set to a manual IPL, push the white power button to power on the server.
Note: During the IPL, the system displays C6004031 on the control panel, which indicates that the
system is searching for an Operations Console. The system might take 20 - 30 minutes to complete
this action. If A6005008 is displayed on the control panel, this means that no Operations Console
is available. This might indicate that the system is not preinstalled with IBM i and you must set the
console type to LAN.
6. Perform this step if the system is not preinstalled with IBM i. To set the console type to LAN,
complete the following steps:
a. Enable the control panel functions by completing the following steps:
i) Select function 25 on the control panel and press Enter. The return code must be 00.

ii) Select function 26 on the control panel and press Enter.
Note: If you see a FF return code, go back to function 25 and press Enter, then return to function
26 and press Enter.
b. Check your current settings. Use the console service functions (65+21+11) to check the current
setting.
• A600 500A = No console defined
• A603 500A = LAN console
• A604 500A = HMC console
If the system reference code (SRC) = A603500A, skip to step “7” on page 14. For all other SRCs,
continue with the next step.
c. Set console type to LAN.
For release 7.4 and earlier, complete the following steps.
i) Use the 65+21+11 sequences until it returns A603500B. This indicates that the console type
will be changed to LAN.
ii) Use function 21. This performs the change console type function.
iii) Use function 11 until it returns A6C3500C. This indicates that the settings were saved
successfully. If not, repeat function 11 until it returns A6C3500C.
d. For release 7.5 and later, complete the following steps.
i) Use the 65+11 sequences until it returns A603500B. This indicates that the console type will be
changed to LAN.
ii) Use function 21. This performs the change console type function.
iii) Use function 11 until it returns A6C3500C. This indicates that the settings were saved
successfully. If not, repeat function 11 until it returns A6C3500C.
Note: 65+21+11 functions are no longer needed unless directed by IBM support. The functions to set
an adapter location are now performed automatically by the Licensed Internal Code.
7. Connect the Operations Console by completing the following steps:
a. Open IBM i Access Client Solutions (ACS).
b. Under Management, click System Configurations.
c. Select Locate Console.
d. Click Search. After a few seconds, a connection displays. Click the connection and then click
Console.
e. In the Pending Authorization window, type the User ID and Password.
f. Accept the security certificate. Ensure that you accept it, otherwise your connection will not
continue. A console window opens. If the window is blank at first but the cursor is in the upper
left corner, it means that the screen is waiting for the Drive or DVD to provide the information to be
displayed.
8. To set a static IP address for the Operations Console, complete the following steps:
a. Sign on with QSECOFR. The default password is QSECOFR, and it is case-sensitive.
b. At the DST Main Menu, select Option 3- Use Dedicated Service Tools.
c. Select Option 5- Work with DST environment.
d. Select Option 2- System Devices.
e. Select Option 7- Configure service tools LAN adapter.
f. Type the IP settings that you want to use. Optional: For the host name for Service Tools, you can
type a host name if it is also registered in your network DNS. It is recommended that you type the
word Default and enter the IP address that you want to use.
g. Press F7 to store the information.

h. Press F17 to Deactivate the session and then press it again to Activate. This causes your session
to go blank. Close the session.
9. To create a connection to the static IP, complete the following steps:
a. Either move the PC and Operations Console port both to the network or re-configure the PC IP
settings to be in the same subnet that you just configured for the service tools LAN adapter.
b. Return to the ACS interface and select the window labeled System Configurations.
c. Click New.
d. If you will use this connection to connect to other functions, type the system name that you plan to
use in the General tab.
e. Click the Console tab.
f. Under the LAN Console/Virtual Control panel, type the IP address of the service tools LAN adapter
in the Service Host Name field.
g. Click OK.
h. In the main ACS menu, click System and select the system that you created.
i. Under Console, click 5250 Console. Continue with your IPL.
Note: The IP configuration of the PC must be reset before cabling the PC back to the network because the
PC is configured with the gateway IP address. The PC and server console port (T1) can now be re-cabled
to the network.
Continue with “Completing the server setup” on page 16.

Cabling the server and connecting expansion units


Learn how to cable the server and to connect expansion units.

About this task


To cable the server and to connect expansion units, complete the following steps:

Procedure
1. Complete the following steps:
a. Plug the power cord into the power supply.
Note: If present, remove and discard any plug that covers the ports on the rear of the system.
The port covers ensure that you are reminded about resetting the Administrator password of your
managed system after the initial program load (IPL) completes.
b. Plug the system power cords and the power cords for any other attached devices into the power
source.
c. If your system uses a power distribution unit (PDU), complete the following steps:
i) Connect the system power cords from the server and I/O drawers to the PDU with an IEC 320
type receptacle.
ii) Attach the PDU input power cord and plug it into the power source.
iii) If your system uses two PDUs for redundancy, complete the following steps:
• If your system has two power supplies, attach one power supply to each of the two PDUs.
• If your system has four power supplies, plug E0 and E1 into PDU A, and E2 and E3 into PDU B.
Note: Confirm that the system is in standby mode. The green power status indicator on the front
control panel is flashing, and the dc out indicator lights on the power supplies are flashing. If
none of the indicators are flashing, check the power cord connections.
2. For information about connecting enclosures and expansion units, see Enclosures and expansion units
(http://www.ibm.com/support/knowledgecenter/POWER10/p10ham/p10ham_kickoff.htm).

Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM Power S1014
(9105-41B) 15
Completing the server setup
Learn about the tasks you must complete to set up your managed system.
Select from the following options:
• “Completing the server setup using an HMC” on page 16
• “Completing the server setup without using an HMC” on page 18

Completing the server setup using an HMC


Perform these tasks to complete the server setup by using an HMC.

Completing the server setup by using an HMC with DHCP


Perform these tasks to complete the server setup by using an HMC that uses a DHCP network
configuration.

About this task


Note: Before you continue with this step, ensure that you have removed the orange system-to-rail locking
clips on each slide rail and pushed the system into the rack.
IBM® Power Systems servers use an enterprise baseboard management controller (eBMC) for system
service management, monitoring, maintenance, and control. The eBMC also provides access to the
system event log files (SEL). The eBMC is a specialized service processor that monitors the physical
state of the system by using sensors. A system administrator or service representative can communicate
with the eBMC through an independent connection.
Important: The Intelligent Platform Management Interface (IPMI) is disabled by default on your system
because of its inherent security vulnerabilities. Consider using Redfish APIs or the GUI to manage your
system instead. If you do need IPMI, you must enable the service and authorize the user before you can
use it.
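As a sketch of the Redfish alternative mentioned above: the service root path /redfish/v1/ is defined by the DMTF Redfish standard, but the BMC address and resource path used below are hypothetical placeholders, not values from this guide.

```python
# Minimal sketch of addressing the eBMC through the Redfish REST API rather
# than IPMI. "/redfish/v1/" is the standard Redfish service root; the host
# address and resource paths used here are hypothetical examples.

def redfish_url(bmc_host: str, resource: str = "/redfish/v1/") -> str:
    """Build an HTTPS URL for a Redfish resource on the eBMC."""
    return f"https://{bmc_host}{resource}"

service_root = redfish_url("169.254.176.9")
systems = redfish_url("169.254.176.9", "/redfish/v1/Systems")
# A real client would then issue, for example:
#   requests.get(systems, auth=("admin", password), verify=True)
```

An authorized user would query these URLs with any HTTPS client; the GUI described in this guide remains the supported day-to-day interface.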
Note: To manage your system through the eBMC by using your HMC, your HMC must be at Version 10
Release 1 Service Pack 1020, or later.
To access the eBMC by using your HMC, complete the following steps:

Procedure
1. Attach one end of the system power supply cable to a power source.
Note: Do not apply power at this time.
2. Identify the port on the HMC that is enabled as a DHCP server and connect the new system to the
managed system network.
3. Connect each end of the power cables to the power supplies on the rear of the system, and connect
the other ends to a power source.
4. The HMC discovers the system and assigns it a default name. The default name is the DHCP IP address
that the system is using, without the decimal points. The server displays the Pending Authentication state.
5. You are prompted to set the HMC Access password that your HMC will use to authenticate and manage
the system. This is the same password that you will use to access the ASMI as admin. To set the
system password, select the server, then select Actions > Update System Password.
Note: The HMC Access password is also the eBMC ASMI admin password.
6. Click Finish.
7. Select System Actions > VMI configuration. Select the network interface, then select Modify.
Note: You can choose either T0 or T1. If you previously connected to T0, configure Eth0. If you
previously connected to T1 on the HMC network, configure Eth1.
8. Select DHCP and click OK.

9. Use the HMC to power on the system.
a. In the navigation area, select Resources > All Systems.
b. In the content pane, select the managed system.
c. In the navigation area, select System Actions > Operations > Power On.
10. Check the time of day.
a. On the ASMI Welcome pane, specify your user ID and password, and click Log In.
b. In the navigation area, expand System Configuration.
c. Select Time of Day. The content pane displays a form that shows the current date (day, month,
and year) and time (hours, minutes, and seconds).
11. Check the firmware level of your managed system.
To check your managed system's firmware level, select Actions > Update Firmware > System
Firmware > View Current Levels.
12. If necessary, update your managed system firmware. Select Actions > Update Firmware > System
Firmware > Update.
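The default-name rule from step 4 can be illustrated with a short sketch; the address used here is a hypothetical example, not a value from this guide:

```python
# Sketch of the HMC's default naming rule described in step 4: the default
# system name is the DHCP IP address with the decimal points removed.
# The address below is a hypothetical example.

def default_system_name(dhcp_ip: str) -> str:
    return dhcp_ip.replace(".", "")

print(default_system_name("169.254.176.9"))  # prints "1692541769"
```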

Completing the server setup by using an HMC with a static network configuration
Perform these tasks to complete the server setup by using an HMC that uses a static network
configuration.

Before you begin


To complete this procedure, you must have two static IP addresses: one for the HMC1 port and one for
the VMI. Because the client uses static IP addresses, the admin password that you set when you log in
from your PC is the same password that you use when you select Connect Systems....

Procedure
1. Connect an Ethernet cable from the T2 (ETH0) port on the rear of the system to a PC equipped
with an Ethernet port, assuming that T3 (ETH1) is connected to the HMC.
2. If you haven't already done so, connect the power cables to the power supplies. The panel displays
01 N.
3. Press the up arrow key to select 02 and press Enter.
4. Press Enter again. A < (less than symbol) appears next to N. Press the Up Arrow key. The N changes
to an M.
5. Press Enter.
6. Press Enter twice. 02 displays on the control panel.
7. Press the Up Arrow key until it returns 30 and press Enter.
8. Press Enter again. The panel now displays 3000. Press Enter.
9. Record the information that displays. You will need this information for a later step.
10. Move to your Ethernet-equipped device. Open your device's network configuration panel and assign
an IP address that is one less than the address that you recorded in the previous step. For instance,
if you recorded 169.254.176.9, assign your laptop 169.254.176.8. Use subnet mask 255.255.0.0 on
the device. The address that you recorded is the BMC's default value.
11. Use your device to verify that you can reach the BMC address that you recorded, and then open the
ASMI in a web browser at that IP address.
12. Log in using the default user ID and password.
Note: The default user ID is admin and the default password is admin.
13. Use the ASMI interface to set a new admin password.
14. Ensure that you enter an acceptable password before proceeding to the next step.

15. Configure ETH1 as a static IP. To configure ETH1 as a static IP, complete the following steps:
Note: You will need one available IP address for ETH1 on the BMC.
a. On the BMC, select Settings > Network > Eth1.
b. Select Add Static IPv4 Address.
c. Enter your IP address, gateway, and subnet information.
d. Click Add.
16. Using the IP address that you configured above, add the system to your HMC. To add a managed
system so that it can be managed by your HMC, in the contents area, click Connect Systems... and
complete the fields.
Note: In the Connect Systems... window, you must provide the static IP address for the server being
added, and specify the username admin and the password that you set for admin. If you do not make
these specifications, the server will be unable to connect to the HMC. If you attempt to authenticate
using incorrect credentials too many times, the system will lock the admin password. If the admin
password is locked, remote support must generate and send the ACF file so that you can reset the
admin password before you continue.
Click OK.
17. Configure VMI. To configure VMI, select Operations > VMI Settings.
18. Type the VMI IP information and configure the IP type to be Static.
19. Use the HMC to power on the system.
a. In the navigation area, select Resources > All Systems.
b. In the content pane, select the managed system.
c. In the navigation area, select System Actions > Operations > Power On.
20. Check the firmware level of your managed system.
To check your managed system's firmware level, select Actions > Update Firmware > System
Firmware > View Current Levels.
21. If necessary, update your managed system firmware. Select Actions > Update Firmware > System
Firmware > Update.
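The address arithmetic in step 10 (assign the PC the address one below the BMC's link-local address, within the 255.255.0.0 subnet) can be sketched as follows; this is an illustration only, using the example address from the text:

```python
# Sketch of the step 10 arithmetic: derive the PC's temporary address from the
# BMC's default link-local address and confirm both fall in 169.254.0.0/16
# (subnet mask 255.255.0.0).
import ipaddress

def pc_address_for(bmc_ip: str) -> str:
    """Return the address one below the given BMC address."""
    return str(ipaddress.IPv4Address(bmc_ip) - 1)

link_local = ipaddress.IPv4Network("169.254.0.0/16")  # mask 255.255.0.0
pc_ip = pc_address_for("169.254.176.9")               # "169.254.176.8"
assert ipaddress.IPv4Address(pc_ip) in link_local
```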

Completing the server setup without using an HMC


If you do not have a Hardware Management Console (HMC), use this procedure to complete the server
setup.

About this task


To complete the server setup without using a management console, complete the following steps:

Procedure
1. Attach the server to the rack using the shipping screws that were provided with your system.
2. To check the firmware level on the managed system and the time of day, complete the following steps:
a. Access the Advanced System Management Interface (ASMI). For instructions, see
Accessing the ASMI without an HMC (www.ibm.com/support/knowledgecenter/POWER10/p10hby/
connect_asmi.htm).
b. On the ASMI Welcome pane, note the existing level of server firmware in the upper-right corner
under the copyright statement.
c. Update the date and time.
To automatically set the date and time, select NTP. Enter the NTP server address or addresses.
Click Save settings.
To manually set the date and time, select Manual. Enter the date and time. Click Save settings.

18 Power Systems: Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM
Power S1014 (9105-41B)
3. To start a system, complete the following steps:
a. Open the front door of the managed system.
b. Press the power button on the control panel.
The power-on light begins to flash faster. Then:
• The system cooling fans are activated after approximately 30 seconds and begin to accelerate to
operating speed.
• Progress indicators appear on the control panel display while the system is being started.
• The power-on light on the control panel stops flashing and remains on, indicating that the system is
powered on.
For instructions, see Starting a system that is not managed by an HMC (www.ibm.com/support/
knowledgecenter/POWER10/p10haj/startsysnohmc.htm).
4. Install an operating system and update the operating system.
• Install the AIX operating system. For instructions, see Installing AIX (http://www.ibm.com/support/
knowledgecenter/POWER10/p10hdx/p10hdx_installaix.htm).
• Install the Linux operating system. For instructions, see Installing Linux (http://www.ibm.com/
support/knowledgecenter/POWER10/p10hdx/p10hdx_installlinux.htm).
• Install the VIOS operating system. For instructions, see Installing VIOS (https://www.ibm.com/
support/knowledgecenter/POWER10/p10hb1/p10hb1_vios_install.htm).
• Install the IBM i operating system. For instructions, see Installing the IBM i operating system (http://
www.ibm.com/support/knowledgecenter/POWER10/p10hdx/p10hdx_ibmi.htm).
5. You have now completed the steps to install your server.

Installing a stand-alone server


Use this information to learn about setting up a stand-alone server.

Prerequisite for installing the stand-alone server


Use the information to understand the prerequisites that are required for setting up the preinstalled
server.

About this task


You might need to read the following documents before you begin to install the server:
• The latest version of this document is maintained online. See Installing the IBM
Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM Power S1014 (9105-41B) (http://
www.ibm.com/support/knowledgecenter/POWER10/p10jae/p10jae_roadmap.htm).
• To plan your server installation, see Planning for the system (http://www.ibm.com/support/
knowledgecenter/POWER10/p10jae/p10jae_kickoff.htm).
• To download HMC updates and fixes, see the Hardware Management Console Support and downloads
website (https://www14.software.ibm.com/webapp/set2/sas/f/hmcl/home.html).
Consider the following prerequisites before you install the server:

Procedure
1. Ensure that you have the following items before you start your installation:
• Phillips screwdriver
• Flat-head screwdriver
2. Ensure that you have one of the following consoles:

• Hardware Management Console (HMC): To manage POWER10 processor-based systems, the HMC
must be at version 10 release 2.0, or later.
• Graphic monitor with keyboard and mouse.
• Teletype (tty) monitor with keyboard.

Moving the server to the installation site


Learn how to move the stand-alone server to the installation site.

About this task


After you have unpacked your stand-alone server, move the server to the installation site.

Completing inventory for your stand-alone server


Use this information to complete inventory for your server.

About this task


To complete the inventory, complete the following steps:

Procedure
1. Verify that you received all the boxes you ordered.
2. Unpack the server components as needed.
3. Complete a parts inventory before you install each server component by following these steps:
a. Locate the inventory list for your server.
b. Ensure that you received all the parts that you ordered.
Note: Your order information is included with your product. You can also obtain the order
information from your marketing representative or the IBM Business Partner.

Cabling the server and setting up a console


Your console, monitor, or interface choices are guided by whether you create logical partitions, which
operating system you install in your primary partition, and whether you install a Virtual I/O Server (VIOS)
in one of your logical partitions.

Determining which console to use


Your console, monitor, or interface choices are guided by whether you create logical partitions, which
operating system you install in your primary partition, and whether you install a Virtual I/O Server (VIOS)
in one of your logical partitions.
Go to the instructions for the applicable console, interface, or terminal in the following table.

Table 3. Available console types

• ASCII terminal
  Operating system: AIX, Linux, or VIOS
  Logical partitions: Yes for VIOS, no for AIX and Linux
  Cable required: Ethernet (or cross-over cable)
  Cabling setup instructions: Logging on to the ASMI GUI (http://www.ibm.com/support/
knowledgecenter/POWER10/p10eih/p10eih_gui_loggingon.htm)
• Hardware Management Console (HMC)
  Operating system: AIX, IBM i, Linux, or VIOS
  Logical partitions: Yes
  Cable required: Ethernet (or cross-over cable)
  Cabling setup instructions: “Cabling the server to the HMC” on page 11
• Operations Console
  Operating system: IBM i (use your Operations Console to manage existing IBM i partitions)
  Logical partitions: Yes
  Cable required: Ethernet cable for LAN connection
  Cabling setup instructions: “Cabling the server and accessing Operations Console” on page 23

Accessing the eBMC so that you can manage the system


IBM® Power Systems servers use an enterprise baseboard management controller (eBMC) for system
service management, monitoring, maintenance, and control. The eBMC also provides access to the
system event log files (SEL). The eBMC is a specialized service processor that monitors the physical
state of the system by using sensors. A system administrator or service representative can communicate
with the eBMC through an independent connection.

About this task


Note: To manage POWER10 processor-based systems, the HMC must be at version 10 release 1.0,
service pack 1020, or later.
To access the eBMC by using your HMC, complete the following steps:

Procedure
1. Identify the port on the HMC that is enabled as a DHCP server and connect the new system to the
managed system network.
2. Connect each end of the power cables to the power supplies on the rear of the system, and connect
the other ends to a power source.
3. The HMC discovers the system and assigns it a default name. The default name is the DHCP IP address
that the system is using, without the decimal points. The server displays the Pending Authentication state.
4. You are prompted to set the HMC Access password that your HMC will use to authenticate and manage
the system. This is the same password that you will use to access the ASMI as admin. To set the
system password, select the server, then select Actions > Update System Password.
Note: The HMC Access password is also the eBMC ASMI admin password.
5. Click Finish.
6. Select System Actions > VMI configuration. Select the network interface, then select Modify.
Note: You can choose either T0 or T1. If you previously connected to T0, configure Eth0. If you
previously connected to T1 on the HMC network, configure Eth1.
7. Select DHCP and click OK.
8. Use the HMC to power on the system.
a. In the navigation area, select Resources > All Systems.
b. In the content pane, select the managed system.
c. In the navigation area, select System Actions > Operations > Power On.

Accessing the eBMC without an HMC
To access the eBMC without using the HMC, complete the steps in this procedure.

About this task


To access the eBMC without using an HMC, complete the following steps:

Procedure
1. Connect an Ethernet cable from the ETH0 port on the rear of the system to a PC equipped with an
Ethernet port.
2. If you haven't already done so, connect the power cables to the power supplies. The panel displays
01 N.
3. Press the up arrow key to select 02 and press Enter.
4. Press Enter again. A < (less than symbol) appears next to N. Press the Up Arrow key. The N changes
to an M.
5. Press Enter.
6. Press Enter twice. 02 displays on the control panel.
7. Press the Up Arrow key until it returns 30 and press Enter.
8. Press Enter again. The panel now displays 3000. Press Enter.
9. Record the information that displays. You will need this information for a later step.
10. Move to your Ethernet-equipped device. Open your device's network configuration panel and assign
an IP address that is one less than the address that you recorded in the previous step. For instance,
if you recorded 169.254.176.9, assign your laptop 169.254.176.8. Use subnet mask 255.255.0.0 on
the device. The address that you recorded is the BMC's default value.
11. Use your device to verify that you can reach the BMC address that you recorded, and then open the
ASMI in a web browser at that IP address.
12. Log in using the default user ID and password (admin/admin), and use the ASMI interface to set a
new admin password.
13. Ensure that you enter an acceptable password before proceeding to the next step.
14. Configure ETH1 as a static IP. To configure ETH1 as a static IP, complete the following steps:
Note: You will need one available IP address for ETH1 on the BMC.
a. On the BMC, select Settings > Network > Eth1.
b. Select Add Static IPv4 Address.
c. Enter your IP address, gateway, and subnet information.
d. Click Add.

Cabling the server with an ASCII terminal

About this task


Learn more about accessing the ASMI by using the eBMC interface.
To access the ASMI by using the eBMC interface, complete the following steps:

Procedure
1. For more information about launching the ASMI by using the eBMC interface, see Launching the host
console (http://www.ibm.com/support/knowledgecenter/POWER10/p10eih/p10eih_gui_sol.htm).
2. Once you have performed the steps to launch the host console, return to these procedures.
3. Continue with “Completing the server setup” on page 27.

Cabling the server to the HMC
The Hardware Management Console (HMC) controls managed systems, including the management of
logical partitions, the creation of a virtual environment, and the use of capacity on demand. Using service
applications, the HMC can also communicate with managed systems to detect, consolidate, and forward
information to IBM service for analysis.

Before you begin


If you have not installed and configured your HMC, do so now. For instructions, see
Installation and configuration tasks (http://www.ibm.com/support/knowledgecenter/POWER10/p10hai/
p10hai_taskflow.htm).
To manage POWER10 processor-based systems, the HMC must be at version 10 release 2.0, or later. To
view the HMC version and release, complete the following steps:
1. In the navigation area, click Updates.
2. In the work area, view and record the information that appears in the HMC Code Level section,
including the HMC version, release, Service Pack, build level, and base versions.
To cable the server to the HMC, complete the following steps:

Procedure
1. If you want to directly attach your HMC to the managed system, connect ETH0 on the HMC to the
HMC0 port on the managed system.
2. To learn how to connect an HMC to a private network so that it can manage more than one
managed system, see HMC network connections (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_netconhmc.htm).
Notes:
• You can also have multiple systems that are attached to a switch that is then connected to the HMC.
For instructions, see HMC network connections (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_netconhmc.htm).
• If you are using a switch, ensure that the speed in the switch is set to Autodetection. If the
server is directly attached to the HMC, ensure the Ethernet adapter speed on the HMC is set to
Autodetection. For information about how to set media speeds, see Setting the media speed (http://
www.ibm.com/support/knowledgecenter/POWER9/p10hai/p10hai_lanmediaspeed_enh.htm).
3. If you are connecting a second HMC to your managed server, connect it to the Ethernet port that is
labeled HMC2 on the managed server.
4. Continue with “Completing the server setup by using an HMC” on page 27.

Cabling the server and accessing Operations Console


You can use Operations Console to manage a server that is running the IBM i operating system even if you
do not have logical partitions.

Before you begin


You can access the Operations Console via a LAN connection to IBM i by using IBM i Access Client
Solutions (http://www-01.ibm.com/support/docview.wss?uid=isg3T1026805).
To cable the server and to access the Operations Console, complete the following steps:
1. Ensure that your server is powered off.
2. Obtain a static IP address that is assigned to the LAN console adapter on the server so that the
console can use it. Note the Internet Protocol (IP) address, subnet mask, and default gateway.
Optionally, select a unique host name and register the host name and the IP address in your site's
Domain Name System (DNS).

Note: This IP address is used by the Operations Console stack on the IBM i interface and is different
from the IP address that is used to connect a normal Telnet session. The IP address must not be in use
by another server. Ping the IP address on a PC connected to a network to verify that no other device is
using the IP address. You should not receive replies.
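Before committing the static console values in a later step, it can also help to confirm that the chosen IP address, subnet mask, and default gateway are mutually consistent. A small sketch (the addresses below are hypothetical examples, not values from this guide):

```python
# Sketch: verify that a planned Operations Console IP address and its default
# gateway share the subnet implied by the subnet mask. All addresses here are
# hypothetical examples.
import ipaddress

def same_subnet(ip: str, gateway: str, mask: str) -> bool:
    net = ipaddress.IPv4Network(f"{ip}/{mask}", strict=False)
    return ipaddress.IPv4Address(gateway) in net

print(same_subnet("192.168.1.50", "192.168.1.1", "255.255.255.0"))  # True
print(same_subnet("192.168.1.50", "192.168.2.1", "255.255.255.0"))  # False
```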
To set up the Operations Console, complete the following steps:
1. Install IBM i Access Client Solutions (ACS) (http://www-01.ibm.com/support/docview.wss?
uid=isg3T1026805) on a network-connected personal computer.
Note: To run IBM i Access Client Solutions (ACS) on a workstation, you must install Java because ACS
is a Java-based program. For information about ACS Java requirements, see IBM i Access - ACS Getting
Started (https://www.ibm.com/support/pages/ibm-i-access-acs-getting-started#3.0).
Note: It is recommended that you log onto the PC as the local administrator. This ensures that you
have all the privileges that you need to modify the PC and to start a console session. Also, ensure that
you are running the latest version of ACS. For more information, see IBM i Access - Client Solutions
5733XJ1 (https://www.ibm.com/support/pages/ibm-i-access-client-solutions-5733xj1).
2. Cable the PC to a server. Plug a Cat 5e or Cat 6 (recommended) Ethernet cable to the PC and into
a valid Ethernet adapter port. To determine the server adapter port that you must use, refer to the
following table:
Note: The T1 resource is required for console connectivity on any adapter. The T1 resource is either
the top or far-right port depending on how you are viewing the system.

Table 4. Server Operations Console LAN slots

• 9105-41B: C7, C8, C9, C10, C11
• 9105-22A, 9105-22B, 9105-42A, 9786-22H, or 9786-42H: C0, C1, C2, C3, C4, C7, C8, C9, C10, C11

Note: Make the initial connection with the PC that is directly cabled to the server. The PC and
server can be re-cabled to the network after the initial connection is made and a static IP address
has been assigned to the Operations Console port. A cross-over cable is not needed. For more
information, see Adapter requirements (http://www.ibm.com/support/knowledgecenter/POWER10/
p10hbx/hardwarereq_adapter.htm).
3. Configure the PC network. To configure the PC network, complete the following steps:
a. Open Windows Control Panel and access the adapter settings. If you are using Windows 10,
select Control Panel > Network and Internet > Network and Sharing Center > Change Adapter
Settings.
b. Disable any additional adapters other than the Local Area Connection.
c. Right click the adapter and select Properties.
d. Click Internet Protocol Version 4 (TCP/IPv4) and select Properties.
Note: If you are returning the device to the network after you set up the Operations Console, record
the IP information that is displayed.
e. Select Obtain an IP address automatically. This ensures that the PC receives an IP address in the
169.254.x.x range.
4. To disable the PC firewall, complete the following steps.
Note: All PC firewalls must be disabled for the initial connection.
a. In the Windows control panel, click Firewall settings and disable the firewall.
b. In the Windows control panel, click Security center. Check for a firewall and, if present, disable it.
c. Scan all tasks that are running on the PC for any other software firewalls and disable the firewall.

5. Power on the server by completing the following steps:
a. Set the manual initial program load (IPL) by completing the following steps:
i) Locate the server's control panel.
ii) Press the Up arrow key until you see 02, and press Enter.
iii) Press Enter again. A < (less than symbol) appears next to N.
iv) Press the Up Arrow key. The N changes to an M.
v) Press Enter.
vi) Press Enter twice. A 02 is displayed on the control panel.
b. After you have the server set to a manual IPL, push the white power button to power on the server.
Note: During the IPL, the system displays C6004031 on the control panel, which indicates that the
system is searching for an Operations Console. The system might take 20 - 30 minutes to complete
this action. If A6005008 is displayed on the control panel, this means that no Operations Console
is available. This might indicate that the system is not preinstalled with IBM i and you must set the
console type to LAN.
6. Perform this step if the system is not preinstalled with IBM i. For setting the console type to LAN,
complete the following steps:
a. Enable the control panel functions by completing the following steps:
i) Select function 25 on the control panel and press Enter. The return code must be 00.
ii) Select function 26 on the control panel and press Enter.
Note: If you see a FF return code, go back to function 25 and press Enter, then return to function
26 and press Enter.
b. Check your current setting(s). Use console service functions (65+21+11) to check the current
setting.
• A600 500A = No console defined
• A603 500A = LAN console
• A604 500A = HMC console
If the system reference code (SRC) = A603500A, skip to step “7” on page 25. For all other SRCs,
continue with the next step.
c. Set the console type to LAN.
For release 7.4 and earlier, complete the following steps.
i) Use the 65+21+11 sequence until it returns A603500B. This indicates that the console type
will be changed to LAN.
ii) Use function 21. This performs the change console type function.
iii) Use function 11 until it returns A6C3500C. This indicates that the settings have been saved
successfully. If not, repeat function 11 until it returns A6C3500C.
d. For release 7.5 and later, complete the following steps.
i) Use the 65+11 sequence until it returns A603500B. This indicates that the console type will be
changed to LAN.
ii) Use function 21. This performs the change console type function.
iii) Use function 11 until it returns A6C3500C. This indicates that the settings have been saved
successfully. If not, repeat function 11 until it returns A6C3500C.
Note: 65+21+11 functions are no longer needed unless directed by IBM support. The functions to set
an adapter location are now performed automatically by the Licensed Internal Code.
7. Connect the Operations Console by completing the following steps:
a. Open IBM i Access Client Solutions (ACS).

b. Under Management, click System Configurations.
c. Select Locate Console.
d. Click Search. After a few seconds, a connection displays. Click the connection and then click
Console.
e. In the Pending Authorization window, type the User ID and Password.
f. Accept the security certificate. Ensure that you accept it, otherwise your connection will not
continue. A console window opens. If the window is blank at first but the cursor is in the upper
left corner, it means that the screen is waiting for the Drive or DVD to provide the information to be
displayed.
8. To set a static IP address for the Operations Console, complete the following steps:
a. Sign on with QSECOFR. The default password is QSECOFR, and it is case-sensitive.
b. At the DST Main Menu, select Option 3 - Use Dedicated Service Tools.
c. Select Option 5 - Work with DST environment.
d. Select Option 2 - System Devices.
e. Select Option 7 - Configure service tools LAN adapter.
f. Type the IP settings that you want to use. Optional: For the host name for Service Tools, you can
type a host name if it is also registered in your network DNS. It is recommended that you type the
word Default and enter the IP address that you want to use.
g. Press F7 to store the information.
h. Press F17 to Deactivate the session and then press it again to Activate. This causes your session
to go blank. Close the session.
9. To create a connection to the static IP, complete the following steps:
a. Either move the PC and Operations Console port both to the network or re-configure the PC IP
settings to be in the same subnet that you just configured for the service tools LAN adapter.
b. Return to the ACS interface and select the window labeled System Configurations.
c. Click New.
d. If you will use this connection to connect to other functions, type the system name that you plan to
use in the General tab.
e. Click the Console tab.
f. Under the LAN Console/Virtual Control panel, type the IP address of the service tools LAN adapter
in the Service Host Name field.
g. Click OK.
h. In the main ACS menu, click System and select the system that you created.
i. Under Console, click 5250 Console. Continue with your IPL.
Note: The IP configuration of the PC must be reset before cabling the PC back to the network because the
PC is configured with the gateway IP address. The PC and server console port (T1) can now be re-cabled
to the network.
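The control panel system reference codes (SRCs) that appear during this console setup can be kept as a small lookup table when taking notes or scripting checks; this sketch simply restates the codes from steps 5 and 6 above and is not an official SRC reference:

```python
# Lookup table for the control panel SRCs mentioned in the preceding steps.
# Descriptions are paraphrased from this procedure, not from an SRC reference.
CONSOLE_SRCS = {
    "C6004031": "System is searching for an Operations Console",
    "A6005008": "No Operations Console is available",
    "A600500A": "No console defined",
    "A603500A": "LAN console",
    "A604500A": "HMC console",
    "A603500B": "Console type will be changed to LAN",
    "A6C3500C": "Settings have been saved successfully",
}

print(CONSOLE_SRCS["A603500A"])  # prints "LAN console"
```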
Continue with “Completing the server setup” on page 16.

Cabling the server and connecting expansion units


Learn how to cable the server and to connect expansion units.

About this task


To cable the server and to connect expansion units, complete the following steps:

26 Power Systems: Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM
Power S1014 (9105-41B)
Procedure
1. Ensure that you have cabled and set up a console. For more information, see “Cabling the server and
setting up a console” on page 20.
2. Complete the following steps:
a. Plug the power cord into the power supply.
Note: If present, remove and discard any plug that covers the ports on the rear of the system.
The port covers ensure that you are reminded about resetting the Administrator password of your
managed system after the initial program load (IPL) completes.
b. Plug the system power cords and the power cords for any other attached devices into the power
source.
c. If your system uses a power distribution unit (PDU), complete the following steps:
i) Connect the system power cords from the server and I/O drawers to the PDU with an IEC 320
type receptacle.
ii) Attach the PDU input power cord and plug it into the power source.
iii) If your system uses two PDUs for redundancy, complete the following steps:
• If your system has two power supplies, attach one power supply to each of the two PDUs.
• If your system has four power supplies, plug E0 and E1 to PDU A and E2 and E3 to PDU B.
Note: Confirm that the system is in standby mode. The green power status indicator on the front
control panel is flashing, and the dc out indicator lights on the power supplies are flashing. If
none of the indicators are flashing, check the power cord connections.
3. For information about connecting enclosures and expansion units, see Enclosures and expansion units
(http://www.ibm.com/support/knowledgecenter/POWER10/p10ham/p10ham_kickoff.htm).
4. Power on the managed system.
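The redundant-PDU rule in step 2c reduces to a simple mapping from power-supply positions to PDUs. The sketch below encodes it; the E0/E1 labels in the two-supply case are an assumption for illustration, since the procedure only says to attach one supply to each PDU.

```python
def pdu_assignment(num_supplies: int) -> dict:
    """Map power-supply positions to PDUs for redundant cabling.

    Two supplies: one supply to each PDU (E0/E1 labels assumed here).
    Four supplies: E0 and E1 to PDU A, E2 and E3 to PDU B, per the procedure.
    """
    if num_supplies == 2:
        return {"E0": "PDU A", "E1": "PDU B"}
    if num_supplies == 4:
        return {"E0": "PDU A", "E1": "PDU A", "E2": "PDU B", "E3": "PDU B"}
    raise ValueError("expected 2 or 4 power supplies")
```

Either arrangement ensures that the loss of one PDU leaves at least one powered supply on every server.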

Completing the server setup


Learn about the tasks you must complete to set up your managed system.
Install the front door onto the front of the system chassis. To install the front door, complete the following
tasks:
1. Align the door with the system chassis so that it is open 90 degrees.
2. Align the hinges on the door with the posts on the chassis.
3. Using your finger, push each hinge onto each pin, one at a time.

Completing the server setup by using an HMC


Perform these tasks to complete the server setup by using a Hardware Management Console (HMC). You
can also begin to use virtualization to consolidate multiple workloads onto fewer systems to increase
server use, and to reduce cost.

About this task


To manage POWER10 processor-based systems, the HMC must be at version 10 release 1.0, service pack
1020, or later.
If your system was preinstalled with an operating system, you must exit manufacturing default
configuration (MDC) mode so that you can open a console and access your operating system. To exit
MDC mode, complete the following steps:
1. Select Resources > All Systems.
2. Select System > Actions > View System Partitions.
3. Under Properties, select General Settings.

4. Select Power On Parameters and set the Partition Start Policy to User-Initiated.
To complete the server setup by using an HMC, complete the following steps:

Procedure
1. Change the managed system passwords.
For more information about setting passwords for the managed system by using the HMC,
see Setting passwords for the managed system (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_setpassword_enh.htm).
2. Update the time of day on the managed system by using the Advanced System Management Interface
(ASMI).
To connect to the Advanced System Management Interface, complete the following steps:
a. In the navigation area, click System resources, and then select Systems.
b. In the content area, select one or more managed systems, and then click Connections and
operations > Launch advanced system management (ASMI).
3. Check the firmware level on the managed system and update it as needed.
To view and update the system firmware, complete the following steps:
a. In the navigation area, click System resources, and then select Systems.
b. To view the firmware information of the system, select the server for which you want to view the
firmware information and click Firmware > View current system firmware levels.
c. Compare your installed firmware level with available firmware levels. For more information, see the
Fix Central website (http://www.ibm.com/support/fixcentral).
d. If necessary, update your managed system firmware levels. Click Firmware > Update system
firmware.
e. After you complete this task, click Close.
4. Compare your installed firmware level with available firmware levels. If necessary, update your
firmware levels.
a. Compare your installed firmware level with available firmware levels. For more information, see the
Fix Central website (http://www.ibm.com/support/fixcentral).
b. If necessary, update your managed system firmware levels. In the navigation area, select Actions >
Update Firmware > System Firmware > Update....
5. To power on a managed system, see Starting a system (http://www.ibm.com/support/
knowledgecenter/POWER10/p10haj/crustartsys.htm).
6. Create partitions using templates.
• If you are creating new partitions, you can use the templates that are on your HMC. For more
information, see Accessing the template library (http://www.ibm.com/support/knowledgecenter/
POWER10/p10efc/p10efc_accessing_template_library.htm).
• If you have existing partitions on another system, you can capture those configurations,
save them to the template library, and deploy the partition template. For more
information, see Partition templates (http://www.ibm.com/support/knowledgecenter/POWER10/
p10efc/p10efc_partition_template_concept.htm).
• If you want to use an existing template from another source, you can import it and use it. For more
information, see Importing a partition template (http://www.ibm.com/support/knowledgecenter/
POWER10/p10efc/p10efc_import_partition_template.htm).
7. Install an operating system and update the operating system.
• Install the AIX operating system. For instructions, see Installing AIX (http://www.ibm.com/support/
knowledgecenter/POWER10/p10hdx/p10hdx_installaix.htm).
• Install the Linux operating system. For instructions, see Installing Linux (http://www.ibm.com/
support/knowledgecenter/POWER10/p10hdx/p10hdx_installlinux.htm).

• Install the VIOS operating system. For instructions, see Installing VIOS (https://www.ibm.com/
support/knowledgecenter/POWER10/p10hb1/p10hb1_vios_install.htm).
• Install the IBM i operating system. For instructions, see Installing the IBM i operating system (http://
www.ibm.com/support/knowledgecenter/POWER10/p10hdx/p10hdx_ibmi.htm).
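Steps 3 and 4 above both come down to comparing an installed firmware level against the levels available on Fix Central. A small sketch of that comparison follows; the "FW<major>.<minor>" string format is an assumption for illustration only, so use the levels exactly as your HMC and Fix Central report them.

```python
import re

def firmware_is_older(installed: str, available: str) -> bool:
    """Compare two firmware levels written as 'FW<major>.<minor>' (assumed format)."""
    def parts(level: str) -> tuple:
        m = re.fullmatch(r"FW(\d+)\.(\d+)", level)
        if m is None:
            raise ValueError(f"unrecognized firmware level: {level}")
        return int(m.group(1)), int(m.group(2))
    # Tuple comparison orders by major level first, then minor level.
    return parts(installed) < parts(available)

print(firmware_is_older("FW1030.00", "FW1030.10"))  # True: an update is available
print(firmware_is_older("FW1030.10", "FW1030.10"))  # False: already current
```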

Completing the server setup without using an HMC


If you do not have a Hardware Management Console (HMC), use this procedure to complete the server
setup.

About this task


To complete the server setup without using a management console, complete the following steps:

Procedure
1. To check the firmware level on the managed system and the time of day, complete the following steps:
a. Access the Advanced System Management Interface (ASMI). For instructions, see
Accessing the ASMI without an HMC (www.ibm.com/support/knowledgecenter/POWER10/p10hby/
connect_asmi.htm).
b. On the ASMI Welcome pane, note the existing level of server firmware in the upper-right corner
under the copyright statement.
c. Update the date and time.
To automatically set the date and time, select NTP. Enter the NTP server address or addresses.
Click Save settings.
To manually set the date and time, select Manual. Enter the date and time. Click Save settings.
2. To start a system, complete the following steps:
a. Open the front door of the managed system.
b. Press the power button on the control panel.
The power-on light begins to flash faster. The system cooling fans are activated after
approximately 30 seconds and begin to accelerate to operating speed. Progress indicators appear
on the control panel display while the system is being started. The power-on light on the control
panel stops flashing and remains on, indicating that the system is powered on.
For instructions, see Starting a system that is not managed by an HMC (www.ibm.com/support/
knowledgecenter/POWER10/p10haj/startsysnohmc.htm).
3. Install an operating system and update the operating system.
• Install the AIX operating system. For instructions, see Installing AIX (http://www.ibm.com/support/
knowledgecenter/POWER10/p10hdx/p10hdx_installaix.htm).
• Install the Linux operating system. For instructions, see Installing Linux (http://www.ibm.com/
support/knowledgecenter/POWER10/p10hdx/p10hdx_installlinux.htm).
• Install the VIOS operating system. For instructions, see Installing VIOS (https://www.ibm.com/
support/knowledgecenter/POWER10/p10hb1/p10hb1_vios_install.htm).
• Install the IBM i operating system. For instructions, see Installing the IBM i operating system (http://
www.ibm.com/support/knowledgecenter/POWER10/p10hdx/p10hdx_ibmi.htm).
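When you choose the NTP option in step 1c, the service processor fetches the time itself, but it can be useful to confirm from a workstation beforehand that the NTP server you plan to enter actually answers. The sketch below is a minimal SNTP (RFC 4330) client in Python; the server name is a placeholder, and the parsing is split out so it can be checked without network access.

```python
import socket
import struct

# Seconds between the NTP epoch (1900) and the Unix epoch (1970).
NTP_EPOCH_OFFSET = 2208988800

def parse_sntp_reply(data: bytes) -> int:
    """Extract the transmit-timestamp seconds field (offset 40) from a
    48-byte SNTP reply and convert it to Unix time."""
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

def query_ntp(server: str) -> int:
    """Send a minimal SNTP client request (LI=0, VN=3, Mode=3) and return
    the server's reported Unix time."""
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    return parse_sntp_reply(data)
```

For example, `query_ntp("pool.ntp.org")` (a placeholder server name) should return a plausible current Unix timestamp if the server is reachable on UDP port 123.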

Setting up a preinstalled server
Learn how to set up a server that arrives preinstalled in a rack.

Prerequisite for installing the preinstalled server


Use this information to understand the prerequisites for setting up the preinstalled
server.

About this task


You might need to read the following documents before you begin to install the server:
• The latest version of this document is maintained online. See Installing the IBM
Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM Power S1014 (9105-41B) (http://
www.ibm.com/support/knowledgecenter/POWER10/p10jae/p10jae_roadmap.htm).
• To plan your server installation, see Planning for the system (http://www.ibm.com/support/
knowledgecenter/POWER10/p10jae/p10jae_kickoff.htm).
Consider the following prerequisites before you install the server:

Procedure
1. Ensure that you have the following items before you start your installation:
• Phillips screwdriver
• Flat-head screwdriver
2. Ensure that you have one of the following consoles:
• Hardware Management Console (HMC): To manage POWER10 processor-based systems, the HMC
must be at version 10 release 2.0, or later.
• Graphic monitor with keyboard and mouse.
• Teletype (tty) monitor with keyboard.

Completing inventory for your preinstalled server


Use this information to complete inventory for your server.

About this task


To complete the inventory, complete the following steps:

Procedure
1. Verify that you received all the boxes you ordered.
2. Unpack the server components as needed.
3. Complete a parts inventory before you install each server component by following these steps:
a. Locate the inventory list for your server.
b. Ensure that you received all the parts that you ordered.
Note: Your order information is included with your product. You can also obtain the order
information from your marketing representative or the IBM Business Partner.

Removing the shipping bracket and connecting power cords and power
distribution unit (PDU) for your preinstalled server
Before you set up a console, you must remove the shipping bracket and connect power cords.

About this task


Attention:
• Attach an electrostatic discharge (ESD) wrist strap to the front ESD jack, to the rear ESD jack,
or to an unpainted metal surface of your hardware to prevent the electrostatic discharge from
damaging your hardware.
• When you use an ESD wrist strap, follow all electrical safety procedures. An ESD wrist strap
is used for static control. It does not increase or decrease your risk of receiving electric shock
when using or working on electrical equipment.
• If you do not have an ESD wrist strap, just prior to removing the product from ESD packaging and
installing or replacing hardware, touch an unpainted metal surface of the system for a minimum
of 5 seconds.
To remove the shipping bracket and connect the power cords, complete the following steps:

Procedure
1. Remove the six screws that secure the shipping bracket to the chassis.
2. Cable the server.
a. Connect the system power cords from the server and I/O drawers to the PDU with an IEC 320 type
receptacle.
b. Attach the PDU input power cord and plug it into the power source.

Setting up a console
Your console, monitor, or interface options are guided by how you want to use the system.

Determining which console to use


Your console, monitor, or interface choices are guided by whether you create logical partitions, which
operating system you install in your primary partition, and whether you install a Virtual I/O Server (VIOS)
in one of your logical partitions.
Go to the instructions for the applicable console, interface, or terminal in the following table.

Table 5. Available console types

ASCII terminal
• Operating system: AIX, Linux, or VIOS
• Logical partitions: Yes for VIOS; no for AIX and Linux
• Cable required: —
• Cabling setup instructions: Logging on to the ASMI GUI (http://www.ibm.com/support/
knowledgecenter/POWER10/p10eih/p10eih_gui_loggingon.htm)

Hardware Management Console (HMC)
• Operating system: AIX, IBM i, Linux, or VIOS
• Logical partitions: Yes
• Cable required: Ethernet (or cross-over cable)
• Cabling setup instructions: “Cabling the server to the HMC” on page 11

Operations Console
• Operating system: IBM i
• Logical partitions: Yes. Use your Operations Console to manage existing IBM i partitions.
• Cable required: Ethernet cable for LAN connection
• Cabling setup instructions: “Cabling the server and accessing Operations Console” on page 12

Keyboard, video, and mouse (KVM)
• Operating system: Linux or VIOS
• Logical partitions: Yes
• Cable required: Monitor and USB cables equipped with KVM
• Cabling setup instructions: #unique_45
Cabling the server with an ASCII terminal


Access the ASMI by using the eBMC interface.

About this task


Learn more about accessing the ASMI by using the eBMC interface.
To access the ASMI by using the eBMC interface, complete the following steps:

Procedure
1. For more information about launching the ASMI using the eBMC interface, see Launching the host
console (http://www.ibm.com/support/knowledgecenter/POWER10/p10eih/p10eih_gui_sol.htm).
2. Once you have performed the steps to launch the host console, return to these procedures.
3. Continue with “Completing the server setup” on page 27.

Cabling the server to the HMC


The Hardware Management Console (HMC) controls managed systems, including the management of
logical partitions, the creation of a virtual environment, and the use of capacity on demand. Using service
applications, the HMC can also communicate with managed systems to detect, consolidate, and forward
information to IBM service for analysis.

Before you begin


If you have not installed and configured your HMC, do so now. For instructions, see
Installation and configuration tasks (http://www.ibm.com/support/knowledgecenter/POWER10/p10hai/
p10hai_taskflow.htm).
To manage POWER10 processor-based systems, the HMC must be at version 10 release 2.0, or later. To
view the HMC version and release, complete the following steps:
1. In the navigation area, click Updates.
2. In the work area, view and record the information that appears in the HMC Code Level section,
including the HMC version, release, Service Pack, build level, and base versions.
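Once you have recorded the HMC code level, the "version 10 release 2.0 or later" requirement is a simple ordered comparison of the version and release numbers. A sketch follows; the "V10R2..." string format is an assumption for illustration, not an exact HMC output format.

```python
import re

def meets_minimum(level: str, minimum=(10, 2)) -> bool:
    """Check a level string of the (assumed) form 'V<version>R<release>...'
    against a (version, release) minimum."""
    m = re.match(r"V(\d+)R(\d+)", level)
    if m is None:
        raise ValueError(f"unrecognized HMC level: {level}")
    # Tuple comparison: version is compared first, then release.
    return (int(m.group(1)), int(m.group(2))) >= minimum

print(meets_minimum("V10R2.1020"))  # True: meets version 10 release 2.0
print(meets_minimum("V9R2.950"))    # False: an older HMC level
```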
To cable the server to the HMC, complete the following steps:

Procedure
1. If you want to directly attach your HMC to the managed system, connect ETH0 on the HMC to the
HMC0 port on the managed system.
Notes:

• You can also have multiple systems that are attached to a switch that is then connected to the HMC.
For instructions, see HMC network connections (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_netconhmc.htm).
2. If you are connecting a second HMC to your managed server, connect it to the Ethernet port that is
labeled HMC2 on the managed server.
3. Continue with “Routing cables through the cable-management arm and connecting expansion units”
on page 33.

Routing cables through the cable-management arm and connecting expansion units


Use this procedure to route cables through the cable-management arm and to connect expansion units.

About this task


To route cables through the cable-management arm and to connect expansion units, complete the
following steps:

Procedure
1. Route the console cable through the cable management arm.
2. Connect expansion units that were shipped with the system. For more information, see the expansion
unit installation book that was shipped with the system. Complete the tasks associated with
connecting a preinstalled expansion unit or disk drive enclosure, then return to this document to
complete your server setup.
3. Power on the managed system.
4. Continue with “Completing the server setup” on page 33.

Completing the server setup


Learn about the tasks you must complete to set up your managed system.
Select from the following options:
• “Completing the server setup by using an HMC” on page 33
• “Completing the server setup without using an HMC” on page 35

Completing the server setup by using an HMC


Perform these tasks to complete the server setup by using a Hardware Management Console (HMC). You
can also begin to use virtualization to consolidate multiple workloads onto fewer systems to increase
server use, and to reduce cost.

About this task


To complete the server setup by using an HMC, complete the following steps:

Procedure
1. Change the managed system passwords.
For more information about setting passwords for the managed system by using the HMC,
see Setting passwords for the managed system (http://www.ibm.com/support/knowledgecenter/
POWER10/p10hai/p10hai_setpassword_enh.htm).
2. Update the time of day on the managed system by using the Advanced System Management Interface
(ASMI).
To connect to the Advanced System Management Interface, complete the following steps:
a. In the navigation area, click System resources, and then select Systems.

b. In the content area, select one or more managed systems, and then click Connections and
operations > Launch advanced system management (ASMI).
3. Check the firmware level on the managed system and update it as needed.
To view and update the system firmware, complete the following steps:
a. In the navigation area, click System resources, and then select Systems.
b. To view the firmware information of the system, select the server for which you want to view the
firmware information and click Firmware > View current system firmware levels.
c. Compare your installed firmware level with available firmware levels. For more information, see the
Fix Central website (http://www.ibm.com/support/fixcentral).
d. If necessary, update your managed system firmware levels. Click Firmware > Update system
firmware.
e. After you complete this task, click Close.
4. Compare your installed firmware level with available firmware levels. If necessary, update your
firmware levels.
a. Compare your installed firmware level with available firmware levels. For more information, see the
Fix Central website (http://www.ibm.com/support/fixcentral).
b. If necessary, update your managed system firmware levels. In the navigation area, select Actions >
Update Firmware > System Firmware > Update....
5. If your system was preinstalled with an operating system, you must exit MDC (manufacturing default
configuration) mode so that you can open a console and access your operating system.
To exit MDC mode, complete the following steps:
a. Select Resources > All Systems.
b. Select System > Actions > View System Partitions.
c. Under Properties, select General Settings.
d. Select Power On Parameters and set the Partition Start Policy to User-Initiated.
e. Under System Actions, select Operations > Power On.
f. Once the system is in the partition standby state and the default partition is in the Not Activated
state, select the default partition and choose Activate.
For more information about starting a system or logical partition by using the HMC, see Starting a
system or logical partition by using the HMC.
6. To power on a managed system, see Starting a system (http://www.ibm.com/support/
knowledgecenter/POWER10/p10haj/crustartsys.htm).
7. Create partitions using templates.
• If you are creating new partitions, you can use the templates that are on your HMC. For more
information, see Accessing the template library (http://www.ibm.com/support/knowledgecenter/
POWER10/p10efc/p10efc_accessing_template_library.htm).
• If you have existing partitions on another system, you can capture those configurations,
save them to the template library, and deploy the partition template. For more
information, see Partition templates (http://www.ibm.com/support/knowledgecenter/POWER10/
p10efc/p10efc_partition_template_concept.htm).
• If you want to use an existing template from another source, you can import it and use it. For more
information, see Importing a partition template (http://www.ibm.com/support/knowledgecenter/
POWER10/p10efc/p10efc_import_partition_template.htm).
8. Install an operating system and update the operating system.
• Install the AIX operating system. For instructions, see Installing AIX (http://www.ibm.com/support/
knowledgecenter/POWER10/p10hdx/p10hdx_installaix.htm).
• Install the Linux operating system. For instructions, see Installing Linux (http://www.ibm.com/
support/knowledgecenter/POWER10/p10hdx/p10hdx_installlinux.htm).

• Install the VIOS operating system. For instructions, see Installing VIOS (https://www.ibm.com/
support/knowledgecenter/POWER10/p10hb1/p10hb1_vios_install.htm).
• Install the IBM i operating system. For instructions, see Installing the IBM i operating system (http://
www.ibm.com/support/knowledgecenter/POWER10/p10hdx/p10hdx_ibmi.htm).

Completing the server setup without using an HMC


If you do not have a Hardware Management Console (HMC), use this procedure to complete the server
setup.

About this task


To complete the server setup without using a management console, complete the following steps:

Procedure
1. To check the firmware level on the managed system and the time of day, complete the following steps:
a. Access the Advanced System Management Interface (ASMI). For instructions, see
Accessing the ASMI without an HMC (www.ibm.com/support/knowledgecenter/POWER10/p10hby/
connect_asmi.htm).
b. On the ASMI Welcome pane, note the existing level of server firmware in the upper-right corner
under the copyright statement.
c. Update the date and time.
To automatically set the date and time, select NTP. Enter the NTP server address or addresses.
Click Save settings.
To manually set the date and time, select Manual. Enter the date and time. Click Save settings.
2. To start a system, complete the following steps:
a. Open the front door of the managed system.
b. Press the power button on the control panel.
The power-on light begins to flash faster. The system cooling fans are activated after
approximately 30 seconds and begin to accelerate to operating speed. Progress indicators appear
on the control panel display while the system is being started. The power-on light on the control
panel stops flashing and remains on, indicating that the system is powered on.
For instructions, see Starting a system that is not managed by an HMC (www.ibm.com/support/
knowledgecenter/POWER10/p10haj/startsysnohmc.htm).
3. Install an operating system and update the operating system.
• Install the AIX operating system. For instructions, see Installing AIX (http://www.ibm.com/support/
knowledgecenter/POWER10/p10hdx/p10hdx_installaix.htm).
• Install the Linux operating system. For instructions, see Installing Linux (http://www.ibm.com/
support/knowledgecenter/POWER10/p10hdx/p10hdx_installlinux.htm).
• Install the VIOS operating system. For instructions, see Installing VIOS (https://www.ibm.com/
support/knowledgecenter/POWER10/p10hb1/p10hb1_vios_install.htm).
• Install the IBM i operating system. For instructions, see Installing the IBM i operating system (http://
www.ibm.com/support/knowledgecenter/POWER10/p10hdx/p10hdx_ibmi.htm).

Notices

This information was developed for products and services offered in the US.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the
user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice,
and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to actual people or business enterprises is
entirely coincidental.

© Copyright IBM Corp. 2022, 2023 37


If you are viewing this information in softcopy, the photographs and color illustrations may not appear.
The drawings and specifications contained herein shall not be reproduced in whole or in part without the
written permission of IBM.
IBM has prepared this information for use with the specific machines indicated. IBM makes no
representations that it is suitable for any other purpose.
IBM's computer systems contain mechanisms designed to reduce the possibility of undetected data
corruption or loss. This risk, however, cannot be eliminated. Users who experience unplanned outages,
system failures, power fluctuations or outages, or component failures must verify the accuracy of
operations performed and data saved or transmitted by the system at or near the time of the outage or
failure. In addition, users must establish procedures to ensure that there is independent data verification
before relying on such data in sensitive or critical operations. Users should periodically check IBM's
support websites for updated information and fixes applicable to the system and related software.

Homologation statement
This product may not be certified in your country for connection by any means whatsoever to interfaces
of public telecommunications networks. Further certification may be required by law prior to making any
such connection. Contact an IBM representative or reseller for any questions.

Accessibility features for IBM Power servers


Accessibility features assist users who have a disability, such as restricted mobility or limited vision, to
use information technology content successfully.

Overview
The IBM Power servers include the following major accessibility features:
• Keyboard-only operation
• Operations that use a screen reader
The IBM Power servers use the latest W3C Standard, WAI-ARIA 1.0 (www.w3.org/TR/wai-aria/),
to ensure compliance with ICT Accessibility 508 Standards and 255 Guidelines (https://www.access-
board.gov/ict/) and Web Content Accessibility Guidelines (WCAG) 2.0 (www.w3.org/TR/WCAG20/). To
take advantage of accessibility features, use the latest release of your screen reader and the latest web
browser that is supported by the IBM Power servers.
The IBM Power servers online product documentation in IBM Documentation is enabled for accessibility.
For more information about IBM's commitment to accessibility, see the IBM accessibility website at IBM
Accessibility (https://www.ibm.com/able/).

Keyboard navigation
This product uses standard navigation keys.

Interface information
The IBM Power servers user interfaces do not have content that flashes 2 - 55 times per second.
The IBM Power servers web user interface relies on cascading style sheets to render content properly and
to provide a usable experience. The application provides an equivalent way for low-vision users to use
system display settings, including high-contrast mode. You can control font size by using the device or
web browser settings.
The IBM Power servers web user interface includes WAI-ARIA navigational landmarks that you can use to
quickly navigate to functional areas in the application.

38 Power Systems: Installing the IBM Power S1024 (9105-42A), IBM Power L1024 (9786-42H), and IBM
Power S1014 (9105-41B)
Vendor software
The IBM Power servers include certain vendor software that is not covered under the IBM license
agreement. IBM makes no representation about the accessibility features of these products. Contact
the vendor for accessibility information about its products.

Related accessibility information


In addition to standard IBM help desk and support websites, IBM has a TTY telephone service for use by
deaf or hard of hearing customers to access sales and support services:
TTY service
800-IBM-3383 (800-426-3383)
(within North America)

For more information about the commitment that IBM has to accessibility, see IBM Accessibility
(www.ibm.com/able).

Privacy policy considerations


IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies
or other technologies to collect product usage information, to help improve the end user experience,
to tailor interactions with the end user, or for other purposes. In many cases no personally identifiable
information is collected by the Software Offerings. Some of our Software Offerings can help enable you
to collect personally identifiable information. If this Software Offering uses cookies to collect personally
identifiable information, specific information about this offering’s use of cookies is set forth below.
This Software Offering does not use cookies or other technologies to collect personally identifiable
information.
If the configurations deployed for this Software Offering provide you as the customer the ability to collect
personally identifiable information from end users via cookies and other technologies, you should seek
your own legal advice about any laws applicable to such data collection, including any requirements for
notice and consent.
For more information about the use of various technologies, including cookies, for these purposes,
see IBM’s Privacy Policy at http://www.ibm.com/privacy, IBM’s Online Privacy Statement at
http://www.ibm.com/privacy/details (the section entitled “Cookies, Web Beacons and Other Technologies”),
and the “IBM Software Products and Software-as-a-Service Privacy Statement” at
http://www.ibm.com/software/info/product-privacy.

Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at
Copyright and trademark information.
The registered trademark Linux is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Windows is a trademark of Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.

Electronic emission notices


When attaching a monitor to the equipment, you must use the designated monitor cable and any
interference suppression devices supplied with the monitor.

Class A Notices
The following Class A statements apply to the IBM servers that contain the Power10 processor and its
features unless designated as electromagnetic compatibility (EMC) Class B in the feature information.
When attaching a monitor to the equipment, you must use the designated monitor cable and any
interference suppression devices supplied with the monitor.
The following Class A statements apply to the servers.

Canada Notice
CAN ICES-3 (A)/NMB-3(A)

European Community and Morocco Notice


This product is in conformity with the protection requirements of Directive 2014/30/EU of the European
Parliament and of the Council on the harmonization of the laws of the Member States relating to
electromagnetic compatibility. IBM cannot accept responsibility for any failure to satisfy the protection
requirements resulting from a non-recommended modification of the product, including the fitting of
non-IBM option cards.
This product may cause interference if used in residential areas. Such use must be avoided unless the
user takes special measures to reduce electromagnetic emissions to prevent interference to the reception
of radio and television broadcasts.
Warning: This equipment is compliant with Class A of CISPR 32. In a residential environment this
equipment may cause radio interference.

Germany Notice
Deutschsprachiger EU Hinweis: Hinweis für Geräte der Klasse A EU-Richtlinie zur
Elektromagnetischen Verträglichkeit
Dieses Produkt entspricht den Schutzanforderungen der EU-Richtlinie 2014/30/EU zur Angleichung der
Rechtsvorschriften über die elektromagnetische Verträglichkeit in den EU-Mitgliedsstaaten und hält die
Grenzwerte der EN 55022 / EN 55032 Klasse A ein.
Um dieses sicherzustellen, sind die Geräte wie in den Handbüchern beschrieben zu installieren und
zu betreiben. Des Weiteren dürfen auch nur von der IBM empfohlene Kabel angeschlossen werden.
IBM übernimmt keine Verantwortung für die Einhaltung der Schutzanforderungen, wenn das Produkt
ohne Zustimmung von IBM verändert bzw. wenn Erweiterungskomponenten von Fremdherstellern ohne
Empfehlung von IBM gesteckt/eingebaut werden.
EN 55032 Klasse A Geräte müssen mit folgendem Warnhinweis versehen werden:
"Warnung: Dieses ist eine Einrichtung der Klasse A. Diese Einrichtung kann im Wohnbereich Funk-
Störungen verursachen; in diesem Fall kann vom Betreiber verlangt werden, angemessene Maßnahmen
zu ergreifen und dafür aufzukommen."
Deutschland: Einhaltung des Gesetzes über die elektromagnetische Verträglichkeit von Geräten
Dieses Produkt entspricht dem “Gesetz über die elektromagnetische Verträglichkeit von Geräten (EMVG)“.
Dies ist die Umsetzung der EU-Richtlinie 2014/30/EU in der Bundesrepublik Deutschland.
Zulassungsbescheinigung laut dem Deutschen Gesetz über die elektromagnetische Verträglichkeit
von Geräten (EMVG) (bzw. der EMC Richtlinie 2014/30/EU) für Geräte der Klasse A
Dieses Gerät ist berechtigt, in Übereinstimmung mit dem Deutschen EMVG das EG-Konformitätszeichen -
CE - zu führen.

Verantwortlich für die Einhaltung der EMV Vorschriften ist der Hersteller:
International Business Machines Corp.
New Orchard Road

Armonk, New York 10504
Tel: 914-499-1900

Der verantwortliche Ansprechpartner des Herstellers in der EU ist:


IBM Deutschland GmbH
Technical Relations Europe, Abteilung M456
IBM-Allee 1, 71139 Ehningen, Germany
Tel: +49 (0) 800 225 5426
email: HalloIBM@de.ibm.com

Generelle Informationen:
Das Gerät erfüllt die Schutzanforderungen nach EN 55024 und EN 55022 / EN 55032 Klasse A.

Japan Electronics and Information Technology Industries Association (JEITA)


Notice

This statement applies to products less than or equal to 20 A per phase.

This statement applies to products greater than 20 A, single phase.

This statement applies to products greater than 20 A per phase, three-phase.

Japan Voluntary Control Council for Interference (VCCI) Notice

Korea Notice

People's Republic of China Notice

Russia Notice

Taiwan Notice
CNS 13438:

CNS 15936:

IBM Taiwan Contact Information:

United States Federal Communications Commission (FCC) Notice


This equipment has been tested and found to comply with the limits for a Class A digital device,
pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against
harmful interference when the equipment is operated in a commercial environment. This equipment
generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance
with the instruction manual, may cause harmful interference to radio communications. Operation of this
equipment in a residential area is likely to cause harmful interference, in which case the user will be
required to correct the interference at his own expense.
Properly shielded and grounded cables and connectors must be used in order to meet FCC emission
limits. Proper cables and connectors are available from IBM-authorized dealers. IBM is not responsible
for any radio or television interference caused by using other than recommended cables and connectors

or by unauthorized changes or modifications to this equipment. Unauthorized changes or modifications
could void the user's authority to operate the equipment.

This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions:
(1) this device may not cause harmful interference, and (2) this device must accept any interference
received, including interference that may cause undesired operation.

Responsible Party:
International Business Machines Corporation
New Orchard Road
Armonk, NY 10504
Contact for FCC compliance information only: fccinfo@us.ibm.com

United Kingdom Notice


This product may cause interference if used in residential areas. Such use must be avoided unless the
user takes special measures to reduce electromagnetic emissions to prevent interference to the reception
of radio and television broadcasts.

Class B Notices
The following Class B statements apply to features designated as electromagnetic compatibility (EMC)
Class B in the feature installation information.
When attaching a monitor to the equipment, you must use the designated monitor cable and any
interference suppression devices supplied with the monitor.

Canada Notice
CAN ICES-3 (B)/NMB-3(B)

European Community and Morocco Notice


This product is in conformity with the protection requirements of Directive 2014/30/EU of the European
Parliament and of the Council on the harmonization of the laws of the Member States relating to
electromagnetic compatibility. IBM cannot accept responsibility for any failure to satisfy the protection
requirements resulting from a non-recommended modification of the product, including the fitting of
non-IBM option cards.

German Notice
Deutschsprachiger EU Hinweis: Hinweis für Geräte der Klasse B EU-Richtlinie zur
Elektromagnetischen Verträglichkeit
Dieses Produkt entspricht den Schutzanforderungen der EU-Richtlinie 2014/30/EU zur Angleichung der
Rechtsvorschriften über die elektromagnetische Verträglichkeit in den EU-Mitgliedsstaaten und hält die
Grenzwerte der EN 55022/ EN 55032 Klasse B ein.
Um dieses sicherzustellen, sind die Geräte wie in den Handbüchern beschrieben zu installieren und
zu betreiben. Des Weiteren dürfen auch nur von der IBM empfohlene Kabel angeschlossen werden.
IBM übernimmt keine Verantwortung für die Einhaltung der Schutzanforderungen, wenn das Produkt
ohne Zustimmung von IBM verändert bzw. wenn Erweiterungskomponenten von Fremdherstellern ohne
Empfehlung von IBM gesteckt/eingebaut werden.
Deutschland: Einhaltung des Gesetzes über die elektromagnetische Verträglichkeit von Geräten
Dieses Produkt entspricht dem “Gesetz über die elektromagnetische Verträglichkeit von Geräten (EMVG)“.
Dies ist die Umsetzung der EU-Richtlinie 2014/30/EU in der Bundesrepublik Deutschland.
Zulassungsbescheinigung laut dem Deutschen Gesetz über die elektromagnetische Verträglichkeit
von Geräten (EMVG) (bzw. der EMC Richtlinie 2014/30/EU) für Geräte der Klasse B

Dieses Gerät ist berechtigt, in Übereinstimmung mit dem Deutschen EMVG das EG-Konformitätszeichen -
CE - zu führen.

Verantwortlich für die Einhaltung der EMV Vorschriften ist der Hersteller:
International Business Machines Corp.
New Orchard Road
Armonk, New York 10504
Tel: 914-499-1900

Der verantwortliche Ansprechpartner des Herstellers in der EU ist:


IBM Deutschland GmbH
Technical Relations Europe, Abteilung M456
IBM-Allee 1, 71139 Ehningen, Germany
Tel: +49 (0) 800 225 5426
email: HalloIBM@de.ibm.com

Generelle Informationen:
Das Gerät erfüllt die Schutzanforderungen nach EN 55024 und EN 55032 Klasse B.

Japan Electronics and Information Technology Industries Association (JEITA)


Notice

This statement applies to products less than or equal to 20 A per phase.

This statement applies to products greater than 20 A, single phase.

This statement applies to products greater than 20 A per phase, three-phase.

Japan Voluntary Control Council for Interference (VCCI) Notice

Taiwan Notice

United States Federal Communications Commission (FCC) Notice


This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant
to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful
interference in a residential installation. This equipment generates, uses, and can radiate radio frequency
energy and, if not installed and used in accordance with the instructions, may cause harmful interference
to radio communications. However, there is no guarantee that interference will not occur in a particular
installation. If this equipment does cause harmful interference to radio or television reception, which
can be determined by turning the equipment off and on, the user is encouraged to try to correct the
interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
• Consult an IBM-authorized dealer or service representative for help.
Properly shielded and grounded cables and connectors must be used in order to meet FCC emission
limits. Proper cables and connectors are available from IBM-authorized dealers. IBM is not responsible
for any radio or television interference caused by using other than recommended cables and connectors
or by unauthorized changes or modifications to this equipment. Unauthorized changes or modifications
could void the user's authority to operate the equipment.
This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions:
(1) this device may not cause harmful interference, and (2) this device must accept any interference
received, including interference that may cause undesired operation.

Responsible Party:
International Business Machines Corporation
New Orchard Road
Armonk, New York 10504
Contact for FCC compliance information only: fccinfo@us.ibm.com

Terms and conditions


Permissions for the use of these publications are granted subject to the following terms and conditions.
Applicability: These terms and conditions are in addition to any terms of use for the IBM website.

Personal Use: You may reproduce these publications for your personal, noncommercial use provided that
all proprietary notices are preserved. You may not distribute, display or make derivative works of these
publications, or any portion thereof, without the express consent of IBM.
Commercial Use: You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make derivative works of
these publications, or reproduce, distribute or display these publications or any portion thereof outside
your enterprise, without the express consent of IBM.
Rights: Except as expressly granted in this permission, no other permissions, licenses or rights are
granted, either express or implied, to the publications or any information, data, software or other
intellectual property contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use
of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not
being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS
ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT,
AND FITNESS FOR A PARTICULAR PURPOSE.

IBM®

Part Number: 03KG460

(1P) P/N: 03KG460

GI11-9900-02
Lenovo ThinkSystem SR860 V3 Server
Product Guide

The Lenovo ThinkSystem SR860 V3 is a 4-socket server that features a 4U rack design with support for
high-performance GPUs. The server offers technology advances, including fourth-generation Intel Xeon
Scalable processors, and scale-up capacity of up to 16TB of system memory, up to 18x PCIe slots, and up
to 48x 2.5-inch drive bays.
Suggested uses: Mission critical workloads such as SAP HANA in-memory computing, transactional
databases, deep learning, analytics, big data, and virtual machine density.

Figure 1. Lenovo ThinkSystem SR860 V3


Did you know?


The Lenovo ThinkSystem SR860 V3 provides the advanced capabilities of four of the new 4th Gen Intel Xeon
Scalable processors plus support for four double-wide GPUs. This combination gives you significant
processing power in one server.
The SR860 V3 has space for 48x 2.5-inch drive bays, 24 of which can be configured as AnyBay drives -
supporting SAS, SATA or NVMe drives. NVMe drives are high-speed, low-latency storage, ideal for storage
tiering.



Key features
The flexible ThinkSystem SR860 V3 server supports fourth-generation Intel Xeon Scalable Gold or Platinum
processors and can scale from two to four processors. Built for standard workloads like general business
applications and server consolidation, it can also accommodate high-growth areas such as databases and
virtualization. The ThinkSystem SR860 V3’s agile design permits rapid upgrades for processors and memory,
and its large, flexible storage capacity helps to keep pace with data growth.
With the capability to support up to 64 DIMMs, four sockets, up to 48 drives for internal storage, support for up
to eight 75W single-wide GPUs or four high-performance 350W double-wide GPUs, and two dedicated OCP
3.0 slots for 1, 10, 25 or 100 GbE networking, the SR860 V3 provides unmatched features and capabilities in
a 4U rack-mount design.
Scalability and performance
The SR860 V3 offers numerous features to boost performance, improve scalability, and reduce costs:
• Supports two or four 4th Gen Intel Xeon Scalable processors, allowing you to start with two
processors and then upgrade to four when you need it.
• Supports Gold and Platinum processors in the Intel Xeon Scalable processor family:
  - Up to 60 cores
  - Core speeds of up to 3.7 GHz
  - TDP ratings of up to 350W
• Up to four processors, 240 cores, and 480 threads maximize the concurrent execution of multithreaded
applications.
• Support for embedded Intel accelerators:
  - Intel QuickAssist Technology (QAT)
  - Intel Dynamic Load Balancer (DLB)
  - Intel In-Memory Analytics Accelerator (IAA)
  - Intel Data Streaming Accelerator (DSA)
• Enhanced inter-processor communication, with three UPI connections between adjacent processors,
increases CPU I/O throughput.
• Support for up to 64 TruDDR5 memory DIMMs operating at up to 4800 MHz provides the fastest
available memory subsystem and a memory capacity of up to 16 TB with 64x 256 GB 3DS RDIMMs.
• Supports configurations of 2 DIMMs per channel operating at 4400 MHz, the rated speed of the
memory DIMMs.
• The use of solid-state drives (SSDs) instead of, or along with, traditional spinning drives (HDDs) can
improve I/O performance. An SSD can support up to 100 times more I/O operations per second (IOPS)
than a typical HDD.
• Up to 48x 2.5-inch drive bays -- supporting combinations of SAS or SATA HDDs, SAS or SATA SSDs,
and NVMe PCIe Gen4 or Gen5 SSDs -- provide a flexible and scalable all-in-one platform to meet your
increasing demands. Up to 24x NVMe drives are supported, maximizing drive I/O performance in terms
of throughput, bandwidth, and latency.
• Two dedicated industry-standard OCP 3.0 small form factor (SFF) slots, each with a PCIe 5.0 x16
interface, support a variety of Ethernet network adapters. A simple-swap mechanism with thumbscrews
and a pull-tab enables tool-less installation and removal of the adapter, and shared BMC network
sideband connectivity enables out-of-band systems management.
• Up to 18 PCIe slots, in addition to the two OCP 3.0 Ethernet slots, maximize I/O capabilities.
• The server is Compute Express Link (CXL) v1.1 ready. With CXL 1.1 for next-generation workloads,
you can reduce compute latency in the data center and lower TCO. CXL is a protocol that runs across
the standard PCIe physical layer and can support both standard PCIe devices and CXL devices
on the same link.



• High-speed RAID controllers from Lenovo and Broadcom provide 12 Gb SAS connectivity to the drive
backplanes. A variety of RAID adapters are available, with cache up to 8 GB and support for 32 drives
on a single controller.
• Support for four high-performance double-wide GPUs or eight single-wide GPUs adds processing
power to the server.
• Supports up to two externally accessible 7mm hot-swap drives with VROC RAID functionality, in
addition to the 48 front drive bays. These 7mm drives are ideal for operating system boot functions.
• As an alternative to the 7mm drives, the server supports an M.2 adapter (non-RAID) for convenient
operating system boot functions. Available M.2 adapters support VROC RAID for boot drive
performance and reliability.
• Supports Intel VROC (Virtual RAID on CPU), which enables basic RAID functionality on the onboard
NVMe ports of the server with no additional adapter needed. This feature enables RAID on NVMe
drives (including 7mm and M.2 drives) without a separate RAID adapter.
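The memory figures above (up to 64 TruDDR5 DIMMs at up to 4800 MHz across 8 channels per processor) imply a theoretical peak bandwidth that is easy to estimate. The sketch below is back-of-the-envelope arithmetic assuming a 64-bit (8-byte) data path per DDR5 channel; it is not a Lenovo-published benchmark:

```python
# Estimate theoretical peak memory bandwidth per socket.
# Assumption: each DDR5 channel has a 64-bit (8-byte) data bus,
# so peak bandwidth = transfer rate x 8 bytes x channel count.

def peak_memory_gbs(mt_per_s: int, channels: int) -> float:
    """Peak bandwidth in decimal GB/s for one socket."""
    return mt_per_s * 1_000_000 * channels * 8 / 1e9

per_socket = peak_memory_gbs(4800, channels=8)
print(f"DDR5-4800 x 8 channels: {per_socket:.1f} GB/s per socket")  # 307.2 GB/s
print(f"Four sockets: {4 * per_socket:.1f} GB/s aggregate")
```

At 2 DIMMs per channel the DIMMs run at 4400 MHz, which scales the estimate down proportionally (about 281.6 GB/s per socket).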
Availability and serviceability
The SR860 V3 provides many features to simplify serviceability and increase system uptime:
• Designed to run 24 hours a day, 7 days a week.
• The server offers Single Device Data Correction (SDDC, also known as Chipkill), Adaptive
Double-Device Data Correction (ADDDC, also known as Redundant Bit Steering or RBS), and memory
mirroring for redundancy in the event of a non-correctable memory failure.
• The server offers hot-swap drives, supporting RAID redundancy for data protection and greater system
uptime.
• Support for VROC enables RAID-1 on M.2 or 7mm drives for enhanced data protection of boot drives.
• The server has up to four hot-swap redundant power supplies and 12x N+1 redundant fans to provide
availability for business-critical applications.
• Power-source-independent light path diagnostics uses LEDs to lead the technician to failed (or
failing) components, which simplifies servicing, speeds up problem resolution, and helps improve
system availability.
• Proactive Platform Alerts (including PFA and SMART alerts) cover processors, voltage regulators,
memory, internal storage (SAS/SATA HDDs and SSDs, NVMe SSDs, M.2 storage, flash storage adapters),
fans, power supplies, RAID controllers, and server ambient and subcomponent temperatures. Alerts can
be surfaced through the XClarity Controller to managers such as Lenovo XClarity Administrator, VMware
vCenter, and Microsoft System Center. These proactive alerts let you take appropriate actions in
advance of possible failure, thereby increasing server uptime and application availability.
• Solid-state drives (SSDs) offer more reliability than traditional mechanical HDDs for greater uptime.
• The built-in XClarity Controller continuously monitors system parameters, triggers alerts, and performs
recovery actions in case of failures to minimize downtime.
• Built-in diagnostics in UEFI, using Lenovo XClarity Provisioning Manager, speed up troubleshooting
tasks to reduce service time.
• Lenovo XClarity Provisioning Manager collects and saves service data to a USB key drive or a remote
CIFS share folder for troubleshooting, reducing service time.
• Auto restart in the event of a momentary loss of AC power (based on the power policy setting in the
XClarity Controller service processor).
• A diagnostics port on the front of the server allows you to attach an external diagnostics handset for
enhanced systems management capabilities.
• Support for the XClarity Administrator Mobile app, running on a supported smartphone or tablet
connected to the server through the front USB 2.0 port, enables additional local systems management
functions.



• 3-year or 1-year customer-replaceable unit and onsite limited warranty, 9 x 5 next business day.
Optional service upgrades are available.
Manageability and security
Powerful systems management features simplify local and remote management of the SR860 V3:
• Lenovo XClarity Controller 2 (XCC2) monitors server availability and performs remote management.
XCC2 Platinum is standard, which enables remote KVM, the mounting of remote media files (ISO and
IMG image files), boot capture, and power capping.
• Lenovo XClarity Administrator offers comprehensive hardware management tools that help to increase
uptime, reduce costs, and improve productivity through advanced server management capabilities.
• UEFI-based Lenovo XClarity Provisioning Manager, accessible from F1 during boot, provides system
inventory information, graphical UEFI Setup, a platform update function, a RAID Setup wizard, an
operating system installation function, and diagnostic functions.
• Support for Lenovo XClarity Energy Manager, which captures real-time power and temperature data
from the server and provides automated controls to lower energy costs.
• A Root of Trust (RoT) module includes Platform Firmware Resiliency (PFR) and Trusted Platform Module
(TPM) 2.0, which further enhances key platform subsystem protections by detecting unauthorized
firmware updates, recovering corrupted images to a known-safe image, and monitoring firmware to
ensure it has not been compromised. It secures and authenticates the system to prevent unauthorized
access.
• Integrated Trusted Platform Module (TPM) 2.0 support enables advanced cryptographic methods, such
as digital signatures and remote attestation.
• Supports Secure Boot to ensure that only a digitally signed operating system can be used. Supported
with HDDs and SSDs, as well as 7mm or M.2 drives.
• Industry-standard Advanced Encryption Standard (AES) NI support for faster, stronger encryption.
• Intel Execute Disable Bit functionality can prevent certain classes of malicious buffer overflow attacks
when combined with a supported operating system.
• Intel Trusted Execution Technology provides enhanced security through hardware-based resistance to
malicious software attacks, allowing an application to run in its own isolated space, protected from all
other software running on the system.
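Since XCC2 exposes the DMTF Redfish REST API, the inventory data described above can also be consumed programmatically. The sketch below only parses a hand-written stand-in for a Redfish ComputerSystem payload — the field names follow the Redfish schema, but the values (and the exact properties a given XCC2 reports) are assumptions, not captured output:

```python
import json

# Hypothetical, hand-written stand-in for the JSON a Redfish-capable BMC
# such as XCC2 would serve at /redfish/v1/Systems/1 over authenticated HTTPS.
SAMPLE_PAYLOAD = """
{
  "Model": "ThinkSystem SR860 V3",
  "PowerState": "On",
  "ProcessorSummary": {"Count": 4},
  "MemorySummary": {"TotalSystemMemoryGiB": 2048}
}
"""

def summarize_system(payload: str) -> str:
    """Condense a few commonly used ComputerSystem fields into one line."""
    system = json.loads(payload)
    return (f"{system['Model']}: {system['ProcessorSummary']['Count']} CPUs, "
            f"{system['MemorySummary']['TotalSystemMemoryGiB']} GiB, "
            f"power {system['PowerState']}")

print(summarize_system(SAMPLE_PAYLOAD))
```

In practice the payload would come from an HTTPS GET against the XCC's management address rather than a string literal.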
Energy efficiency
The SR860 V3 offers the following energy-efficiency features to save energy, reduce operational costs, and
increase energy availability:
• Energy-efficient planar components help lower operational costs.
• High-efficiency power supplies with 80 PLUS Platinum and Titanium certifications.
• Intel Intelligent Power Capability turns individual processor elements on and off as needed to reduce
power draw.
• Low-voltage 1.1 V DDR5 memory offers energy savings compared to 1.2 V DDR4 DIMMs.
• Solid-state drives (SSDs) consume as much as 80% less power than traditional spinning 2.5-inch
HDDs.
• The server uses hexagonal ventilation holes, which can be grouped more densely than round holes,
providing more efficient airflow through the system and thus keeping your system cooler.
• Optional Lenovo XClarity Energy Manager provides advanced data center power notification, analysis,
and policy-based management to help achieve lower heat output and reduced cooling needs.



Comparing the SR860 V3 to the SR860 V2
The ThinkSystem SR860 V3 improves on the previous generation SR860 V2, as summarized in the following
table.

Table 1. Comparing the SR860 V3 to the SR860 V2


Feature SR860 V2 SR860 V3 Benefits
Processor 4x 3rd Gen Intel Xeon 4x 4th Gen Intel Xeon Increased performance by 129%
Scalable Processors Scalable Processors (based on preliminary data from
"Cooper Lake" "Sapphire Rapids" Intel)
"Cedar Island" platform "Eagle Stream" platform Significant increase in cores per
Up to 28 cores Up to 60 cores processor
TDP ratings up to 250W TDP ratings up to 350W Increased performance
48x PCIe 3.0 lanes per 80x PCIe 5.0 lanes per Consolidation of more apps on
processor processor same number of servers,
reducing costs
New PCIe 5.0 support means
higher performance networking
and NVMe storage

GPU Supports up to 8x single- Supports up to 8x single- High performance GPU support


wide GPUs or up to 4x wide GPUs or up to 4x
double-wide GPUs double-wide GPUs

Memory DDR4 memory operating DDR5 memory operating Increased memory capacity
up to 3200 MHz up to 4800 MHz New DDR5 memory offers
6 channels per CPU 8 channels per CPU significant performance
48 DIMMs (12 per 64 DIMMs (16 per improvements over DDR4
processor), 2 DIMMs per processor), 2 DIMMs per More memory channels means
channel channel greater memory bandwidth
Supports RDIMMs and Supports RDIMMs, 3DS Support for lower-cost 9x4
3DS RDIMMs RDIMMs and 9x4 DIMMs
Up to 12TB of system RDIMMs
memory Up to 16TB of system
Intel Optane Persistent memory
Memory 200 Series No support for persistent
memory

Internal Up to 48x 2.5-inch hot- Up to 48x 2.5-inch hot- 2X performance improvement


storage swap drives swap drives with PCIe Gen5 NVMe
Supports SATA, AnyBay Supports SATA or 24 direct connections means no
or NVMe backplanes AnyBay backplanes NVMe retimer or switch
Up to 24x NVMe drives Up to 24x NVMe drives adapters needed
(PCIe Gen 3) (PCIe Gen 4/Gen 5) x4 M.2 NVMe SSDs for faster
16x direct connections 24x direct connections boot performance
2x 7mm SATA/NVMe in 2x 7mm SATA/NVMe in
dedicated bay (HW RAID) PCIe slot (VROC RAID)
Internal 2x M.2 drives Internal 2x M.2 drives
(HW RAID) (VROC RAID)
7mm and M.2 are M.2 adapter with NVMe
mutually exclusive x4 interface
7mm and M.2 are
mutually exclusive

Lenovo ThinkSystem SR860 V3 Server 5


Feature SR860 V2 SR860 V3 Benefits
RAID 8-, 16- and 32-port RAID 8-, 16- and 32-port RAID Consistent RAID/HBA support
adapters with up to 8GB adapters with up to 8GB Flexible config solution
flash flash PCIe Gen 5 allows for greater
Support for Lenovo and Support for Lenovo and storage performance
Broadcom adapters Broadcom adapters
Storage HBAs available Storage HBAs available
VROC for NVMe VROC for NVMe
Onboard SATA with SW
RAID

Networking 1x OCP 3.0 slot with PCIe 2x OCP 3.0 slots with Improved performance with
Gen 3 x16 interface PCIe Gen 5 x16 PCIe Gen 5
Additional PCIe adapters interfaces Support for two OCP adapters in
supported Additional PCIe adapters dedicated slots
1GbE dedicated supported
Management port 1GbE dedicated
Management port

PCIe Supports PCIe 3.0 Supports PCIe 5.0 PCIe Gen 5 allows for greater
Up to 14x slots (all Gen3) Up to 18x slots (all Gen4) I/O performance
3 onboard slots; others Up to 16x slots (mix of Additional 4x PCIe slots
via riser cards Gen4 & Gen5) Additional OCP slot
1x OCP slot (PCIe Gen3) Entry configuration of 4x
Gen4 slots
All slots via riser cards
2x OCP slots (PCIe
Gen5)

Management and security
SR860 V2: XClarity Controller; Support for full XClarity toolset including XClarity Administrator; Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT); Tamper Switch security solution (intrusion switch); Integrated diagnostics panel with LCD display
SR860 V3: Integrated XClarity Controller 2; Support for full XClarity toolset including XClarity Administrator; Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT); Tamper Switch security solution (intrusion switch); Supports optional external diagnostics handset
Benefits: New XCC2 offers improved management capabilities; Same system management tool with previous generation; Silicon-level security solution

Power
SR860 V2: Choice of 750W-2600W AC hot-swap power supplies; Available in Titanium and Platinum efficiency levels; 240V HVDC support for PRC customers; Active-Standby mode
SR860 V3: Choice of 1100-2600W AC hot-swap power supplies; Available in Titanium and Platinum efficiency levels; 240V HVDC support for PRC customers; CRPS power supply support for PRC customers; -48V or 336V DC power supply for PRC customers; Active-Standby mode
Benefits: Multiple PSU offerings to suit the configuration selected; New ErP Lot 9-compliant offerings; Support for CRPS for PRC



Components and connectors
The following figure shows the front of the server.

Figure 2. Front view of the ThinkSystem SR860 V3


The following figure shows the rear of the server.

Figure 3. Rear view of the ThinkSystem SR860 V3


The following figure shows the locations of key components inside the server.



Figure 4. Internal view of the ThinkSystem SR860 V3
The following figure shows the location of the risers, M.2 adapter and RAID adapter flash modules
(supercaps).



Figure 5. Internal view of the ThinkSystem SR860 V3

Standard specifications
The following table lists the standard specifications.

Table 2. Standard specifications


Components Specification
Machine types: 7D93 - 3-year warranty; 7D94 - 1-year warranty; 7D95 - SAP HANA configurations with 3-year warranty
Form factor 4U rack
Processor Two or four 4th Gen Intel Xeon Scalable processors, either Gold or Platinum level processors
(formerly codename "Sapphire Rapids" or SPR). Supports processors up to 60 cores, core speeds up
to 3.7 GHz, and TDP ratings up to 350W. Three Intel Ultra Path Interconnect (UPI) links at 16 GT/s
each. Four processors are connected in a mesh topology. Support for up to four Intel embedded
accelerators: QAT, DLB, IAA, and DSA.
Chipset Intel C741 "Emmitsburg" chipset, part of the platform codenamed "Eagle Stream" (EGS)
Memory Up to 64 DIMM slots (16 DIMMs per processor). Each processor has 8 memory channels, with 2
DIMMs per channel. Lenovo TruDDR5 RDIMMs and 3DS RDIMMs are supported. DIMMs operate at
up to 4800 MHz at 1 DPC and 4400 MHz at 2 DPC.
Persistent memory: No support.
Memory maximums: Up to 16TB with 64x 256GB 3DS RDIMMs and four processors (4.0TB per processor).



Memory protection: ECC, SDDC (for 10x4-based memory DIMMs), ADDDC (for 10x4-based memory DIMMs), memory mirroring.
Disk drive bays: Up to 48x 2.5-inch hot-swap drive bays:
Up to 48x SAS/SATA drive bays
Up to 24x SAS/SATA + 24x AnyBay drive bays (support SAS, SATA, Gen4 NVMe, or Gen5 NVMe drives)

Optional two 7mm hot-swap SSD drive bays at the rear of the server, either SATA or NVMe, for OS
boot or storage

Maximum internal storage:
1474.56TB using 48x 30.72TB 2.5-inch SAS/SATA SSDs
1474.56TB using 24x 61.44TB 2.5-inch NVMe SSDs
115.2TB using 48x 2.4TB 2.5-inch HDDs

Mix of NVMe/SSDs/HDDs supported.
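The maximum-capacity figures above are straightforward products of drive count and per-drive capacity. A quick sketch (illustrative only, not from the guide) that reproduces them:

```python
# Reproduce the "maximum internal storage" figures: drive count x capacity.
# Capacities are decimal TB, as marketed.
def max_storage_tb(drive_count: int, capacity_tb: float) -> float:
    """Total raw capacity in TB for a fully populated configuration."""
    return round(drive_count * capacity_tb, 2)

print(max_storage_tb(48, 30.72))  # 48x 30.72TB SAS/SATA SSDs -> 1474.56
print(max_storage_tb(24, 61.44))  # 24x 61.44TB NVMe SSDs     -> 1474.56
print(max_storage_tb(48, 2.4))    # 48x 2.4TB HDDs            -> 115.2
```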

Storage controller:
Up to 24x onboard PCIe Gen 5 or Gen 4 NVMe ports (RAID functions provided using Intel VROC)
12 Gb SAS/SATA RAID adapters
12 Gb SAS/SATA HBA (non-RAID)

Optical drive bays: No internal optical drive
Tape drive bays: No internal backup drive
Network interfaces: Two dedicated OCP 3.0 SFF slots with PCIe 5.0 x16 host interface. Supports a variety of 2-port and 4-port adapters with network connectivity up to 100 GbE. One port can optionally be shared with the XClarity Controller (XCC) management processor for Wake-on-LAN and NC-SI support.
PCI expansion slots: Up to 18 PCIe slots (Gen4 only or Gen5+Gen4), depending on the configuration, plus two Gen5 OCP 3.0 slots. Slot combinations are based on the risers selected:
18x Gen4 PCIe slots
12x Gen5 PCIe slots + 4x Gen4 PCIe slots
4x Gen4 PCIe slots (entry configuration)

See the I/O expansion section for details.

GPU support Supports up to 8x single-wide GPUs or up to 4x double-wide GPUs


Ports Front: One VGA video port. 1x USB 3.2 G1 (5 Gb/s) port, 1x USB 2.0 port. The USB 2.0 port can be
configured to support local systems management by using the XClarity Administrator mobile app on a
mobile device connected via a USB cable.

Rear: Three USB 3.2 G1 (5 Gb/s) ports, one VGA video port, one DB-9 serial port, and one RJ-45
XClarity Controller (XCC) systems management port. The serial port can be shared with the XCC for
serial redirection functions.

Internal: Optional M.2 adapter in dedicated slot supporting one or two M.2 drives (for OS boot support,
including hypervisor support).

Cooling 12x N+1 redundant hot-swap 60 mm fans (all 12 standard). One additional fan integrated in each of
the four power supplies.



Power supply Up to four hot-swap redundant AC power supplies (80 PLUS Platinum or Titanium certification):
1100W to 2600W options, supporting 220 V AC. 1100 W options also support 110V input supply. For
China only, supports 1300 W and 2600 W 240V AC/DC Platinum CRPS, or 1600 W 336V DC or -48V
DC CRPS. Power supplies can be configured as N+N redundant.
Video Embedded video graphics with 16 MB memory with 2D hardware accelerator, integrated into the
XClarity Controller. Maximum resolution is 1920x1200 32bpp at 60Hz.
Hot-swap Drives, fans and power supplies.
parts
Systems Operator panel with status LEDs. Optional External Diagnostics Handset with LCD display. Models
management with 16x 2.5-inch front drive bays can optionally support an Integrated Diagnostics Panel. XClarity
Controller 2 (XCC2) embedded management based on the ASPEED AST2600 baseboard
management controller (BMC). Dedicated rear Ethernet port for XCC2 remote access for
management. XClarity Administrator for centralized infrastructure management, XClarity Integrator
plugins, and XClarity Energy Manager centralized server power management. XCC2 Platinum is
included which enables remote control functions and other features.
Security Chassis intrusion switch, Power-on password, administrator's password, Root of Trust module
features supporting TPM 2.0 and Platform Firmware Resiliency (PFR).
Operating Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, VMware ESXi.
systems See the Operating system support section for specifics.
supported
Limited 3-year or 1-year (model dependent) customer-replaceable unit and onsite limited warranty with 9x5
warranty next business day (NBD).
Service and Optional service upgrades are available through Lenovo Services: 4-hour or 2-hour response time, 6-
support hour fix time, 1-year or 2-year warranty extension, software support for Lenovo hardware and some
third-party applications. Actual offering may depend on the region where the server is installed and is
subject to change.
Dimensions: Width: 447 mm (17.6 in.), height: 175 mm (6.9 in.), depth: 906 mm (35.7 in.). See Physical and electrical specifications for details.
Weight Maximum: 59.4 kg (131 lb)

The SR860 V3 servers are shipped with the following items:


Documentation flyer
Rail kit (model dependent)
Power cords (model and region dependent)



Models
ThinkSystem SR860 V3 models can be configured by using the Lenovo Data Center Solution Configurator
(DCSC).
Configure-to-order (CTO) models are used to create models with factory-integrated server customizations. For
CTO models, two types of base CTO models are available for the SR860 V3 as listed in the columns in the
following table:
General purpose base CTO models are for general business (non-HPC) configurations and are selectable by choosing General Purpose mode in DCSC.
AI and HPC base models are intended for Artificial Intelligence (AI) and High Performance Computing (HPC) configurations and solutions, and are enabled using the AI & HPC Hardware - ThinkSystem Hardware mode in DCSC. These configurations, along with Lenovo EveryScale Solutions, can also be built using the System x and Cluster Solutions Configurator (x-config). Tip: Some HPC and AI models are not listed in DCSC and can only be configured in x-config.

Controlled GPU models: The "Controlled GPU" base CTO models listed in the table are the only models
that support high-performance GPUs and accelerators. These models are classified under US Government
ECCN regulations and have limited market and customer availability. All other base models do not support
high-performance GPUs.

Preconfigured server models may also be available for the SR860 V3, however these are region-specific; that
is, each region may define their own server models, and not all server models are available in every region.
The following table lists the base CTO models of the ThinkSystem SR860 V3 server.

Table 3. Base CTO models


Machine Type/Model Machine Type/Model
General purpose for AI and HPC Description
7D93CTO1WW 7D93CTOLWW ThinkSystem SR860 V3-3yr Warranty
7D93CTO2WW 7D93CTOHWW ThinkSystem SR860 V3-3yr Warranty with Controlled GPU
7D94CTO1WW 7D94CTOLWW ThinkSystem SR860 V3-1yr Warranty
7D95CTO1WW None ThinkSystem SR860 V3 – SAP HANA configurations with 3-year
warranty

Models of the SR860 V3 are defined based on whether the server will support GPUs or not. For GPU support
(or any other full-length adapters), the server uses special low-profile winged heatsinks on the rear
processors. Feature codes for these chassis bases are as listed in the following table.

GPU support: For GPU support (single-wide or double-wide) or full-length adapter support, you must
select base BT2K. The standard base (BT2J) does not support these adapters and cannot be upgraded in
the field to support full-length adapters or GPUs.

Table 4. Chassis base feature codes


Feature code Description
BT2J ThinkSystem SR860 V3 Standard 4U/4S Base
BT2K ThinkSystem SR860 V3 4U/4S Base Supporting GPUs

Processors



The SR860 V3 supports Gold and Platinum level processors in the 4th Gen Intel Xeon Scalable Processor
family. The server supports two or four processors.

Support for three processors: For configurations with 3 processors, submit a CORE/special bid request.

The four processors are connected together using a mesh topology, in which every processor has a direct link to every other processor, improving the performance of processor-to-processor communications. The SR860 V3 implements the mesh topology using 3 UPI links per processor.
Topics in this section:
Heatsinks
Processor options
Processor features
Two-processor configurations
Thermal requirement by processor
UEFI operating modes

Heatsinks
Heatsinks for the processors are auto-derived based on the Base feature code selected. As listed in the
Models section, there are two base feature codes related to heatsinks, one for double-wide (DW) GPU support
and one that does not support DW GPUs. The DW GPU base derives two standard heatsinks for the front
processors and two low-profile winged heatsinks for the rear processors. The standard base derives four
standard heatsinks.

Table 5. Processor heatsinks


Feature code Description Max Qty
BNWR ThinkSystem SR860 V3/ST650 V3 CPU Heatsink 4
Standard heatsink
BU4F ThinkSystem SR860 V3 / SR850 V3 Rear Winged 2U Heatsink 2
For use on rear processors for support of DW GPUs

Processor options
All supported processors have the following characteristics:
8 DDR5 memory channels at 2 DIMMs per channel
Up to 3 UPI links between processors at 16 GT/s
Up to 80 PCIe 5.0 I/O lanes
The following table lists the 4th Gen processors that are currently supported by the SR860 V3.



Table 6. 4th Gen Intel Xeon Processor support
Part number | Feature code | SKU | Description | Max qty
4XG7A86613 | BQ6C | 6416H | ThinkSystem SR860 V3 Intel Xeon Gold 6416H 18C 165W 2.2GHz Processor Option Kit w/o Fan | 4
4XG7A86612 | BQ6B | 6418H | ThinkSystem SR860 V3 Intel Xeon Gold 6418H 24C 185W 2.1GHz Processor Option Kit w/o Fan | 4
4XG7A86614 | BQ6E | 6434H | ThinkSystem SR860 V3 Intel Xeon Gold 6434H 8C 195W 3.7GHz Processor Option Kit w/o Fan | 4
4XG7A86611 | BQ6A | 6448H | ThinkSystem SR860 V3 Intel Xeon Gold 6448H 32C 250W 2.4GHz Processor Option Kit w/o Fan | 4
4XG7A86610 | BPPH | 8444H | ThinkSystem SR860 V3 Intel Xeon Platinum 8444H 16C 270W 2.9GHz Processor Option Kit w/o Fan | 4
4XG7A86609 | BPPG | 8450H | ThinkSystem SR860 V3 Intel Xeon Platinum 8450H 28C 250W 2.0GHz Processor Option Kit w/o Fan | 4
4XG7A86608 | BPPF | 8454H | ThinkSystem SR860 V3 Intel Xeon Platinum 8454H 32C 270W 2.1GHz Processor Option Kit w/o Fan | 4
4XG7A86607 | BPPN | 8460H | ThinkSystem SR860 V3 Intel Xeon Platinum 8460H 40C 330W 2.2GHz Processor Option Kit w/o Fan | 4
4XG7A86606 | BPPE | 8468H | ThinkSystem SR860 V3 Intel Xeon Platinum 8468H 48C 330W 2.1GHz Processor Option Kit w/o Fan | 4
4XG7A86605 | BPPS | 8490H | ThinkSystem SR860 V3 Intel Xeon Platinum 8490H 60C 350W 1.9GHz Processor Option Kit w/o Fan | 4

Configuration notes:
Processor options include a heatsink but do not include a system fan

Processor features
Processors supported by the SR860 V3 introduce new embedded accelerators to add even more processing
capability:
QuickAssist Technology (Intel QAT)
Help reduce system resource consumption by providing accelerated cryptography, key protection, and
data compression with Intel QuickAssist Technology (Intel QAT). By offloading encryption and
decryption, this built-in accelerator helps free up processor cores and helps systems serve a larger
number of clients.
Intel Dynamic Load Balancer (Intel DLB)
Improve the system performance related to handling network data on multi-core Intel Xeon Scalable
processors. Intel Dynamic Load Balancer (Intel DLB) enables the efficient distribution of network
processing across multiple CPU cores/threads and dynamically distributes network data across multiple
CPU cores for processing as the system load varies. Intel DLB also restores the order of networking
data packets processed simultaneously on CPU cores.
Intel Data Streaming Accelerator (Intel DSA)
Drive high performance for storage, networking, and data-intensive workloads by improving streaming
data movement and transformation operations. Intel Data Streaming Accelerator (Intel DSA) is
designed to offload the most common data movement tasks that cause overhead in data center-scale
deployments. Intel DSA helps speed up data movement across the CPU, memory, and caches, as well
as all attached memory, storage, and network devices.



Intel In-Memory Analytics Accelerator (Intel IAA)
Run database and analytics workloads faster, with potentially greater power efficiency. Intel In-Memory
Analytics Accelerator (Intel IAA) increases query throughput and decreases the memory footprint for in-
memory database and big data analytics workloads. Intel IAA is ideal for in-memory databases, open
source databases and data stores like RocksDB, Redis, Cassandra, and MySQL.
Intel Advanced Matrix Extensions (Intel AMX)
Intel Advanced Matrix Extensions (Intel AMX) is a built-in accelerator in all Silver, Gold, and Platinum
processors that significantly improves deep learning training and inference. With Intel AMX, you can
fine-tune deep learning models or train small to medium models in just minutes. Intel AMX offers
discrete accelerator performance without added hardware and complexity.
The processors also support a separate and encrypted memory space, known as the SGX Enclave, for use by
Intel Software Guard Extensions (SGX). The size of the SGX Enclave supported varies by processor model.
Intel SGX offers hardware-based memory encryption that isolates specific application code and data in
memory. It allows user-level code to allocate private regions of memory (enclaves) which are designed to be
protected from processes running at higher privilege levels.
The following table summarizes the key features of all supported 4th Gen processors in the SR860 V3.

Table 7. 4th Gen Intel Xeon Processor features


CPU model | Cores / threads | Core speed (base / max†) | L3 cache* | Max memory speed | UPI 2.0 links & speed | PCIe lanes | TDP | QAT | DLB | DSA | IAA | SGX Enclave size
6416H | 18 / 36 | 2.2 / 4.2 GHz | 45 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 165W | 0 | 0 | 1 | 1 | 512GB
6418H | 24 / 48 | 2.1 / 4.0 GHz | 60 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 185W | 0 | 0 | 1 | 1 | 512GB
6434H | 8 / 16 | 3.7 / 4.1 GHz | 22.5 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 195W | 0 | 0 | 1 | 1 | 512GB
6448H | 32 / 64 | 2.4 / 4.1 GHz | 60 MB | 4800 MHz | 3 / 16 GT/s | 80 | 250W | 2 | 2 | 1 | 1 | 512GB
8444H | 16 / 32 | 2.9 / 4.0 GHz | 45 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 270W | 0 | 0 | 4 | 4 | 512GB
8450H | 28 / 56 | 2.0 / 3.5 GHz | 75 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 250W | 0 | 0 | 4 | 4 | 512GB
8454H | 32 / 64 | 2.1 / 3.4 GHz | 82.5 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 270W | 4 | 4 | 4 | 4 | 512GB
8460H | 40 / 80 | 2.2 / 3.8 GHz | 105 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 330W | 0 | 0 | 4 | 4 | 512GB
8468H | 48 / 96 | 2.1 / 3.8 GHz | 105 MB* | 4800 MHz | 3 / 16 GT/s | 80 | 330W | 4 | 4 | 4 | 4 | 512GB
8490H | 60 / 120 | 1.9 / 3.5 GHz | 112.5 MB | 4800 MHz | 3 / 16 GT/s | 80 | 350W | 4 | 4 | 4 | 4 | 512GB
† The maximum single-core frequency at which the processor is capable of operating
* L3 cache is 1.875 MB per core or larger. Processors with a larger L3 cache per core are marked with an *

Two-processor configurations
The SR860 V3 can be used with only two processors installed. Most core functions of the server (including the
XClarity Controller) are connected to processors 1 and 2.
With only two processors, the server has the following capabilities:
32 memory DIMMs for an 8TB maximum
10 slots are available - see I/O expansion for details
Riser 1: slots 3, 6, 8
Riser 2: slots 11, 14
Riser 3: slots 15, 18, 20
Two OCP slots
Support for only 2x DW GPUs or 4x SW GPUs
Up to 8x NVMe drives

Thermal requirement by processor



The following thermal requirements apply to the SR860 V3:
Servers with DW GPUs and full-length adapters: Processors with TDP 270W or lower
The use of full-length PCIe adapters (for example, double-wide GPUs) requires the use of the lower-height winged heatsinks on the rear processors. These heatsinks limit the processors to 270W TDP or less.

UEFI operating modes


The SR860 V3 offers preset operating modes that affect energy consumption and performance. These modes
are a collection of predefined low-level UEFI settings that simplify the task of tuning the server to suit your
business and workload requirements.
The following table lists the feature codes that allow you to specify the mode you wish to preset in the factory
for CTO orders.

UK and EU customers: For compliance with the ErP Lot 9 regulation, you should select feature BFYE. For some systems, you may not be able to make a selection, in which case it will be automatically derived by the configurator.

Table 8. UEFI operating mode presets in DCSC


Feature code Description
BFYB Operating mode selection for: "Maximum Performance Mode"
BFYC Operating mode selection for: "Minimal Power Mode"
BFYD Operating mode selection for: "Efficiency Favoring Power Savings Mode"
BFYE Operating mode selection for: "Efficiency - Favoring Performance Mode"

The preset modes for the SR860 V3 are as follows:


Maximum Performance Mode (feature BFYB): Achieves maximum performance but with higher
power consumption and lower energy efficiency.
Minimal Power Mode (feature BFYC): Minimize the absolute power consumption of the system.
Efficiency Favoring Power Savings Mode (feature BFYD): Maximize the performance/watt efficiency
with a bias towards power savings. This is the favored mode for SPECpower benchmark testing, for
example.
Efficiency Favoring Performance Mode (feature BFYE): Maximize the performance/watt efficiency
with a bias towards performance. This is the favored mode for Energy Star certification, for example.
For details about these preset modes, and all other performance and power efficiency UEFI settings offered in
the SR860 V3, see the paper "Tuning UEFI Settings for Performance and Energy Efficiency on Intel Xeon
Scalable Processor-Based ThinkSystem Servers", available from https://lenovopress.lenovo.com/lp1477.

Memory options
The SR860 V3 uses Lenovo TruDDR5 memory operating at up to 4800 MHz. The server supports up to 64
DIMMs with 4 processors. The processors have 8 memory channels and support 2 DIMMs per channel
(DPC). The server supports up to 16TB of memory using 64x 256GB 3DS RDIMMs and four processors.
DIMMs operate at 4800 MHz at 1 DPC and 4400 MHz at 2 DPC.
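As a rough illustration (not a figure from the guide), the theoretical peak per-socket bandwidth implied by these numbers can be computed from the channel count, the transfer rate, and the standard 8-byte DDR5 data width per channel:

```python
# Theoretical peak memory bandwidth per processor, from the figures above:
# 8 channels, 4800 MT/s at 1 DPC, 4400 MT/s at 2 DPC.
# Each DDR5 channel carries 64 data bits (8 bytes) per transfer.
BYTES_PER_TRANSFER = 8
CHANNELS_PER_SOCKET = 8

def peak_bandwidth_gbs(mt_per_s: int, channels: int = CHANNELS_PER_SOCKET) -> float:
    """Peak bandwidth in GB/s (decimal) for the given transfer rate."""
    return channels * mt_per_s * 1_000_000 * BYTES_PER_TRANSFER / 1e9

print(peak_bandwidth_gbs(4800))  # 1 DPC: 307.2 GB/s per socket
print(peak_bandwidth_gbs(4400))  # 2 DPC: 281.6 GB/s per socket
```

Real-world throughput is lower, but the ratio shows the cost of populating a second DIMM per channel.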
Lenovo TruDDR5 memory uses the highest quality components that are sourced from Tier 1 DRAM suppliers
and only memory that meets the strict requirements of Lenovo is selected. It is compatibility tested and tuned
to maximize performance and reliability. From a service and support standpoint, Lenovo TruDDR5 memory
automatically assumes the system warranty, and Lenovo provides service and support worldwide.
The following table lists the 4800 MHz memory options that are currently supported by the SR860 V3.



Table 9. 4800 MHz memory options
Part number | Feature code | Description | DRAM technology
9x4 RDIMMs - 4800 MHz
4X77A77483 | BNW5 | ThinkSystem 32GB TruDDR5 4800MHz (1Rx4) 9x4 RDIMM | 16Gb
4X77A77033 | BKTN | ThinkSystem 64GB TruDDR5 4800MHz (2Rx4) 9x4 RDIMM | 16Gb
10x4 RDIMMs - 4800 MHz
4X77A77030 | BNF6 | ThinkSystem 32GB TruDDR5 4800MHz (1Rx4) 10x4 RDIMM | 16Gb
4X77A77032 | BNF9 | ThinkSystem 64GB TruDDR5 4800MHz (2Rx4) 10x4 RDIMM | 16Gb
4X77A87034 | BZC2 | ThinkSystem 96GB TruDDR5 4800MHz (2Rx4) RDIMM | 24Gb
x8 RDIMMs - 4800 MHz
4X77A77029 | BKTL | ThinkSystem 16GB TruDDR5 4800MHz (1Rx8) RDIMM | 16Gb
4X77A77031 | BKTM | ThinkSystem 32GB TruDDR5 4800MHz (2Rx8) RDIMM | 16Gb
3DS RDIMMs - 4800 MHz
4X77A77034 | BNFC | ThinkSystem 128GB TruDDR5 4800MHz (4Rx4) 3DS RDIMM v2 | 16Gb
CTO only | BY8F | ThinkSystem 128GB TruDDR5 4800MHz (4Rx4) 3DS RDIMM v1 | 16Gb
CTO only | BZPM | ThinkSystem 256GB TruDDR5 4800MHz (8Rx4) 3DS RDIMM v1 | 16Gb
4X77A77035 | BNF8 | ThinkSystem 256GB TruDDR5 4800MHz (8Rx4) 3DS RDIMM v2 | 16Gb

9x4 RDIMMs (also known as EC4 RDIMMs) are a new lower-cost DDR5 memory option supported in
ThinkSystem V3 servers. 9x4 DIMMs offer the same performance as standard RDIMMs (known as 10x4 or
EC8 modules), however they support lower fault-tolerance characteristics. Standard RDIMMs and 3DS
RDIMMs support two 40-bit subchannels (that is, a total of 80 bits), whereas 9x4 RDIMMs support two 36-bit
subchannels (a total of 72 bits). The extra bits in the subchannels allow standard RDIMMs and 3DS RDIMMs
to support Single Device Data Correction (SDDC), however 9x4 RDIMMs do not support SDDC. Note,
however, that all DDR5 DIMMs, including 9x4 RDIMMs, support Bounded Fault correction, which enables the
server to correct most common types of DRAM failures.
For more information on DDR5 memory, see the Lenovo Press paper, Introduction to DDR5 Memory,
available from https://lenovopress.com/lp1618.
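To make the subchannel widths concrete: assuming the standard DDR5 layout of 32 data bits per subchannel (a DDR5 design detail, not stated above), the quoted 40-bit and 36-bit widths imply the following ECC allocations:

```python
# ECC bits implied by DDR5 subchannel widths, assuming 32 data bits
# per subchannel (standard DDR5; each DIMM has two subchannels).
def ecc_bits(total_width: int, data_bits: int = 32) -> int:
    """ECC bits in one subchannel of the given total width."""
    return total_width - data_bits

def ecc_overhead(total_width: int, data_bits: int = 32) -> float:
    """ECC bits as a fraction of data bits for one subchannel."""
    return ecc_bits(total_width, data_bits) / data_bits

# Standard (10x4/EC8) RDIMMs: two 40-bit subchannels -> 8 ECC bits each
print(ecc_bits(40), ecc_overhead(40))   # 8 0.25
# 9x4 (EC4) RDIMMs: two 36-bit subchannels -> 4 ECC bits each
print(ecc_bits(36), ecc_overhead(36))   # 4 0.125
```

The halved ECC budget is why 9x4 RDIMMs cannot offer SDDC, while Bounded Fault correction still fits.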
The following rules apply when selecting the memory configuration:
The SR860 V3 only supports quantities of 1, 2, 4, 6, 8, 12, or 16 DIMMs per processor; other quantities are not supported
The server supports three types of DIMMs: 9x4 RDIMMs, RDIMMs, and 3DS RDIMMs; UDIMMs and
LRDIMMs are not supported
Mixing of DIMM types is not supported (9x4 DIMMs with 10x4 RDIMMs, 9x4 DIMMs with 3DS
RDIMMs, 10x4 RDIMMs with 3DS RDIMMs)
Mixing of DRAM technology (16Gb and 24Gb) is not supported. See the DRAM technology column in the above table.
Mixing x4 and x8 DIMMs is not supported
Mixing of DIMM rank counts is supported. Follow the required installation order, installing the DIMMs with the higher rank counts first.
Mixing of DIMM capacities is supported, however only two different capacities are supported across all channels of the processor. Follow the required installation order, installing the larger DIMMs first.



Memory mirroring is not supported with 9x4 DIMMs
The mixing of 128GB 3DS RDIMMs and 256GB 3DS RDIMMs is supported, however all DIMM slots
must be populated evenly: 8x 128GB DIMMs and 8x 256GB DIMMs per processor
Mixing DIMMs with 16Gb and 24Gb DRAM is not supported; this means the 96GB DIMM (feature
BZC2) cannot be mixed with any other DIMM
96GB DIMMs are now supported on all 4th Gen processors, not just Platinum processors
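The quantity and mixing rules above lend themselves to a simple validation sketch. The tuple layout and function below are illustrative, not part of any Lenovo tool:

```python
# Check one processor's DIMM plan against the selection rules above.
# Each DIMM is (type, capacity_gb, dram_gb); names are illustrative.
ALLOWED_QTY = {1, 2, 4, 6, 8, 12, 16}

def check_dimm_plan(dimms):
    """Return a list of rule violations for one processor's DIMMs."""
    problems = []
    if len(dimms) not in ALLOWED_QTY:
        problems.append(f"unsupported DIMM quantity: {len(dimms)}")
    if len({t for t, _, _ in dimms}) > 1:
        problems.append("mixing DIMM types is not supported")
    if len({d for _, _, d in dimms}) > 1:
        problems.append("mixing 16Gb and 24Gb DRAM is not supported")
    if len({c for _, c, _ in dimms}) > 2:
        problems.append("more than two DIMM capacities across channels")
    return problems

# 8x 64GB RDIMMs: valid quantity, one type, one DRAM technology
print(check_dimm_plan([("RDIMM", 64, 16)] * 8))  # []
# 3 DIMMs of mixed types: quantity and type-mixing violations
print(check_dimm_plan([("RDIMM", 32, 16)] * 2 + [("3DS RDIMM", 128, 16)]))
```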
For best performance, consider the following:
Ensure the memory installed is at least the same speed as the memory bus of the selected processor.
Populate all 8 memory channels.
The following memory protection technologies are supported:
ECC detection/correction
Bounded Fault detection/correction
SDDC (for 10x4-based memory DIMMs; look for "x4" in the DIMM description)
ADDDC (for 10x4-based memory DIMMs, not supported with 9x4 DIMMs)
Memory mirroring
See the Lenovo Press article, RAS Features of the Lenovo ThinkSystem Intel Servers for more information
about memory RAS features.
If memory channel mirroring is used, then DIMMs must be installed in pairs (minimum of one pair per
processor), and both DIMMs in the pair must be identical in type and size. 50% of the installed capacity is
available to the operating system.
Memory rank sparing is implemented using ADDDC/ADC-SR/ADDDC-MR to provide DRAM-level sparing
feature support.

Internal storage
The SR860 V3 supports up to 48x 2.5-inch SAS/SATA drive bays, up to 24 of which can be AnyBay drive bays
instead. All 48x drive bays are hot-swap and front-accessible. The server also supports internal M.2 drives
(one or two, installed in an adapter), or rear-accessible hot-swap 7mm SSDs (installed in a PCIe slot).

Note: M.2 and 7mm drive support is mutually exclusive, as they both use the same connectors.

In this section:
NVMe drive support
Front drive bays
Field upgrades
Supported drive bay combinations
M.2 drives
7mm drives
SED encryption key management with SKLM

NVMe drive support


The SR860 V3 supports up to 24x NVMe drives with a PCIe 5.0 interface to maximize storage performance,
each with a direct connection to the processors. All connections are made using onboard connectors; no
NVMe retimer adapters are needed or supported. There is no oversubscription: each x4 drive has a full x4
(four PCIe Gen5 lanes) connection to the processor.
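For context, the usable link bandwidth of a dedicated Gen5 x4 connection can be estimated from PCIe 5.0's standard signaling figures (32 GT/s per lane with 128b/130b encoding — numbers from the PCIe specification, not this guide):

```python
# Per-drive PCIe link bandwidth implied by a dedicated Gen5 x4 connection.
# PCIe 5.0 signals at 32 GT/s per lane with 128b/130b encoding.
def pcie_gbs(lanes: int, gt_per_s: float = 32.0) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe 5.0 link."""
    return lanes * gt_per_s * (128 / 130) / 8

print(round(pcie_gbs(4), 2))  # ~15.75 GB/s per NVMe drive, each direction
```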



Front drive bays
The front drive bays are configured using 8-bay backplanes. The two available backplanes are:
8-bay 2.5-inch SAS/SATA backplane
8-bay 2.5-inch AnyBay backplane

Tip: The SR860 V3 does not support 3.5-inch drive bays.

The locations of the backplanes are shown in the following figure.

Figure 6. Backplanes
Ordering information for the backplanes is listed in the following table.

Table 10. Backplanes for front drive bays


Part number | Feature code | Description | PCIe gen | SAS gen | Max qty
4XB7A86629 | BT3A | ThinkSystem SR850 V3/SR860 V3 8x 2.5" SAS/SATA Backplane Option Kit | - | 12Gb | 6
4XB7A86631 | BT3B | ThinkSystem SR860 V3 8x 2.5" AnyBay Backplane Option Kit | Gen5 | 24Gb | 3

Field upgrades
For field upgrades, the backplane part numbers include the necessary cables for onboard NVMe connections
as well as connections for both X350 and X40 RAID adapters/HBAs.

2.5-inch drive bay fillers
Backplane option kits include the necessary drive bay fillers, however if needed, additional blanks can be
ordered as listed in the following table.

Table 11. Drive bay fillers for 2.5-inch bays


Part number Description
4XH7A99569 ThinkSystem 2.5" 1x1 HDD Filler by 8 units (contains 8x single drive-bay fillers)



Supported drive bay combinations
The following table shows the supported drive bay combinations - SAS/SATA or AnyBay drives. The table lists
the backplanes required for each drive bay combination.
Some configurations require 4 processors to be installed, or are only supported when there are 2 processors.
This is noted in the table.

Table 12. Supported drive bay combinations


Total drives | Total NVMe | CPU support | SAS/SATA drives (on SAS/SATA backplanes) | SAS/SATA drives (on AnyBay backplanes) | NVMe drives (on AnyBay backplanes) | Backplanes
8 0 2 or 4 8 0 0 1x SAS/SATA
8 4 2 0 4 4 1x AnyBay
8 8 4 0 0 8 1x AnyBay
16 0 2 or 4 16 0 0 2x SAS/SATA
16 8 2 0 8 8 2x AnyBay
16 8 4 8 0 8 1x SAS/SATA + 1x AnyBay
16 16 4 0 0 16 2x AnyBay
24 0 2 or 4 24 0 0 3x SAS/SATA
24 8 2 8 8 8 1x SAS/SATA + 2x AnyBay
24 8 4 16 0 8 2x SAS/SATA + 1x AnyBay
24 16 4 8 0 16 1x SAS/SATA + 2x AnyBay
24 24 4 0 0 24 3x AnyBay
32 0 2 or 4 32 0 0 4x SAS/SATA
32 8 2 16 8 8 2x SAS/SATA + 2x AnyBay
32 8 4 24 0 8 3x SAS/SATA + 1x AnyBay
32 16 4 16 0 16 2x SAS/SATA + 2x AnyBay
32 24 4 8 0 24 1x SAS/SATA + 3x AnyBay
40 0 2 or 4 40 0 0 5x SAS/SATA
40 8 2 24 8 8 3x SAS/SATA + 2x AnyBay
40 8 4 32 0 8 4x SAS/SATA + 1x AnyBay
40 16 4 24 0 16 3x SAS/SATA + 2x AnyBay
40 24 4 16 0 24 2x SAS/SATA + 3x AnyBay
48 0 2 or 4 48 0 0 6x SAS/SATA
48 8 2 32 8 8 4x SAS/SATA + 2x AnyBay
48 8 4 40 0 8 5x SAS/SATA + 1x AnyBay
48 16 4 32 0 16 4x SAS/SATA + 2x AnyBay
48 24 4 24 0 24 3x SAS/SATA + 3x AnyBay
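The combinations in Table 12 follow directly from the 8-drive backplane granularity. The sketch below derives one valid backplane mix for a requested drive split — it prefers SAS/SATA backplanes, matching the 4-processor rows of the table; other packings (such as the 2-processor rows that carry SAS/SATA drives in AnyBay bays) are also valid:

```python
import math

# Derive a backplane mix for a requested drive combination, mirroring
# Table 12. Each backplane holds 8 drives; NVMe drives need AnyBay bays.
def backplane_mix(sas_sata: int, nvme: int):
    """Return (sas_sata_backplanes, anybay_backplanes) or raise ValueError."""
    if nvme > 24:
        raise ValueError("at most 24 NVMe drives are supported")
    anybay = math.ceil(nvme / 8)
    # SAS/SATA drives first fill leftover AnyBay bays, then plain backplanes
    leftover_anybay_bays = anybay * 8 - nvme
    plain = math.ceil(max(sas_sata - leftover_anybay_bays, 0) / 8)
    if (plain + anybay) * 8 > 48:
        raise ValueError("more than 48 drive bays requested")
    return plain, anybay

print(backplane_mix(16, 8))   # (2, 1): 2x SAS/SATA + 1x AnyBay
print(backplane_mix(0, 24))   # (0, 3): 3x AnyBay
```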

M.2 drives
The SR860 V3 supports one or two M.2 form-factor SATA or NVMe drives for use as an operating system
boot solution or as additional storage. The M.2 drives install into an M.2 module which is mounted on the air
baffle as shown in the Components and connectors section.
The supported M.2 modules are listed in the following table. For field upgrades see the M.2 field upgrades
section below.



Table 13. M.2 modules
Part number | Feature code | Description | SATA drives | NVMe drives | RAID | Maximum supported
4Y37A09738 | B5XJ | ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit | Yes | Yes (x1 lane) | VROC | 1
4Y37A79663 | BM8X | ThinkSystem M.2 SATA/x4 NVMe 2-Bay Adapter | Yes | Yes (x4 lanes) | VROC | 1
4Y37A09750 | B8P9 | ThinkSystem M.2 NVMe 2-Bay RAID Adapter | No | Yes (x1 lane) | Integrated (Marvell) | 1
4Y37A90063 | BYFF | ThinkSystem M.2 RAID B540i-2i SATA/NVMe Adapter | Yes | Yes (x1 lane) | Integrated (Broadcom) | 1

Configuration notes:
M.2 and 7mm are mutually exclusive: they are not supported together in the same configuration
RAID support is implemented as follows:
ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit (4Y37A09738): VROC (SATA or NVMe);
No additional adapter is required nor supported
ThinkSystem M.2 SATA/x4 NVMe 2-Bay Adapter (4Y37A79663): VROC (SATA or NVMe); No
additional adapter is required nor supported
ThinkSystem M.2 NVMe 2-Bay RAID Adapter (4Y37A09750): RAID implemented using an
onboard Marvell 88NR2241 NVMe RAID Controller (NVMe only)
ThinkSystem M.2 RAID B540i-2i SATA/NVMe Adapter (4Y37A90063): RAID is implemented
using an onboard Broadcom SAS3808N RAID controller
If RAID is enabled using VROC, select these feature codes:
VROC SATA support: On Board SATA Software RAID Mode for M.2 (feature BS7Q)
VROC NVMe support:
Intel VROC (VMD NVMe RAID) Standard for M.2 (feature BS7M)
Intel VROC RAID1 Only for M.2 (feature BZ4X)
The ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit has the following features:
Supports one or two M.2 drives, either SATA or NVMe
When two drives are installed, they must be either both SATA or both NVMe
Supports 42mm, 60mm, 80mm and 110mm drive form factors (2242, 2260, 2280 and 22110)
On the SR860 V3, RAID support is implemented using VROC SATA or VROC NVMe
Either 6Gbps SATA or PCIe 3.0 x1 interface to the drives depending on the drives installed
Supports monitoring and reporting of events and temperature through I2C
Firmware update via Lenovo firmware update tools



The ThinkSystem M.2 SATA/x4 NVMe 2-Bay Adapter has the following features:
Supports one or two M.2 drives, either SATA or NVMe
When two drives are installed, they must be either both SATA or both NVMe
Supports 42mm, 60mm, 80mm and 110mm drive form factors (2242, 2260, 2280 and 22110)
On the SR860 V3, RAID support is implemented using VROC SATA or VROC NVMe
Either 6Gbps SATA or PCIe 4.0 x4 interface to the drives depending on the drives installed
Supports monitoring and reporting of events and temperature through I2C
Firmware update via Lenovo firmware update tools
The ThinkSystem M.2 NVMe 2-Bay RAID Adapter (4Y37A09750) has the following features:
Supports one or two NVMe M.2 drives
Supports the 42mm, 60mm, 80mm and 110mm drive form factors (2242, 2260, 2280 and 22110)
RAID support via an onboard Marvell 88NR2241 NVMe RAID Controller
With 1 drive, supports single-drive RAID-0
With 2 drives, supports 2-drive RAID-0, 2-drive RAID-1, or two single-drive RAID-0 arrays
PCIe 3.0 x2 host interface; PCIe 3.0 x1 connection to each drive
Management and configuration support via UEFI and OS-based tools
Supports monitoring and reporting of events and temperature through I2C
Firmware update via Lenovo firmware update tools
The ThinkSystem M.2 RAID B540i-2i SATA/NVMe Adapter (4Y37A90063) has the following features:
Supports one or two M.2 drives, either SATA or NVMe
Supports the 42mm, 60mm, 80mm and 110mm drive form factors (2242, 2260, 2280 and 22110)
RAID support via an onboard Broadcom SAS3808N RAID Controller
With 1 drive, supports JBOD
With 2 drives, supports 2-drive RAID-0, 2-drive RAID-1, or JBOD
PCIe 4.0 x2 host interface; PCIe 4.0 x1 connection to each drive
Management and configuration support via UEFI and OS-based tools
Supports monitoring and reporting of events and temperature
Firmware update via Lenovo firmware update tools
Supports SED drive encryption
M.2 field upgrades
For field upgrades, the SR860 V3 also requires an additional M.2 cable kit.
Ordering information is listed in the following table.

Table 14. M.2 cable kits
Part number Feature code Description
4X97A88013 BW25 ThinkSystem SR850 V3/SR860 V3 M.2 SATA/NVMe Cable Option Kit
(Cable kit for 4Y37A09738 or 4Y37A09750)
4X97A88014 BW26 ThinkSystem SR850 V3/SR860 V3 M.2 SATA/x4 NVMe Cable Option Kit
(Cable kit for 4Y37A79663)
4X97A99371 C4TU ThinkSystem SR850 V3/SR860 V3 M.2 RAID B540i-2i SATA/NVMe Cable Kit
(Cable kit for 4Y37A90063)

7mm drives
The SR860 V3 supports two 7mm drives, either both SATA or both NVMe, at the rear of the server. These
drives occupy one or two PCIe slots in Riser 3, as shown in the following figure.
Support for 7mm drive bays is based on the riser cards selected for Riser 3 (feature code or option part number), as listed in the Riser ordering information section. In addition to selecting the correct Riser 3 riser, you also need to order the 7mm drive bays listed in the following table.

Table 15. 7mm drive bay ordering information
Part number Feature code Description SATA NVMe RAID Max qty
4XB7A88714 BU0N ThinkSystem SR850 V3/SR860 V3 7mm SATA/NVMe 2-Bay Rear Enablement Option Kit Yes Yes VROC 1
4XB7A88715 B8P3 ThinkSystem SR850 V3/SR860 V3 7mm NVMe 2-Bay RAID Rear Enablement Option Kit No Yes Integrated (Marvell) 1
4Y37A90062 BYFG ThinkSystem 7mm SATA/NVMe 2-Bay Rear Hot-Swap RAID Enablement Kit Yes Yes Integrated (Broadcom) 1

M.2 and 7mm drive support: The 7mm drives connect to the same ports on the system board as the M.2
module. As a result, 7mm and M.2 are mutually exclusive.

Figure 7. 7mm drive bays
The use of the 7mm rear drive bays has the following configuration rules:
The location of the 7mm drives is based on the riser card selected.
M.2 and 7mm are mutually exclusive: they are not supported together in the same configuration
For ThinkSystem SR850 V3/SR860 V3 7mm SATA/NVMe 2-Bay Rear Enablement Option Kit
(4XB7A88714):

The 7mm drive bays support either SATA drives or NVMe drives but not both at the same time.
RAID support is implemented using VROC SATA or VROC NVMe; no additional adapter is
required or supported.
If RAID is enabled using VROC, select these feature codes:
VROC SATA support: On Board SATA Software RAID Mode for 7mm (feature BS7U)
VROC NVMe support:
Intel VROC (VMD NVMe RAID) Standard for 7mm (feature BS7R)
Intel VROC RAID1 Only for 7mm (feature BZ4Y)
For ThinkSystem SR850 V3/SR860 V3 7mm NVMe 2-Bay RAID Rear Enablement Option Kit
(4XB7A88715):
The 7mm drive bays only support NVMe drives
RAID functionality is integrated into the 7mm adapter using a Marvell 88NR2241 NVMe RAID
Controller
For ThinkSystem 7mm SATA/NVMe 2-Bay Rear Hot-Swap RAID Enablement Kit (4Y37A90062)
The 7mm drive bays support either SATA or NVMe drives
RAID functionality is integrated into the 7mm adapter using a Broadcom SAS3808N RAID
Controller
Field upgrades are enabled by replacing Riser 3 with a riser that enables support for the 7mm drives, as listed
in the Riser ordering information section.

SED encryption key management with SKLM
The server supports self-encrypting drives (SEDs) as listed in the Internal drive options section. To effectively
manage a large deployment of these drives in Lenovo servers, IBM Security Key Lifecycle Manager (SKLM)
offers a centralized key management solution.
The IBM Security Key Lifecycle Manager software is available from Lenovo using the ordering information
listed in the following table.

Table 16. IBM Security Key Lifecycle Manager licenses
Part number Feature Description
SKLM Basic Edition
7S0A007FWW S874 IBM Security Key Lifecycle Manager Basic Edition Install License + SW Subscription &
Support 12 Months
7S0A008VWW SDJR IBM Security Key Lifecycle Manager Basic Edition Install License + SW Subscription & 3
Years Of Support
7S0A008WWW SDJS IBM Security Key Lifecycle Manager Basic Edition Install License + SW Subscription & 4
Years Of Support
7S0A008XWW SDJT IBM Security Key Lifecycle Manager Basic Edition Install License + SW Subscription & 5
Years Of Support
SKLM For Raw Decimal Terabyte Storage
7S0A007HWW S876 IBM Security Key Lifecycle Manager For Raw Decimal Terabyte Storage Resource Value
Unit License + SW Subscription & Support 12 Months
7S0A008YWW SDJU IBM Security Key Lifecycle Manager For Raw Decimal Terabyte Storage Resource Value
Unit License + SW Subscription & 3 Years Of Support
7S0A008ZWW SDJV IBM Security Key Lifecycle Manager For Raw Decimal Terabyte Storage Resource Value
Unit License + SW Subscription & 4 Years Of Support
7S0A0090WW SDJW IBM Security Key Lifecycle Manager For Raw Decimal Terabyte Storage Resource Value
Unit License + SW Subscription & 5 Years Of Support
SKLM For Raw Decimal Petabyte Storage
7S0A007KWW S878 IBM Security Key Lifecycle Manager For Raw Decimal Petabyte Storage Resource Value
Unit License + SW Subscription & Support 12 Months
7S0A0091WW SDJX IBM Security Key Lifecycle Manager For Raw Decimal Petabyte Storage Resource Value
Unit License + SW Subscription & 3 Years Of Support
7S0A0092WW SDJY IBM Security Key Lifecycle Manager For Raw Decimal Petabyte Storage Resource Value
Unit License + SW Subscription & 4 Years Of Support
7S0A0093WW SDJZ IBM Security Key Lifecycle Manager For Raw Decimal Petabyte Storage Resource Value
Unit License + SW Subscription & 5 Years Of Support
SKLM For Usable Decimal Terabyte Storage
7S0A007MWW S87A IBM Security Key Lifecycle Manager For Usable Decimal Terabyte Storage Resource
Value Unit License + SW Subscription & Support 12 Months
7S0A0094WW SDK0 IBM Security Key Lifecycle Manager For Usable Decimal Terabyte Storage Resource
Value Unit License + SW Subscription & 3 Years In Support
7S0A0095WW SDK1 IBM Security Key Lifecycle Manager For Usable Decimal Terabyte Storage Resource
Value Unit License + SW Subscription & 4 Years In Support
7S0A0096WW SDK2 IBM Security Key Lifecycle Manager For Usable Decimal Terabyte Storage Resource
Value Unit License + SW Subscription & 5 Years In Support
SKLM For Usable Decimal Petabyte Storage
7S0A007PWW S87C IBM Security Key Lifecycle Manager For Usable Decimal Petabyte Storage Resource
Value Unit License + SW Subscription & Support 12 Months
7S0A0097WW SDK3 IBM Security Key Lifecycle Manager For Usable Decimal Petabyte Storage Resource
Value Unit License + SW Subscription & 3 Years Of Support
7S0A0098WW SDK4 IBM Security Key Lifecycle Manager For Usable Decimal Petabyte Storage Resource
Value Unit License + SW Subscription & 4 Years Of Support
7S0A0099WW SDK5 IBM Security Key Lifecycle Manager For Usable Decimal Petabyte Storage Resource
Value Unit License + SW Subscription & 5 Years Of Support

Controllers for internal storage
The SR860 V3 offers a variety of controller options for internal drives:
For 2.5-inch drives:
Onboard NVMe ports (RAID support provided using Intel VROC NVMe RAID)
RAID adapters and HBAs for SAS/SATA drives
For 7mm drive bays in the rear of the server (see the 7mm drives section)
SATA controller integrated into the 7mm drive bay enclosure
NVMe controller integrated into the 7mm drive bay enclosure
For M.2 drives internal to the server (see M.2 drives section)
SATA controller integrated on the M.2 adapters
NVMe controller integrated on the M.2 adapters
The onboard NVMe support has the following features:
Controller integrated into the Intel processor
Supports up to 24 NVMe drives
Each drive has PCIe Gen5 x4 host interface
Supports RAID using Intel VROC
The following table lists the controllers and adapters used for the internal 2.5-inch drive bays of the SR860 V3
server.

Legacy Option ROM support: The server does not support legacy option ROM boot on PCIe adapters
connected to CPU 3 or 4. See the I/O expansion section for details on which slots connect to each CPU.
For option ROM support, install the adapters in slots connected to CPU 1 or 2, or use UEFI boot mode on
those adapters instead.

Table 17. Controllers for internal storage
Part number Feature code Description Max Qty Slots supported
Onboard NVMe - Intel VROC NVMe RAID
None BR9B Intel VROC (VMD NVMe RAID) Standard 1 Not applicable
(supports RAID 0, 1, 10 for all brands of drives)
4L47A39164 B96G Intel VROC (VMD NVMe RAID) Premium 1 Not applicable
(license upgrade - to enable RAID-5 support)
SAS/SATA HBAs - Adaptec PCIe 3.0
4Y37A72480 BJHH** ThinkSystem 4350-8i SAS/SATA 12Gb HBA 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A72481 BJHJ ThinkSystem 4350-16i SAS/SATA 12Gb HBA 3 6, 8, 11, 12, 14, 15, 18,
19, 20
SAS/SATA HBAs - Adaptec PCIe 4.0
4Y37A97938 C6UL ThinkSystem 4450-16i SAS/SATA PCIe Gen4 24Gb HBA 1 6, 8, 11, 12, 14, 15, 18,
19, 20
SAS/SATA HBAs - Broadcom PCIe 4.0
4Y37A78601 BM51 ThinkSystem 440-8i SAS/SATA PCIe Gen4 12Gb HBA 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A78602 BM50 ThinkSystem 440-16i SAS/SATA PCIe Gen4 12Gb HBA 3 6, 8, 11, 12, 14, 15, 18,
19, 20
SAS/SATA RAID Adapters - Adaptec PCIe 3.0

4Y37A72482 BJHK ThinkSystem RAID 5350-8i PCIe 12Gb Adapter 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A72483 BJHL ThinkSystem RAID 9350-8i 2GB Flash PCIe 12Gb Adapter 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A72485 BJHN ThinkSystem RAID 9350-16i 4GB Flash PCIe 12Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
SAS/SATA RAID Adapters - Adaptec PCIe 4.0
4Y37A97936 C6UJ ThinkSystem RAID 5450-16i PCIe Gen4 24Gb Adapter 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A97935 C6UH ThinkSystem RAID 9450-8i 4GB Flash PCIe Gen4 24Gb 1 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A97937 C6UK ThinkSystem RAID 9450-16i 8GB Flash PCIe Gen4 24Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A97940 C6UN ThinkSystem RAID 9450-32i 8GB Flash PCIe Gen4 24Gb 1 8,18,19, 20
Adapter
SAS/SATA RAID Adapters - Broadcom PCIe 4.0
4Y37A78834 BMFT ThinkSystem RAID 540-8i PCIe Gen4 12Gb Adapter 1 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A78835 BNAX ThinkSystem RAID 540-16i PCIe Gen4 12Gb Adapter 3 6, 8, 11, 12, 14, 15, 18,
19, 20
4Y37A09728† B8NY ThinkSystem RAID 940-8i 4GB Flash PCIe Gen4 12Gb 1 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A09729 B8NW ThinkSystem RAID 940-8i 8GB Flash PCIe Gen4 12Gb 1 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A78600† BM35 ThinkSystem RAID 940-16i 4GB Flash PCIe Gen4 12Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A09730† B8NZ ThinkSystem RAID 940-16i 8GB Flash PCIe Gen4 12Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter 19, 20
4Y37A09733 B8P8 ThinkSystem RAID 940-32i 8GB Flash PCIe Gen4 12Gb 2 8, 15, 18, 19, 20
Adapter
NVMe Adapters
4Y37A09728† BGM1 ThinkSystem RAID 940-8i 4GB Flash PCIe Gen4 12Gb 1 6, 8, 11, 12, 14, 15, 18,
Adapter for U.3 19, 20
4Y37A09729† BGM0 ThinkSystem RAID 940-8i 8GB Flash PCIe Gen4 12Gb 1 6, 8, 11, 12, 14, 15, 18,
Adapter for U.3 19, 20
4Y37A78600† BM36 ThinkSystem RAID 940-16i 4GB Flash PCIe Gen4 12Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter for U.3 19, 20
4Y37A09730† BDY4 ThinkSystem RAID 940-16i 8GB Flash PCIe Gen4 12Gb 3 6, 8, 11, 12, 14, 15, 18,
Adapter for U.3 19, 20
† Adapter also supports PCIe 4.0 x1 connectivity to NVMe drives (requires NVMe drives with U.3 interface)
** This adapter is currently not available for CTO orders; it is only available as an option part number for field
upgrades

Configuration notes:
Supercap support limits the number of RAID adapters installable: RAID 9350 and RAID 940
adapters include and require a power module (supercap) to power the flash memory. The SR860 V3
supports up to 4 supercaps, installed in dedicated holders on the air baffle as shown in the
Components and connectors section. The number of supercaps supported also determines the
maximum number of internal + external RAID 9xx adapters that can be installed in the server.
Field upgrades: The RAID 9xx adapter part numbers include both the supercap and the supercap
cable.
E810 Ethernet and X350 RAID/HBAs: The use of both an Intel E810 network adapter and an X350
HBA/RAID adapter (9350, 5350 and 4350) is supported; however, E810 firmware CVL4.3 or later is
required. For details, see Support Tip HT513226.

RAID 940 Tri-Mode support
The RAID 940-8i and RAID 940-16i adapters also support NVMe through a feature named Tri-Mode support
(or Trimode support). This feature enables the use of NVMe U.3 drives at the same time as SAS and SATA
drives. Tri-Mode requires an AnyBay backplane. Cabling of the controller to the backplanes is the same as
with SAS/SATA drives, and the NVMe drives are connected to the controller via a PCIe x1 link.
NVMe drives connected using Tri-Mode support provide better performance than SAS or SATA drives: a
SATA SSD has a data rate of 6Gbps and a SAS SSD has a data rate of 12Gbps, whereas an NVMe U.3 Gen 4
SSD with a PCIe x1 link has a data rate of 16Gbps. NVMe drives typically also have lower latency and
higher IOPS than SAS and SATA drives.

Tri-Mode requires U.3 drives: Only NVMe drives with a U.3 interface are supported. U.2 drives are not
supported. See the Internal drive options section for the U.3 drives supported by the server.
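The per-link data rates compared above can be reproduced with a back-of-envelope calculation. This sketch accounts only for the physical-layer line encoding (8b/10b for SATA and SAS, 128b/130b for PCIe Gen4), so real-world throughput will be somewhat lower; the function name is illustrative.

```python
# Rough usable data rates for the drive interfaces compared in this section.
# Line rates and encodings are standard figures; protocol overhead beyond the
# physical-layer encoding is ignored, so actual throughput is somewhat lower.

def effective_mbps(line_rate_gbps, enc_payload, enc_total):
    """Usable MB/s after line encoding (MB = 1e6 bytes)."""
    return line_rate_gbps * 1e9 * enc_payload / enc_total / 8 / 1e6

links = {
    "SATA 6Gbps (8b/10b)":     effective_mbps(6, 8, 10),      # ~600 MB/s
    "SAS 12Gbps (8b/10b)":     effective_mbps(12, 8, 10),     # ~1200 MB/s
    "PCIe 4.0 x1 (128b/130b)": effective_mbps(16, 128, 130),  # ~1969 MB/s
}
for name, mbps in links.items():
    print(f"{name}: {mbps:.0f} MB/s")
```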

Intel VROC onboard RAID
Intel VROC (Virtual RAID on CPU) is a feature of the Intel processor that enables Integrated RAID support.
There are two separate functions of VROC in the SR860 V3:
Intel VROC SATA RAID, formerly known as Intel RSTe
Intel VROC NVMe RAID
VROC SATA RAID (RSTe) is available and supported with all SATA drives. It offers a 6 Gb/s connection to
each drive and on the SR860 V3 implements RAID levels 0, 1, 5, and 10. RAID 1 is limited to 2 drives per
array, and RAID 10 is limited to 4 drives per array. Hot-spare functionality is also supported.
VROC NVMe RAID offers RAID support for any NVMe drives directly connected to the ports on the server's
system board or via adapters such as NVMe retimers or NVMe switch adapters. On the SR860 V3, RAID
levels implemented are based on the VROC feature selected as indicated in the following table. RAID 1 is
limited to 2 drives per array, and RAID 10 is limited to 4 drives per array. Hot-spare functionality is also
supported.

Performance tip: For best performance with VROC NVMe RAID, the drives in an array should all be
connected to the same processor. Spanning processors is possible; however, performance will be
unpredictable and should be evaluated based on your workload.

The SR860 V3 supports the VROC NVMe RAID offerings listed in the following table.

Tip: These feature codes and part numbers are only for VROC RAID using NVMe drives, not SATA drives

Table 18. Intel VROC NVMe RAID ordering information and feature support
Part number Feature code Description Intel NVMe SSDs Non-Intel NVMe SSDs RAID 0 RAID 1 RAID 10 RAID 5
4L47A92670 BZ4W Intel VROC RAID1 Only Yes Yes No Yes No No
4L47A83669 BR9B Intel VROC (VMD NVMe RAID) Standard Yes Yes Yes Yes Yes No
4L47A39164 B96G Intel VROC (VMD NVMe RAID) Premium Yes Yes Yes Yes Yes Yes

Configuration notes:
If a feature code is ordered in a CTO build, the VROC functionality is enabled in the factory. For field
upgrades, order a part number and it will be fulfilled as a Feature on Demand (FoD) license which can
then be activated via the XCC management processor user interface.
Intel VROC NVMe is supported on all Intel Xeon Scalable processors
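The RAID-level and drive-count rules described in this section can be expressed as a small validation helper. This is a sketch only: the tier names are shorthand for the Table 18 offerings, and the RAID 5 minimum of three drives is an assumption from common RAID practice rather than a statement from this guide.

```python
# Encode the VROC NVMe array rules from this section: which RAID levels each
# license tier allows, plus the per-array drive-count limits for RAID 1 and 10.
# Tier names are illustrative; the RAID 5 three-drive minimum is assumed.

TIER_LEVELS = {
    "RAID1 Only": {1},
    "Standard":   {0, 1, 10},
    "Premium":    {0, 1, 5, 10},
}

def valid_array(tier, raid_level, drives):
    """True if the array is permitted under the selected VROC license tier."""
    if raid_level not in TIER_LEVELS[tier]:
        return False
    if raid_level == 1 and drives != 2:    # RAID 1 limited to 2 drives per array
        return False
    if raid_level == 10 and drives != 4:   # RAID 10 limited to 4 drives per array
        return False
    if raid_level == 0 and drives < 1:
        return False
    if raid_level == 5 and drives < 3:     # assumed RAID 5 minimum
        return False
    return True

print(valid_array("Standard", 5, 4))  # False: RAID 5 needs the Premium license
print(valid_array("Premium", 10, 4))  # True
```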

Virtualization support: Virtualization support for Intel VROC is as follows:
VROC SATA RAID (RSTe): VROC SATA RAID is supported with Windows, RHEL and SLES;
however, it is not supported by virtualization hypervisors such as ESXi, KVM, Xen, and Hyper-V.
Virtualization is only supported on the onboard SATA ports in AHCI (non-RAID) mode.
VROC (VMD) NVMe RAID: VROC (VMD) NVMe RAID is supported by ESXi, KVM, Xen, and
Hyper-V. ESXi support is limited to RAID 1 only; other RAID levels are not supported. Windows and
Linux OSes support VROC NVMe RAID, both for host boot functions and for guest OS functions, and
RAID 0, 1, 5, and 10 are supported. On ESXi, VROC is supported with both boot and data drives.

For specifications about the RAID adapters and HBAs supported by the SR860 V3, see the ThinkSystem
RAID Adapter and HBA Comparison, available from:
https://lenovopress.com/lp1288-lenovo-thinksystem-raid-adapter-and-hba-reference#sr860-v3-support=SR860%2520V3
For details about these adapters, see the relevant product guide:
SAS HBAs: https://lenovopress.com/servers/options/hba
RAID adapters: https://lenovopress.com/servers/options/raid

Internal drive options
The following tables list the drive options for internal storage of the server.
2.5-inch hot-swap drives:
2.5-inch hot-swap 12 Gb SAS HDDs
2.5-inch hot-swap 24 Gb SAS SSDs
2.5-inch hot-swap 12 Gb SAS SSDs
2.5-inch hot-swap 6 Gb SATA SSDs
2.5-inch hot-swap PCIe 5.0 NVMe SSDs
2.5-inch hot-swap PCIe 4.0 NVMe SSDs
2.5-inch 7mm hot-swap drives:
7mm 2.5-inch hot-swap 6 Gb SATA SSDs
7mm 2.5-inch hot-swap PCIe 4.0 NVMe SSDs

M.2 drives:
M.2 SATA drives
M.2 PCIe 4.0 NVMe drives

M.2 drive support: The use of M.2 drives requires an additional adapter as described in the M.2 drives
subsection.

SED support: The tables include a column to indicate which drives support SED encryption. The
encryption functionality can be disabled if needed. Note: Not all SED-enabled drives have "SED" in the
description.
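The endurance classes used in the tables below (DWPD, Drive Writes Per Day) can be converted to an approximate total-bytes-written figure. A sketch that assumes a 5-year warranty period, which is an assumption; check each drive's specification sheet for its actual rated endurance.

```python
# Approximate SSD endurance from a DWPD rating.
# DWPD = full Drive Writes Per Day over the warranty period; the 5-year
# warranty default here is an assumption, not a figure from this guide.

def endurance_tbw(capacity_tb, dwpd, warranty_years=5):
    """Approximate endurance in TB written over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# e.g. a 3.84 TB read-intensive drive rated at 1 DWPD:
print(f"{endurance_tbw(3.84, 1):.0f} TBW")  # 7008 TBW
```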

Table 19. 2.5-inch hot-swap 12 Gb SAS HDDs
Part number Feature code Description SED support Max Qty
2.5-inch hot-swap HDDs - 12 Gb SAS 15K
7XB7A00021 AULV ThinkSystem 2.5" 300GB 15K SAS 12Gb Hot Swap 512n HDD No 48
7XB7A00022 AULW ThinkSystem 2.5" 600GB 15K SAS 12Gb Hot Swap 512n HDD No 48
2.5-inch hot-swap HDDs - 12 Gb SAS 10K
7XB7A00025 AULZ ThinkSystem 2.5" 600GB 10K SAS 12Gb Hot Swap 512n HDD No 48
7XB7A00027 AUM1 ThinkSystem 2.5" 1.2TB 10K SAS 12Gb Hot Swap 512n HDD No 48
7XB7A00028 AUM2 ThinkSystem 2.5" 1.8TB 10K SAS 12Gb Hot Swap 512e HDD No 48
4XB7A83970 BRG7 ThinkSystem 2.5" 2.4TB 10K SAS 12Gb Hot Swap 512e HDD v2 No 48
2.5-inch hot-swap SED HDDs - 12 Gb SAS 10K
7XB7A00031 AUM5 ThinkSystem 2.5" 600GB 10K SAS 12Gb Hot Swap 512n HDD SED Support 48
7XB7A00033 B0YX ThinkSystem 2.5" 1.2TB 10K SAS 12Gb Hot Swap 512n HDD SED Support 48
4XB7A84038 BRG8 ThinkSystem 2.5" 2.4TB 10K SAS 12Gb Hot Swap 512e HDD FIPS v2 Support 48

Table 20. 2.5-inch hot-swap 24 Gb SAS SSDs
Part number Feature code Description SED support Max Qty
2.5-inch hot-swap SSDs - 24 Gb SAS - Mixed Use/Mainstream (3-5 DWPD)
4XB7A80340 BNW8 ThinkSystem 2.5" PM1655 800GB Mixed Use SAS 24Gb HS SSD Support 48
4XB7A80341 BNW9 ThinkSystem 2.5" PM1655 1.6TB Mixed Use SAS 24Gb HS SSD Support 48
4XB7A80342 BNW6 ThinkSystem 2.5" PM1655 3.2TB Mixed Use SAS 24Gb HS SSD Support 48
4XB7A80343 BP3K ThinkSystem 2.5" PM1655 6.4TB Mixed Use SAS 24Gb HS SSD Support 48
2.5-inch hot-swap SSDs - 24 Gb SAS - Read Intensive/Entry/Capacity (<3 DWPD)
4XB7A80318 BNWC ThinkSystem 2.5" PM1653 960GB Read Intensive SAS 24Gb HS SSD Support 48
4XB7A80319 BNWE ThinkSystem 2.5" PM1653 1.92TB Read Intensive SAS 24Gb HS SSD Support 48
4XB7A80320 BNWF ThinkSystem 2.5" PM1653 3.84TB Read Intensive SAS 24Gb HS SSD Support 48
4XB7A80321 BP3E ThinkSystem 2.5" PM1653 7.68TB Read Intensive SAS 24Gb HS SSD Support 48
4XB7A80322 BP3J ThinkSystem 2.5" PM1653 15.36TB Read Intensive SAS 24Gb HS SSD Support 48
4XB7A80323 BP3D ThinkSystem 2.5" PM1653 30.72TB Read Intensive SAS 24Gb HS SSD Support 48

Table 21. 2.5-inch hot-swap 12 Gb SAS SSDs
Part number Feature code Description SED support Max Qty
2.5-inch hot-swap SSDs - 12 Gb SAS - Write Intensive/Performance (10+ DWPD)
4XB7A83216 BR0Y ThinkSystem 2.5" Nytro 3750 1.6TB Write Intensive SAS 12Gb HS SSD Support 48
4XB7A83217 BR0X ThinkSystem 2.5" Nytro 3750 3.2TB Write Intensive SAS 12Gb HS SSD Support 48

Table 22. 2.5-inch hot-swap 6 Gb SATA SSDs
Part number Feature code Description SED support Max Qty
2.5-inch hot-swap SSDs - 6 Gb SATA - Mixed Use/Mainstream (3-5 DWPD)
4XB7A82289 BQ21 ThinkSystem 2.5" 5400 MAX 480GB Mixed Use SATA 6Gb HS SSD Support 48
4XB7A82290 BQ24 ThinkSystem 2.5" 5400 MAX 960GB Mixed Use SATA 6Gb HS SSD Support 48
4XB7A82291 BQ22 ThinkSystem 2.5" 5400 MAX 1.92TB Mixed Use SATA 6Gb HS SSD Support 48
4XB7A82292 BQ23 ThinkSystem 2.5" 5400 MAX 3.84TB Mixed Use SATA 6Gb HS SSD Support 48
4XB7A17125 BA7Q ThinkSystem 2.5" S4620 480GB Mixed Use SATA 6Gb HS SSD No 48
4XB7A17126 BA4T ThinkSystem 2.5" S4620 960GB Mixed Use SATA 6Gb HS SSD No 48
4XB7A17127 BA4U ThinkSystem 2.5" S4620 1.92TB Mixed Use SATA 6Gb HS SSD No 48
4XB7A17128 BK7L ThinkSystem 2.5" S4620 3.84TB Mixed Use SATA 6Gb HS SSD No 48
2.5-inch hot-swap SSDs - 6 Gb SATA - Read Intensive/Entry (<3 DWPD)
4XB7A82258 BQ1Q ThinkSystem 2.5" 5400 PRO 240GB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A82259 BQ1P ThinkSystem 2.5" 5400 PRO 480GB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A82260 BQ1R ThinkSystem 2.5" 5400 PRO 960GB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A82261 BQ1X ThinkSystem 2.5" 5400 PRO 1.92TB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A82262 BQ1S ThinkSystem 2.5" 5400 PRO 3.84TB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A82263 BQ1T ThinkSystem 2.5" 5400 PRO 7.68TB Read Intensive SATA 6Gb HS SSD Support 48
4XB7A17072 B99D ThinkSystem 2.5" S4520 240GB Read Intensive SATA 6Gb HS SSD No 48
4XB7A17101 BA7G ThinkSystem 2.5" S4520 480GB Read Intensive SATA 6Gb HS SSD No 48
4XB7A17102 BA7H ThinkSystem 2.5" S4520 960GB Read Intensive SATA 6Gb HS SSD No 48
4XB7A17103 BA7J ThinkSystem 2.5" S4520 1.92TB Read Intensive SATA 6Gb HS SSD No 48
4XB7A17104 BK77 ThinkSystem 2.5" S4520 3.84TB Read Intensive SATA 6Gb HS SSD No 48
4XB7A17105 BK78 ThinkSystem 2.5" S4520 7.68TB Read Intensive SATA 6Gb HS SSD No 48

Table 23. 2.5-inch hot-swap PCIe 5.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
2.5-inch SSDs - U.2 PCIe 5.0 NVMe - Mixed Use/Mainstream (3-5 DWPD)
4XB7A93888 C0ZM ThinkSystem 2.5" U.2 CD8P 1.6TB Mixed Use NVMe PCIe 5.0 x4 HS Support 24
SSD
4XB7A93889 C0ZL ThinkSystem 2.5" U.2 CD8P 3.2TB Mixed Use NVMe PCIe 5.0 x4 HS Support 24
SSD
4XB7A93890 C0ZK ThinkSystem 2.5" U.2 CD8P 6.4TB Mixed Use NVMe PCIe 5.0 x4 HS Support 24
SSD
4XB7A93891 C0ZJ ThinkSystem 2.5" U.2 CD8P 12.8TB Mixed Use NVMe PCIe 5.0 x4 HS Support 24
SSD
2.5-inch SSDs - U.2 PCIe 5.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A93480 C0BB ThinkSystem 2.5" U.2 CD8P 1.92TB Read Intensive NVMe PCIe 5.0 x4 Support 24
HS SSD
4XB7A93481 C0BA ThinkSystem 2.5" U.2 CD8P 3.84TB Read Intensive NVMe PCIe 5.0 x4 Support 24
HS SSD
4XB7A93482 C0B9 ThinkSystem 2.5" U.2 CD8P 7.68TB Read Intensive NVMe PCIe 5.0 x4 Support 24
HS SSD
4XB7A93483 C0B8 ThinkSystem 2.5" U.2 CD8P 15.36TB Read Intensive NVMe PCIe 5.0 x4 Support 24
HS SSD
4XB7A93484 C0B7 ThinkSystem 2.5" U.2 CD8P 30.72TB Read Intensive NVMe PCIe 5.0 x4 Support 24
HS SSD
2.5-inch SSDs - U.3 PCIe 5.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A82366 BTPZ ThinkSystem 2.5" U.3 PM1743 1.92TB Read Intensive NVMe PCIe 5.0 Support 24
x4 HS SSD
4XB7A82367 BTQ0 ThinkSystem 2.5" U.3 PM1743 3.84TB Read Intensive NVMe PCIe 5.0 Support 24
x4 HS SSD
4XB7A82368 BTQ1 ThinkSystem 2.5" U.3 PM1743 7.68TB Read Intensive NVMe PCIe 5.0 Support 24
x4 HS SSD
4XB7A82369 BTQ2 ThinkSystem 2.5" U.3 PM1743 15.36TB Read Intensive NVMe PCIe 5.0 Support 24
x4 HS SSD

Table 24. 2.5-inch hot-swap PCIe 4.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Write Intensive/Performance (10+ DWPD)
4XB7A17158 BKKY ThinkSystem 2.5" U.2 P5800X 400GB Write Intensive NVMe PCIe 4.0 x4 No 24
HS SSD
4XB7A17159 BKKZ ThinkSystem 2.5" U.2 P5800X 800GB Write Intensive NVMe PCIe 4.0 x4 No 24
HS SSD
4XB7A17160 BMM8 ThinkSystem 2.5" U.2 P5800X 1.6TB Write Intensive NVMe PCIe 4.0 x4 No 24
HS SSD
2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Mixed Use/Mainstream (3-5 DWPD)
4XB7A17129 BNEG ThinkSystem 2.5" U.2 P5620 1.6TB Mixed Use NVMe PCIe 4.0 x4 HS Support 24
SSD

4XB7A17133 BNEZ ThinkSystem 2.5" U.2 P5620 6.4TB Mixed Use NVMe PCIe 4.0 x4 HS Support 24
SSD
4XB7A17136 BA4V ThinkSystem 2.5" U.2 P5620 12.8TB Mixed Use NVMe PCIe 4.0 x4 HS Support 24
SSD
2.5-inch SSDs - U.3 PCIe 4.0 NVMe - Mixed Use/Mainstream (3-5 DWPD)
4XB7A95054 C2BG ThinkSystem 2.5" U.3 7500 MAX 800GB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A95055 C2BV ThinkSystem 2.5" U.3 7500 MAX 1.6TB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A95056 C2BW ThinkSystem 2.5" U.3 7500 MAX 3.2TB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A95057 C2BF ThinkSystem 2.5" U.3 7500 MAX 6.4TB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A95058 C2BX ThinkSystem 2.5" U.3 7500 MAX 12.8TB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A13967 BNEJ ThinkSystem 2.5" U.3 7450 MAX 1.6TB Mixed Use NVMe PCIe 4.0 x4 Support 24
HS SSD
2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A93075 C1WJ ThinkSystem 2.5" U.2 P5336 30.72TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A93076 C1WK ThinkSystem 2.5" U.2 P5336 61.44TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A90100 BXMA ThinkSystem 2.5" U.2 PM9A3 1.92TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A90101 BXM9 ThinkSystem 2.5" U.2 PM9A3 3.84TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A13941 BMGD ThinkSystem 2.5" U.2 P5520 1.92TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A13943 BNEF ThinkSystem 2.5" U.2 P5520 7.68TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
4XB7A13631 BNEQ ThinkSystem 2.5" U.2 P5520 15.36TB Read Intensive NVMe PCIe 4.0 x4 Support 24
HS SSD
2.5-inch SSDs - U.3 PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A95049 C2BY ThinkSystem 2.5" U.3 7500 PRO 960GB Read Intensive NVMe PCIe 4.0 Support 24
x4 HS SSD
4XB7A95050 C2BR ThinkSystem 2.5" U.3 7500 PRO 1.92TB Read Intensive NVMe PCIe 4.0 Support 24
x4 HS SSD
4XB7A95051 C2BS ThinkSystem 2.5" U.3 7500 PRO 3.84TB Read Intensive NVMe PCIe 4.0 Support 24
x4 HS SSD
4XB7A95052 C2BT ThinkSystem 2.5" U.3 7500 PRO 7.68TB Read Intensive NVMe PCIe 4.0 Support 24
x4 HS SSD
4XB7A95053 C2BU ThinkSystem 2.5" U.3 7500 PRO 15.36TB Read Intensive NVMe PCIe Support 24
4.0 x4 HS SSD
4XB7A79647 BNF2 ThinkSystem 2.5" U.3 7450 PRO 1.92TB Read Intensive NVMe PCIe 4.0 Support 24
x4 HS SSD

Table 25. 7mm 2.5-inch hot-swap 6 Gb SATA SSDs
Part number Feature code Description SED support Max Qty
7mm 2.5-inch hot-swap SSDs - 6 Gb SATA - Read Intensive/Entry (<3 DWPD)
4XB7A82264 BQ1U ThinkSystem 7mm 5400 PRO 240GB Read Intensive SATA 6Gb HS Support 2
SSD
4XB7A82265 BQ1V ThinkSystem 7mm 5400 PRO 480GB Read Intensive SATA 6Gb HS Support 2
SSD
4XB7A82266 BQ1W ThinkSystem 7mm 5400 PRO 960GB Read Intensive SATA 6Gb HS Support 2
SSD
4XB7A17106 BK79 ThinkSystem 7mm S4520 240GB Read Intensive SATA 6Gb HS SSD No 2
4XB7A17107 BK7A ThinkSystem 7mm S4520 480GB Read Intensive SATA 6Gb HS SSD No 2
4XB7A17108 BK7B ThinkSystem 7mm S4520 960GB Read Intensive SATA 6Gb HS SSD No 2

Table 26. 7mm 2.5-inch hot-swap PCIe 4.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
7mm 2.5-inch hot-swap SSDs - PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A82853 BPZ4 ThinkSystem 7mm U.3 7450 PRO 960GB Read Intensive NVMe PCIe Support 2
4.0 x4 HS SSD
4XB7A82855 BPZ5 ThinkSystem 7mm U.3 7450 PRO 1.92TB Read Intensive NVMe PCIe Support 2
4.0 x4 HS SSD
4XB7A82856 BPZ6 ThinkSystem 7mm U.3 7450 PRO 3.84TB Read Intensive NVMe PCIe Support 2
4.0 x4 HS SSD

Table 27. M.2 SATA drives
Part number Feature code Description SED support Max Qty
M.2 SSDs - 6 Gb SATA - Read Intensive/Entry (<3 DWPD)
4XB7A89422 BYF7 ThinkSystem M.2 ER3 240GB Read Intensive SATA 6Gb NHS SSD Support 2
4XB7A90049 BYF8 ThinkSystem M.2 ER3 480GB Read Intensive SATA 6Gb NHS SSD Support 2
4XB7A90230 BYF9 ThinkSystem M.2 ER3 960GB Read Intensive SATA 6Gb NHS SSD Support 2
4XB7A82286 BQ1Z ThinkSystem M.2 5400 PRO 240GB Read Intensive SATA 6Gb NHS Support 2
SSD
4XB7A82287 BQ1Y ThinkSystem M.2 5400 PRO 480GB Read Intensive SATA 6Gb NHS Support 2
SSD
4XB7A82288 BQ20 ThinkSystem M.2 5400 PRO 960GB Read Intensive SATA 6Gb NHS Support 2
SSD
7N47A00130 AUUV ThinkSystem M.2 128GB SATA 6Gbps Non-Hot Swap SSD No 2

Table 28. M.2 PCIe 4.0 NVMe drives
Part number Feature code Description SED support Max Qty
M.2 SSDs - PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A90102 BXMH ThinkSystem M.2 PM9A3 960GB Read Intensive NVMe PCIe 4.0 x4 Support 2
NHS SSD
4XB7A82636 BS2P ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 Support 2
NHS SSD
4XB7A13999 BKSR ThinkSystem M.2 7450 PRO 960GB Read Intensive NVMe PCIe 4.0 x4 Support 2
NHS SSD

USB flash drive
For general portable storage needs, the server also supports the USB flash drive option that is listed in the
following table.

Table 29. USB memory key
Part number Feature Description
4X77A77065 BNWN ThinkSystem USB 32GB USB 3.0 Flash Drive

Internal backup units
The server does not support any internal backup units, such as tape drives or RDX drives.

Optical drives
The server does not support an internal optical drive.
An external USB optical drive is available, listed in the following table.

Table 30. External optical drive
Part number Feature code Description
7XA7A05926 AVV8 ThinkSystem External USB DVD RW Optical Disk Drive

I/O expansion
The SR860 V3 supports up to 20 PCIe slots: 18x regular PCIe slots (either Gen4 or Gen5) plus two OCP 3.0
slots with Gen5 interfaces.

Full length adapter support: For full-length adapter support, you must select base BT2K. This base
selection includes the lower winged heatsinks for the rear processors, which enable full-length adapter
support. The standard base (BT2J) only supports half-length and low-profile adapters and cannot be
upgraded in the field to support full-length adapters. See the Models section for more information about
base feature codes.

Topics in this section:
Riser & slot support
Riser ordering information
Riser supported combinations

Figure 8. Slots in the SR860 V3

Riser & slot support


The SR860 V3 server supports Gen4-only or Gen5+Gen4 slot configurations to suit the needs of installed
applications.
18x Gen4 + 2x OCP Gen5 slots
For applications that require as many slots as possible, the SR860 V3 supports a configuration with 18x Gen4
slots plus 2x OCP slots. The configuration supports 4x double-wide GPUs and optionally supports 7mm hot-
swap drive bays installed in place of slot 20.
OCP slots:
Slot 1: Gen5 x16 OCP 3.0 slot (CPU 1; note: the OCP 3.0 slot operates at PCIe Gen4)
Slot 2: Gen5 x16 OCP 3.0 slot (CPU 2; note: the OCP 3.0 slot operates at PCIe Gen4)
Riser 1:
Slot 3: Gen4 x8 FHFL (CPU 1) (Not present if slot 4 holds a double-wide GPU)
Slot 4: Gen4 x16 FHFL (CPU 4) (Supports a double-wide GPU)
Slot 5: Gen4 x8 FHFL (CPU 4) (Not present if slot 6 holds a double-wide GPU)
Slot 6: Gen4 x16 FHFL (CPU 1) (Supports a double-wide GPU)
Slot 7: Gen4 x8 FHHL (CPU 4)
Slot 8: Gen4 x16 FHHL (CPU 1)
Riser 2:
Slot 9: Gen4 x8 HHHL (CPU 4)
Slot 10: Gen4 x8 HHHL (CPU 4)
Slot 11: Gen4 x8 HHHL (CPU 1)
Slot 12: Gen4 x8 HHHL (CPU 3)
Slot 13: Gen4 x8 HHHL (CPU 3)
Slot 14: Gen4 x8 HHHL (CPU 2)
Riser 3:
Slot 15: Gen4 x8 FHFL (CPU 2) (Not present if slot 16 holds a double-wide GPU)
Slot 16: Gen4 x16 FHFL (CPU 3) (Supports a double-wide GPU)
Slot 17: Gen4 x8 FHFL (CPU 3) (Not present if slot 18 holds a double-wide GPU)
Slot 18: Gen4 x16 FHFL (CPU 2) (Supports a double-wide GPU)
Slot 19: Gen4 x8 FHHL (CPU 3)
Slot 20: Gen4 x16 FHHL (CPU 2) (Not present if the 7mm drive bays are selected)



The 18-slot configuration is shown in the following figure. Blue slots are Gen4 and green slots are Gen5. The
red shading indicates the slots where double-wide GPUs are supported. The processor that each slot is
connected to is also shown in the figure.

Figure 9. Slot configuration with 18x Gen4 slots


12x Gen5 + 4x Gen4 + 2x OCP Gen5 slots
For applications that require PCIe Gen5 slots, the SR860 V3 supports a configuration with 12x Gen5 slots, 4x
Gen4 slots, plus 2x OCP slots. The configuration supports 4x double-wide GPUs and optionally supports 7mm
hot-swap drive bays installed in place of slot 20.
OCP slots:
Slot 1: Gen5 x16 OCP 3.0 slot (CPU 1; note: the OCP 3.0 slot operates at PCIe Gen4)
Slot 2: Gen5 x16 OCP 3.0 slot (CPU 2; note: the OCP 3.0 slot operates at PCIe Gen4)
Riser 1:
Slot 3: Gen5 x8 FHFL (CPU 1) (Not present if slot 4 holds a double-wide GPU)
Slot 4: Gen5 x16 FHFL (CPU 4) (Supports a double-wide GPU)
Slot 5: Empty
Slot 6: Gen5 x16 FHFL (CPU 1) (Supports a double-wide GPU)
Slot 7: Gen5 x16 FHHL (CPU 4)
Slot 8: Gen4 x16 FHHL (CPU 1)
Riser 2:
Slot 9: Gen5 x8 HHHL (CPU 4)
Slot 10: Gen5 x8 HHHL (CPU 4)
Slot 11: Gen4 x8 HHHL (CPU 1)
Slot 12: Gen4 x8 HHHL (CPU 3)
Slot 13: Gen5 x8 HHHL (CPU 3)
Slot 14: Gen5 x8 HHHL (CPU 2)
Riser 3:
Slot 15: Gen5 x8 FHFL (CPU 2) (Not present if slot 16 holds a double-wide GPU)
Slot 16: Gen5 x16 FHFL (CPU 3) (Supports a double-wide GPU)
Slot 17: Empty
Slot 18: Gen5 x16 FHFL (CPU 2) (Supports a double-wide GPU)
Slot 19: Gen5 x16 FHHL (CPU 3)
Slot 20: Gen4 x16 FHHL (CPU 2) (Not present if the 7mm drive bays are selected)
The 16-slot configuration (12x Gen5 slots + 4x Gen4 slots) is shown in the following figure. Blue slots are
Gen4 and green slots are Gen5. The red shading indicates the slots where double-wide GPUs are supported.
The processor that each slot is connected to is also shown in the figure.



Figure 10. Slot configuration with 12x Gen5 slots + 4x Gen4 slots
4x Gen4 + 2x OCP Gen5 slots
For applications that don't require many slots, the SR860 V3 also supports a configuration with 4x Gen4 slots
plus 2x OCP slots. The configuration optionally supports 7mm hot-swap drive bays installed in riser 3.
OCP slots:
Slot 1: Gen5 x16 OCP 3.0 slot (CPU 1; note: the OCP 3.0 slot operates at PCIe Gen4)
Slot 2: Gen5 x16 OCP 3.0 slot (CPU 2; note: the OCP 3.0 slot operates at PCIe Gen4)
Riser 1:
Slots 3-6: Empty
Slot 7: Gen4 x8 FHHL (CPU 1)
Slot 8: Gen4 x8 FHHL (CPU 1)
Riser 2:
Slots 9-14: Empty
Riser 3:
Slots 15-18: Empty
Slot 19: Gen4 x8 FHHL (CPU 2)
Slot 20: Gen4 x8 FHHL (CPU 2)
The 4-slot configuration is shown in the following figure. Blue slots are Gen4 and green slots are Gen5. The
processor that each slot is connected to is also shown in the figure.



Figure 11. Slot configuration with 4x Gen4 slots

Riser ordering information


The riser cards supported are listed in the following table. The table also lists the total slots and what type of
slots each riser card includes. All of the x8 slots have a physical x16 connector.

Risers with 7mm drive cages: As listed in the table, some risers include support for two 7mm hot-swap drive bays installed in Riser 3. The part numbers and feature codes include the cages and cables needed for the 7mm drive bays; however, the 7mm drive bays themselves (backplanes) must be ordered separately. See the 7mm drives section for details.

Table 31. Riser part numbers (Blue = Gen4, green = Gen5)


Part Feature Total G4 G4 G5 G5 7mm
number code Description slots x8 x16 x8 x16 drives
FHHL risers for Riser 1 and 3
4XC7A86627 BT3T ThinkSystem SR860 V3 x8/x8 PCIe G4 Riser 1/3 2 2 0 0 0 No
FHHL Option Kit
4XC7A86623 BT3V ThinkSystem SR860 V3 3 x16 & 3 x8 PCIe G4 Riser 6 3 3 0 0 No
1/3 FHFL Option Kit
4XC7A86624 BT3Y ThinkSystem SR860 V3 4 x16 & 1 x8 PCIe G5 Riser 5 0 1 1 3 No
1/3 FHFL Option Kit
FHHL risers with 7mm drive cages for Riser 3 (include 2x 7mm drive cages; order drive bays separately - See
the 7mm drives section)
4XC7A87077 BT3U ThinkSystem SR860 V3 7mm/x8/x8 PCIe G4 Riser 3 2 2 0 0 0 Yes
FHHL Option Kit
4XC7A87075 BT3X ThinkSystem SR860 V3 2 x16 & 3 x8 + 7mm PCIe G4 5 3 2 0 0 Yes
Riser 3 FHFL Option Kit
4XC7A87076 BT40 ThinkSystem SR860 V3 3 x16 & 1 x8 + 7mm PCIe G5 4 0 0 1 3 Yes
Riser 3 FHFL Option Kit
HHHL risers for Riser 2
4XC7A86625 BT3W ThinkSystem SR860 V3 6 x8 PCIe G4 Riser 2 HHHL 6 6 0 0 0 No
Option Kit
4XC7A86626 BT3Z ThinkSystem SR860 V3 6 x8 PCIe G5 Riser 2 HHHL 6 0 0 6 0 No
Option Kit



Riser supported combinations
The SR860 V3 supports a mix of Gen5 and Gen4 PCIe slots using the combinations listed in the following
table. The table also indicates which configurations support 7mm drives and which configurations support
double-wide (DW) GPUs.

Field upgrades: Riser cards can be added using option part numbers, provided the target configuration is listed as supported in the table. Part numbers are listed in the Riser ordering information section.

Table 32. Riser combinations (Blue = Gen4, green = Gen5)


Slots DW GPUs
Riser Total G4 G4 G5 G5 7mm GPU Remaining
count slots x8 x16 x8 x16 support support slots** Riser 1 Riser 2 Riser 3
1 2 2 0 0 0 No No - ThinkSystem Empty Empty
SR860 V3 x8/x8
PCIe G4 Riser 1/3
FHHL, BT3T
1 6 3 3 0 0 No Yes (2) 4 ThinkSystem Empty Empty
SR860 V3 3 x16
& 3 x8 PCIe G4
Riser 1/3 FHFL,
BT3V
1 5 0 1 1 3 No Yes (2) 3 ThinkSystem Empty Empty
SR860 V3 4 x16
& 1 x8 PCIe G5
Riser 1/3 FHFL,
BT3Y
1 2 2 0 0 0 Yes No - Empty Empty ThinkSystem
SR860 V3
7mm/x8/x8 PCIe
G4 Riser 3 FHHL,
BT3U
1 5 3 2 0 0 Yes Yes (2) 3 Empty Empty ThinkSystem
SR860 V3 2 x16 &
3 x8 + 7mm PCIe
G4 Riser 3 FHFL,
BT3X
1 4 0 0 1 3 Yes Yes (2) 3 Empty Empty ThinkSystem
SR860 V3 3 x16 &
1 x8 + 7mm PCIe
G5 Riser 3 FHFL,
BT40
2 4 4 0 0 0 No No - ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3 x8/x8
PCIe G4 Riser 1/3 PCIe G4 Riser 1/3
FHHL, BT3T FHHL, BT3T
2 8 5 3 0 0 No Yes (2) 6 ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3 3 x16 &
PCIe G4 Riser 1/3 3 x8 PCIe G4 Riser
FHHL, BT3T 1/3 FHFL, BT3V
2 7 2 1 1 3 No Yes (2) 6 ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3 4 x16 &
PCIe G4 Riser 1/3 1 x8 PCIe G5 Riser
FHHL, BT3T 1/3 FHFL, BT3Y

2 12 6 6 0 0 No Yes (4) 8 ThinkSystem Empty ThinkSystem
SR860 V3 3 x16 SR860 V3 3 x16 &
& 3 x8 PCIe G4 3 x8 PCIe G4 Riser
Riser 1/3 FHFL, 1/3 FHFL, BT3V
BT3V
2 11 3 4 1 3 No No* No* ThinkSystem Empty ThinkSystem
SR860 V3 3 x16 SR860 V3 4 x16 &
& 3 x8 PCIe G4 1 x8 PCIe G5 Riser
Riser 1/3 FHFL, 1/3 FHFL, BT3Y
BT3V
2 11 3 4 1 3 No No* No* ThinkSystem Empty ThinkSystem
SR860 V3 4 x16 SR860 V3 3 x16 &
& 1 x8 PCIe G5 3 x8 PCIe G4 Riser
Riser 1/3 FHFL, 1/3 FHFL, BT3V
BT3Y
2 10 0 2 2 6 No Yes (4) 8 ThinkSystem Empty ThinkSystem
SR860 V3 4 x16 SR860 V3 4 x16 &
& 1 x8 PCIe G5 1 x8 PCIe G5 Riser
Riser 1/3 FHFL, 1/3 FHFL, BT3Y
BT3Y
2 8 8 0 0 0 No No - ThinkSystem ThinkSystem Empty
SR860 V3 x8/x8 SR860 V3 6 x8
PCIe G4 Riser 1/3 PCIe G4 Riser
FHHL, BT3T 2 HHHL, BT3W
2 8 4 0 4 0 No No - ThinkSystem ThinkSystem Empty
SR860 V3 x8/x8 SR860 V3 6 x8
PCIe G4 Riser 1/3 PCIe G5 Riser
FHHL, BT3T 2 HHHL, BT3Z
2 12 9 3 0 0 No Yes (2) 10 ThinkSystem ThinkSystem Empty
SR860 V3 3 x16 SR860 V3 6 x8
& 3 x8 PCIe G4 PCIe G4 Riser
Riser 1/3 FHFL, 2 HHHL, BT3W
BT3V
2 12 5 3 4 0 No Yes (2) 10 ThinkSystem ThinkSystem Empty
SR860 V3 3 x16 SR860 V3 6 x8
& 3 x8 PCIe G4 PCIe G5 Riser
Riser 1/3 FHFL, 2 HHHL, BT3Z
BT3V
2 11 6 1 1 3 No Yes (2) 10 ThinkSystem ThinkSystem Empty
SR860 V3 4 x16 SR860 V3 6 x8
& 1 x8 PCIe G5 PCIe G4 Riser
Riser 1/3 FHFL, 2 HHHL, BT3W
BT3Y
2 11 2 1 5 3 No Yes (2) 10 ThinkSystem ThinkSystem Empty
SR860 V3 4 x16 SR860 V3 6 x8
& 1 x8 PCIe G5 PCIe G5 Riser
Riser 1/3 FHFL, 2 HHHL, BT3Z
BT3Y
2 4 4 0 0 0 Yes No - ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3
PCIe G4 Riser 1/3 7mm/x8/x8 PCIe
FHHL, BT3T G4 Riser 3 FHHL,
BT3U
2 7 5 2 0 0 Yes Yes (2) 5 ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3 2 x16 &
PCIe G4 Riser 1/3 3 x8 + 7mm PCIe
FHHL, BT3T G4 Riser 3 FHFL,
BT3X

2 6 2 0 1 3 Yes Yes (2) 5 ThinkSystem Empty ThinkSystem
SR860 V3 x8/x8 SR860 V3 3 x16 &
PCIe G4 Riser 1/3 1 x8 + 7mm PCIe
FHHL, BT3T G5 Riser 3 FHFL,
BT40
2 8 5 3 0 0 Yes No - ThinkSystem Empty ThinkSystem
SR860 V3 3 x16 SR860 V3
& 3 x8 PCIe G4 7mm/x8/x8 PCIe
Riser 1/3 FHFL, G4 Riser 3 FHHL,
BT3V BT3U
2 11 6 5 0 0 Yes Yes (4) 7 ThinkSystem Empty ThinkSystem
SR860 V3 3 x16 SR860 V3 2 x16 &
& 3 x8 PCIe G4 3 x8 + 7mm PCIe
Riser 1/3 FHFL, G4 Riser 3 FHFL,
BT3V BT3X
2 10 3 3 1 3 Yes No* No* ThinkSystem Empty ThinkSystem
SR860 V3 3 x16 SR860 V3 3 x16 &
& 3 x8 PCIe G4 1 x8 + 7mm PCIe
Riser 1/3 FHFL, G5 Riser 3 FHFL,
BT3V BT40
2 7 2 1 1 3 Yes Yes (2) 5 ThinkSystem Empty ThinkSystem
SR860 V3 4 x16 SR860 V3
& 1 x8 PCIe G5 7mm/x8/x8 PCIe
Riser 1/3 FHFL, G4 Riser 3 FHHL,
BT3Y BT3U
2 10 3 3 1 3 Yes No* No* ThinkSystem Empty ThinkSystem
SR860 V3 4 x16 SR860 V3 2 x16 &
& 1 x8 PCIe G5 3 x8 + 7mm PCIe
Riser 1/3 FHFL, G4 Riser 3 FHFL,
BT3Y BT3X
2 9 0 1 2 6 Yes Yes (4) 7 ThinkSystem Empty ThinkSystem
SR860 V3 4 x16 SR860 V3 3 x16 &
& 1 x8 PCIe G5 1 x8 + 7mm PCIe
Riser 1/3 FHFL, G5 Riser 3 FHFL,
BT3Y BT40
3 10 10 0 0 0 No No - ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 x8/x8
PCIe G4 Riser 1/3 PCIe G4 Riser PCIe G4 Riser 1/3
FHHL, BT3T 2 HHHL, BT3W FHHL, BT3T
3 14 11 3 0 0 No Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 3 x16 &
PCIe G4 Riser 1/3 PCIe G4 Riser 3 x8 PCIe G4 Riser
FHHL, BT3T 2 HHHL, BT3W 1/3 FHFL, BT3V
3 13 8 1 1 3 No Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 4 x16 &
PCIe G4 Riser 1/3 PCIe G4 Riser 1 x8 PCIe G5 Riser
FHHL, BT3T 2 HHHL, BT3W 1/3 FHFL, BT3Y
3 13 4 1 5 3 No Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 4 x16 &
PCIe G4 Riser 1/3 PCIe G5 Riser 1 x8 PCIe G5 Riser
FHHL, BT3T 2 HHHL, BT3Z 1/3 FHFL, BT3Y
3 18 12 6 0 0 No Yes (4) 14 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 3 x16 SR860 V3 6 x8 SR860 V3 3 x16 &
& 3 x8 PCIe G4 PCIe G4 Riser 3 x8 PCIe G4 Riser
Riser 1/3 FHFL, 2 HHHL, BT3W 1/3 FHFL, BT3V
BT3V

3 17 5 4 5 3 No No* No* ThinkSystem ThinkSystem ThinkSystem
SR860 V3 3 x16 SR860 V3 6 x8 SR860 V3 4 x16 &
& 3 x8 PCIe G4 PCIe G5 Riser 1 x8 PCIe G5 Riser
Riser 1/3 FHFL, 2 HHHL, BT3Z 1/3 FHFL, BT3Y
BT3V
3 16 6 2 2 6 No Yes (4) 14 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3 4 x16 &
& 1 x8 PCIe G5 PCIe G4 Riser 1 x8 PCIe G5 Riser
Riser 1/3 FHFL, 2 HHHL, BT3W 1/3 FHFL, BT3Y
BT3Y
3 16 2 2 6 6 No Yes (4) 14 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3 4 x16 &
& 1 x8 PCIe G5 PCIe G5 Riser 1 x8 PCIe G5 Riser
Riser 1/3 FHFL, 2 HHHL, BT3Z 1/3 FHFL, BT3Y
BT3Y
3 10 10 0 0 0 Yes No - ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3
PCIe G4 Riser 1/3 PCIe G4 Riser 7mm/x8/x8 PCIe
FHHL, BT3T 2 HHHL, BT3W G4 Riser 3 FHHL,
BT3U
3 13 11 2 0 0 Yes Yes (2) 11 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 2 x16 &
PCIe G4 Riser 1/3 PCIe G4 Riser 3 x8 + 7mm PCIe
FHHL, BT3T 2 HHHL, BT3W G4 Riser 3 FHFL,
BT3X
3 12 8 0 1 3 Yes Yes (2) 11 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 3 x16 &
PCIe G4 Riser 1/3 PCIe G4 Riser 1 x8 + 7mm PCIe
FHHL, BT3T 2 HHHL, BT3W G5 Riser 3 FHFL,
BT40
3 12 4 0 5 3 Yes Yes (2) 11 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 x8/x8 SR860 V3 6 x8 SR860 V3 3 x16 &
PCIe G4 Riser 1/3 PCIe G5 Riser 1 x8 + 7mm PCIe
FHHL, BT3T 2 HHHL, BT3Z G5 Riser 3 FHFL,
BT40
3 14 11 3 0 0 Yes Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 3 x16 SR860 V3 6 x8 SR860 V3
& 3 x8 PCIe G4 PCIe G4 Riser 7mm/x8/x8 PCIe
Riser 1/3 FHFL, 2 HHHL, BT3W G4 Riser 3 FHHL,
BT3V BT3U
3 17 12 5 0 0 Yes Yes (4) 13 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 3 x16 SR860 V3 6 x8 SR860 V3 2 x16 &
& 3 x8 PCIe G4 PCIe G4 Riser 3 x8 + 7mm PCIe
Riser 1/3 FHFL, 2 HHHL, BT3W G4 Riser 3 FHFL,
BT3V BT3X
3 16 5 3 5 3 Yes No* No* ThinkSystem ThinkSystem ThinkSystem
SR860 V3 3 x16 SR860 V3 6 x8 SR860 V3 3 x16 &
& 3 x8 PCIe G4 PCIe G5 Riser 1 x8 + 7mm PCIe
Riser 1/3 FHFL, 2 HHHL, BT3Z G5 Riser 3 FHFL,
BT3V BT40
3 13 8 1 1 3 Yes Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3
& 1 x8 PCIe G5 PCIe G4 Riser 7mm/x8/x8 PCIe
Riser 1/3 FHFL, 2 HHHL, BT3W G4 Riser 3 FHHL,
BT3Y BT3U

3 13 4 1 5 3 Yes Yes (2) 12 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3
& 1 x8 PCIe G5 PCIe G5 Riser 7mm/x8/x8 PCIe
Riser 1/3 FHFL, 2 HHHL, BT3Z G4 Riser 3 FHHL,
BT3Y BT3U
3 16 5 3 5 3 Yes No* No* ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3 2 x16 &
& 1 x8 PCIe G5 PCIe G5 Riser 3 x8 + 7mm PCIe
Riser 1/3 FHFL, 2 HHHL, BT3Z G4 Riser 3 FHFL,
BT3Y BT3X
3 15 2 1 6 6 Yes Yes (4) 13 ThinkSystem ThinkSystem ThinkSystem
SR860 V3 4 x16 SR860 V3 6 x8 SR860 V3 3 x16 &
& 1 x8 PCIe G5 PCIe G5 Riser 1 x8 + 7mm PCIe
Riser 1/3 FHFL, 2 HHHL, BT3Z G5 Riser 3 FHFL,
BT3Y BT40
* Installing double-wide GPUs in these configurations is not recommended due to the mix of Gen4 and Gen5 slots.
** For configurations that support double-wide GPUs, this is the number of slots that remain available after the maximum number of GPUs is installed.

Physically x16 slots: All of the x8 slots have a physical x16 connector, which means the slot mechanically accepts an adapter with the longer x16 edge connector. However, because the slot is electrically x8, it has only eight PCIe lanes for data transfer and delivers only the performance of an x8 slot.
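The practical difference between the electrical widths can be estimated from the PCIe link rate and 128b/130b line encoding. The following Python sketch is illustrative only; it ignores packet and protocol overhead (so real-world throughput is somewhat lower), but it shows why an electrically x8 slot delivers roughly half the throughput of an x16 slot regardless of the physical connector size:

```python
# Approximate one-direction PCIe throughput from the raw link rate.
# Gen4 signals at 16 GT/s per lane and Gen5 at 32 GT/s, both using
# 128b/130b line encoding; divide by 8 to convert bits to bytes.
def pcie_gbytes_per_s(gt_per_s: float, lanes: int) -> float:
    """Rough one-direction throughput in GB/s (ignores packet overhead)."""
    return gt_per_s * lanes * (128 / 130) / 8

# An x8 slot with a physical x16 connector is still electrically x8:
gen4_x8 = pcie_gbytes_per_s(16, 8)    # roughly 15.8 GB/s
gen4_x16 = pcie_gbytes_per_s(16, 16)  # roughly 31.5 GB/s
assert gen4_x8 == gen4_x16 / 2
```

The same arithmetic applies to the Gen5 slots at 32 GT/s per lane.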

Network adapters
The SR860 V3 has two dedicated OCP 3.0 SFF slots with PCIe 5.0 x16 host interfaces. See Figure 3 for the
location of the OCP slots.
The following table lists the supported OCP adapters. One port of each adapter can optionally be shared with
the XCC management processor for Wake-on-LAN and NC-SI support.



Table 33. OCP adapters
Feature Maximum
Part number code Description supported
Gigabit Ethernet
4XC7A08235 B5T1 ThinkSystem Broadcom 5719 1GbE RJ45 4-port OCP Ethernet Adapter 2
4XC7A88428 BW97 ThinkSystem Intel I350 1GbE RJ45 4-Port OCP Ethernet Adapter V2 2
4XC7A08277 B93E ThinkSystem Intel I350 1GbE RJ45 4-port OCP Ethernet Adapter 1
10 Gb Ethernet - 10GBASE-T
4XC7A08236 B5ST ThinkSystem Broadcom 57416 10GBASE-T 2-port OCP Ethernet Adapter 2
4XC7A08240 B5T4 ThinkSystem Broadcom 57454 10GBASE-T 4-port OCP Ethernet Adapter 2
4XC7A08278 BCD5 ThinkSystem Intel X710-T2L 10GBASE-T 2-port OCP Ethernet Adapter 2
4XC7A80268 BPPY ThinkSystem Intel X710-T4L 10GBase-T 4-Port OCP Ethernet Adapter 2
25 Gb Ethernet
4XC7A08237 BN2T ThinkSystem Broadcom 57414 10/25GbE SFP28 2-Port OCP Ethernet 2
Adapter
4XC7A80567 BPPW ThinkSystem Broadcom 57504 10/25GbE SFP28 4-Port OCP Ethernet 2
Adapter
4XC7A08294 BCD4 ThinkSystem Intel E810-DA2 10/25GbE SFP28 2-Port OCP Ethernet 2
Adapter
4XC7A80269 BP8L ThinkSystem Intel E810-DA4 10/25GbE SFP28 4-Port OCP Ethernet 2
Adapter
4XC7A62582 BE4T ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port OCP 2
Ethernet Adapter
100 Gb Ethernet
4XC7A08243 BPPX ThinkSystem Broadcom 57508 100GbE QSFP56 2-Port OCP Ethernet 2
Adapter

The following table lists additional supported network adapters that can be installed in the regular PCIe slots.

Legacy Option ROM support: The server does not support legacy option boot ROM on PCIe adapters
connected to CPU 3 or 4. See the I/O expansion section for details on which slots connect to each CPU.
For option ROM support, install the adapters in slots connected to CPU 1 or 2, or use UEFI boot mode on
those adapters instead.
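Because the legacy option ROM rule depends on which processor a slot attaches to, the slot-to-CPU assignments from the I/O expansion section can be captured in a small lookup table. The Python sketch below is illustrative, not a Lenovo tool: the mapping transcribes the 18x Gen4 + 2x OCP configuration described earlier, and the function name is an assumption:

```python
# Slot-to-CPU map for the 18x Gen4 + 2x OCP configuration
# (transcribed from the "I/O expansion" section of this guide).
SLOT_TO_CPU = {
    1: 1, 2: 2,                                 # OCP 3.0 slots
    3: 1, 4: 4, 5: 4, 6: 1, 7: 4, 8: 1,         # Riser 1
    9: 4, 10: 4, 11: 1, 12: 3, 13: 3, 14: 2,    # Riser 2
    15: 2, 16: 3, 17: 3, 18: 2, 19: 3, 20: 2,   # Riser 3
}

def legacy_oprom_ok(slot: int) -> bool:
    """True if an adapter in this slot can use legacy option boot ROM.

    The server only supports legacy option ROM on adapters attached to
    CPU 1 or CPU 2; adapters on CPU 3 or 4 must use UEFI boot mode.
    """
    return SLOT_TO_CPU[slot] in (1, 2)

# A boot adapter in slot 8 (CPU 1) is fine; slot 16 (CPU 3) is not.
assert legacy_oprom_ok(8)
assert not legacy_oprom_ok(16)
```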



Table 34. PCIe network adapters
Feature Maximum Slots
Part number code Description supported supported
Gigabit Ethernet
7ZT7A00484 AUZV ThinkSystem Broadcom 5719 1GbE RJ45 4-Port PCIe 18 All slots
Ethernet Adapter
7ZT7A00535 AUZW ThinkSystem I350-T4 PCIe 1Gb 4-Port RJ45 Ethernet 18 All slots
Adapter
10 Gb Ethernet - 10GBASE-T
7ZT7A00496 AUKP ThinkSystem Broadcom 57416 10GBASE-T 2-Port PCIe 18 All slots
Ethernet Adapter
4XC7A80266 BNWL ThinkSystem Intel X710-T2L 10GBase-T 2-Port PCIe 18 All slots
Ethernet Adapter
4XC7A79699 BMXB ThinkSystem Intel X710-T4L 10GBase-T 4-Port PCIe 18 All slots
Ethernet Adapter
25 Gb Ethernet
4XC7A08238 BK1H ThinkSystem Broadcom 57414 10/25GbE SFP28 2-port 18 All slots
PCIe Ethernet Adapter
4XC7A80566 BNWM ThinkSystem Broadcom 57504 10/25GbE SFP28 4-Port 8 4, 6, 7, 8, 16,
PCIe Ethernet Adapter 18, 19, 20
4XC7A08295 BCD6 ThinkSystem Intel E810-DA2 10/25GbE SFP28 2-Port PCIe 18 All slots
Ethernet Adapter
4XC7A80267 BP8M ThinkSystem Intel E810-DA4 10/25GbE SFP28 4-Port PCIe 8 All FH slots
Ethernet Adapter
4XC7A62580 BE4U ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2- 8 4, 6, 7, 8, 16,
Port PCIe Ethernet Adapter 18, 19, 20
100 Gb Ethernet / InfiniBand HDR100
4XC7A08297 BK1J ThinkSystem Broadcom 57508 100GbE QSFP56 2-Port 8 4, 6, 7, 8, 16,
PCIe 4 Ethernet Adapter 18, 19, 20
4XC7A08248 B8PP ThinkSystem Mellanox ConnectX-6 Dx 100GbE QSFP56 2- 8 4, 6, 7, 8, 16,
port PCIe Ethernet Adapter 18, 19, 20
4C57A14178 B4RA ThinkSystem Mellanox ConnectX-6 HDR100/100GbE 8 4, 6, 7, 8, 16,
QSFP56 2-port PCIe VPI Adapter 18, 19, 20
4C57A14177 B4R9 ThinkSystem Mellanox ConnectX-6 HDR100/100GbE 8 4, 6, 7, 8, 16,
QSFP56 1-port PCIe VPI Adapter 18, 19, 20
200 Gb Ethernet / InfiniBand HDR/NDR200
4C57A15326 B4RC ThinkSystem Mellanox ConnectX-6 HDR/200GbE QSFP56 8 4, 6, 7, 8, 16,
1-port PCIe 4 VPI Adapter 18, 19, 20
4XC7A81883 BQBN ThinkSystem NVIDIA ConnectX-7 NDR200/200GbE 8 4, 6, 7, 8, 16,
QSFP112 2-port PCIe Gen5 x16 Adapter 18, 19, 20
4C57A80293 BNDQ ThinkSystem NVIDIA PCIe Gen4 x16 Passive Aux Kit 1 18
400 Gb / InfiniBand NDR
4XC7A80289 BQ1N ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port 6 4, 6, 7, 16, 18,
PCIe Gen5 x16 InfiniBand Adapter 19

For details about these adapters, see the relevant product guide:
Ethernet adapters: https://lenovopress.com/servers/options/ethernet



InfiniBand adapters: https://lenovopress.com/servers/options/infiniband

Fibre Channel host bus adapters


The following table lists the Fibre Channel HBAs supported by the server.

Legacy Option ROM support: The server does not support legacy option boot ROM on PCIe adapters
connected to CPU 3 or 4. See the I/O expansion section for details on which slots connect to each CPU.
For option ROM support, install the adapters in slots connected to CPU 1 or 2, or use UEFI boot mode on
those adapters instead.

Table 35. Fibre Channel HBAs


Part Feature Maximum Slots
number code Description supported supported
16Gb Fibre Channel
01CV840 ATZV Emulex 16Gb Gen6 FC Dual-port HBA 18 All slots
01CV830 ATZU Emulex 16Gb Gen6 FC Single-port HBA 18 All slots
01CV760 ATZC QLogic 16Gb Enhanced Gen5 FC Dual-port HBA 18 All slots
01CV750 ATZB QLogic 16Gb Enhanced Gen5 FC Single-port HBA 18 All slots
32Gb Fibre Channel
4XC7A76498 BJ3G ThinkSystem Emulex LPe35000 32Gb 1-port PCIe Fibre Channel 18 All slots
Adapter v2
4XC7A76525 BJ3H ThinkSystem Emulex LPe35002 32Gb 2-port PCIe Fibre Channel 18 All slots
Adapter V2
4XC7A08279 BA1G ThinkSystem QLogic QLE2770 32Gb 1-Port PCIe Fibre Channel 18 All slots
Adapter
4XC7A08276 BA1F ThinkSystem QLogic QLE2772 32Gb 2-Port PCIe Fibre Channel 18 All slots
Adapter
64Gb Fibre Channel
4XC7A77485 BLC1 ThinkSystem Emulex LPe36002 64Gb 2-port PCIe Fibre Channel 18 All slots
Adapter

For more information, see the list of Lenovo Press Product Guides in the Host bus adapters category:
https://lenovopress.com/servers/options/hba



SAS adapters for external storage
The following table lists SAS HBAs and RAID adapters supported by the server for use with external storage.

Legacy Option ROM support: The server does not support legacy option boot ROM on PCIe adapters
connected to CPU 3 or 4. See the I/O expansion section for details on which slots connect to each CPU.
For option ROM support, install the adapters in slots connected to CPU 1 or 2, or use UEFI boot mode on
those adapters instead.

Table 36. Adapters for external storage


Part Feature Maximum Slots
number code Description supported supported
SAS HBA - PCIe 4.0
4Y37A09724 B8P7 ThinkSystem 440-16e SAS/SATA PCIe Gen4 12Gb HBA 18 All slots
4Y37A78837 BNWK ThinkSystem 440-8e SAS/SATA PCIe Gen4 12Gb HBA 18 All slots
RAID Adapter - PCIe 4.0
4Y37A78836 BNWJ ThinkSystem RAID 940-8e 4GB Flash PCIe Gen4 12Gb Adapter 4 All slots

For a comparison of the functions of the supported storage adapters, see the ThinkSystem RAID Adapter and
HBA Reference:
https://lenovopress.lenovo.com/lp1288#sr860-v3-support=SR860%2520V3&internal-or-external-
ports=External
The RAID 940-8e adapter uses a flash power module (supercap), and the server supports up to four supercaps. The number of 940-8e RAID adapters supported is determined by how many supercaps can be installed in the server. For example, if your configuration uses two RAID 940/9350 adapters for internal storage, then you can install only two RAID 940-8e adapters, because there is space for only four supercaps in total.
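The supercap budget described above reduces to simple arithmetic. This is an illustrative Python sketch of the four-supercap constraint; the function name is hypothetical:

```python
# The server has space for four flash power modules (supercaps) in
# total. Each RAID 940-series adapter (internal 940/9350 or external
# 940-8e) consumes one supercap, so the number of 940-8e adapters that
# can be added is whatever remains of the four-supercap budget.
SUPERCAP_BAYS = 4

def max_940_8e(internal_raid_adapters: int) -> int:
    """Remaining 940-8e adapters given internal 940/9350 adapter count."""
    if internal_raid_adapters > SUPERCAP_BAYS:
        raise ValueError("more internal RAID adapters than supercap bays")
    return SUPERCAP_BAYS - internal_raid_adapters

assert max_940_8e(0) == 4  # no internal RAID: full budget available
assert max_940_8e(2) == 2  # the example given in the text
```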
For details about these adapters, see the relevant product guide:
SAS HBAs: https://lenovopress.com/servers/options/hba
RAID adapters: https://lenovopress.com/servers/options/raid

Flash storage adapters


The SR860 V3 currently does not support PCIe Flash Storage adapters.

GPU adapters
The SR860 V3 supports the graphics processing units (GPUs) listed in the following table:

GPU support: For GPU support with CTO orders, you will need to select base BT2K. See the Models
section for details. When adding GPUs to an existing server, the server must already be configured in the
factory with full-length slots and low-profile heatsinks on the rear processors. Field upgrades of the
heatsinks and slots are not available.



Table 37. GPU adapters
Part Feature Maximum Slots NVLink Controlled
number code Description supported supported support GPU
Single-wide GPUs
4X67A84824 BS2C ThinkSystem NVIDIA L4 24GB PCIe 8 3, 4, 5, 6, 15, No Controlled
Gen4 Passive GPU 16, 17, 18
Double-wide GPUs
4X67A84823 BT87 ThinkSystem NVIDIA L40 48GB PCIe 4 4, 6, 16, 18 No Controlled
Gen4 Passive GPU
4X67A81102 BP04 ThinkSystem AMD Instinct MI210 PCIe 4 4, 6, 16, 18 No Controlled
Gen4 Passive Accelerator
NVLink Bridge
4X67A71309 BG3F ThinkSystem NVIDIA Ampere NVLink 6** - - -
2-Slot Bridge
** 3 NVLink Bridges per pair of supported double-wide GPUs
For details about these GPUs, see the ThinkSystem and ThinkAgile GPU Summary:
https://lenovopress.com/lp0768-thinksystem-thinkagile-gpu-summary
The following rules apply when using GPUs:
The table includes a Controlled GPU column. If a GPU is listed as Controlled, that means the GPU is
not offered in certain markets, as determined by the US Government. If a GPU is listed as No, that
means the GPU is not controlled and is available in all markets.
GPUs can be configured in CTO orders as follows:
A Controlled GPU can only be configured using one of the Base CTO models for Controlled
GPUs, as listed in the Models section.
A GPU that is not controlled can only be configured using one of the Base CTO models that is
not for Controlled GPUs, such as 7D93CTO1WW, as listed in the Models section.
Installed GPUs must be identical.
NVLink bridges are supported on certain GPUs as listed in the above table, with 3 bridges per pair of GPUs.
The use of an NVLink bridge requires that the two GPUs be installed next to each other (slots 4 & 6, or
slots 16 & 18).
For double-wide GPUs:
Base BT2K is required. See the Models section for details.
Only processors with TDP ≤ 270W are supported
Some NVIDIA A Series GPUs are available as two feature codes, one with a CEC chip and one without
a CEC chip (ones without the CEC chip have "w/o CEC" in the name). The CEC is a secondary
Hardware Root of Trust (RoT) module that provides an additional layer of security, which can be used
by customers who have high regulatory requirements or high security standards. NVIDIA uses a multi-
layered security model and hence the protection offered by the primary Root of Trust embedded in the
GPU is expected to be sufficient for most customers. The CEC defeatured products still offer Secure
Boot, Secure Firmware Update, Firmware Rollback Protection, and In-Band Firmware Update Disable.
Specifically, without the CEC chip, the GPU does not support Key Revocation or Firmware Attestation.
CEC and non-CEC GPUs of the same type of GPU can be mixed in field upgrades.
Double-wide GPUs require an auxiliary power cable. For CTO orders, the necessary auxiliary power cables
are automatically selected as part of the configuration. For field upgrades, you will also need to order the
power cable separately, as listed in the following table. One part number is needed per GPU.



Table 38. Auxiliary power cables
Part number Feature code Description
4X97A88017 BW29 ThinkSystem SR850 V3/SR860 V3 A100/A6000/MI210 GPU Power Cable Option
Kit
4X97A88016 BW28 ThinkSystem SR850 V3/SR860 V3 H100 GPU Power Cable Option Kit
4X97A88015 BW27 ThinkSystem SR850 V3/SR860 V3 A4500 GPU Power Cable Option Kit

Cooling
The server has 12 60mm hot-swap dual-rotor variable-speed fans at the front of the server, and all 12 fans
are standard in all models. The server offers N+1 fan redundancy, meaning that one fan can fail and the
server still operates normally.
Each power supply also includes an integrated fan.
The 12 front fans are installed in a 4U-high unit as shown in the following figure. The fans are installed in
six modules in vertical bays, each of which comprises two fans.

Figure 12. SR860 V3 cooling fan modules


When servicing the fan modules, you remove a module from the top of the unit (hot-swap). Although the
server supports N+1 redundancy (that is, it tolerates the failure of one fan while maintaining server
operation), it also supports the removal of two fans (one module) for the time it takes to perform a fan
replacement: remove the module, replace the fan, and reinsert the module.
The following table lists the CTO ordering information for the fan modules.

Table 39. Cooling


Feature code Description Max Qty
BT2L ThinkSystem SR860 V3 Dual Rotor System Fan (contains two fans) 6



Power supplies
The server supports up to four redundant hot-swap power supplies. Redundancy can be configured as N+1 or
N+N.

Tip: Use Lenovo Capacity Planner to determine exactly what power your server needs:
https://datacentersupport.lenovo.com/us/en/products/solutions-and-software/software/lenovo-capacity-
planner/solutions/ht504651

Table 40. Power supply options for SR860 V3


Part Feature Supported 110V
number code Description Connector quantities support
Titanium power supplies (available in all markets)
4P57A72666 BLKH ThinkSystem 1100W 230V Titanium Hot-Swap Gen2 C13 4 No
Power Supply
4P57A78359 BPK9 ThinkSystem 1800W 230V Titanium Hot-Swap Gen2 C13 2 or 4 No
Power Supply
4P57A72667 BKTJ ThinkSystem 2600W 230V Titanium Hot-Swap Gen2 C19 2 or 4 No
Power Supply v4
Platinum power supplies (available in all markets)
4P57A72671 BNFH ThinkSystem 1100W 230V/115V Platinum Hot-Swap C13 4 Yes
Gen2 Power Supply v3
4P57A26294 BMUF ThinkSystem 1800W 230V Platinum Hot-Swap Gen2 C13 2 or 4 No
Power Supply
4P57A26295 B962 ThinkSystem 2400W 230V Platinum Hot-Swap Gen2 C19 2 or 4 No
Power Supply
Power supplies for customers in China only
4P57A82017 BTTP ThinkSystem 1600W 336V HVDC CRPS Hot-Swap DC 4 No
Power Supply v1.1 (PRC)
4P57A78364 BTTN ThinkSystem 1600W -48V DC CRPS Hot-Swap Power DC 4 No
Supply v1.1 (PRC)
4P57A78363 BU4H ThinkSystem 1300W 230V/115V Platinum CRPS Hot- C13 4 Yes
Swap Power Supply v1.1 (PRC)
4P57A82024 BU4G ThinkSystem 1300W 230V/115V Platinum CRPS Hot- C13 4 Yes
Swap Power Supply v1.2 (PRC)
4P57A82018 BU4J ThinkSystem 2700W 230V Platinum CRPS Hot-Swap C19 4 No
Power Supply v1.1 (PRC)
4P57A82025 BU4K ThinkSystem 2700W 230V Platinum CRPS Hot-Swap C19 4 No
Power Supply v1.2 (PRC)

The 1100W Platinum power supply is auto-sensing and supports both 110V AC (100-127V, 50/60 Hz) and
220V AC (200-240V, 50/60 Hz) power. All other power supplies support only 220V AC power. For customers
in China, all power supplies support 240V DC.
Configuration notes:
Installed power supplies must be of identical wattage. For CRPS power supplies, part numbers cannot be
mixed.
Power supply options do not include a line cord. For server configurations, the inclusion of a power
cord is model dependent. Configure-to-order models can be configured without a power cord if desired.
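These rules can be expressed as a simple configuration check. The Python sketch below is illustrative only: the quantity table transcribes the Titanium and Platinum entries from Table 40, and it conservatively requires identical feature codes (the guide requires identical wattage, and additionally forbids mixing part numbers for CRPS supplies):

```python
# Supported installed quantities per power supply feature code,
# transcribed from Table 40 (Titanium and Platinum supplies).
SUPPORTED_QTY = {
    "BLKH": {4},     # 1100W Titanium
    "BPK9": {2, 4},  # 1800W Titanium
    "BKTJ": {2, 4},  # 2600W Titanium
    "BNFH": {4},     # 1100W Platinum
    "BMUF": {2, 4},  # 1800W Platinum
    "B962": {2, 4},  # 2400W Platinum
}

def psu_config_ok(feature_codes):
    """Check a proposed PSU configuration: all supplies identical
    (a conservative reading of the identical-wattage rule) and the
    installed quantity listed as supported."""
    if len(set(feature_codes)) != 1:  # mixed supplies: reject
        return False
    code = feature_codes[0]
    return len(feature_codes) in SUPPORTED_QTY.get(code, set())

assert psu_config_ok(["BPK9", "BPK9"])      # 2x 1800W Titanium: allowed
assert not psu_config_ok(["BLKH", "BLKH"])  # 1100W Titanium: qty 4 only
```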



Power supply LEDs
The supported hot-swap power supplies have the following LEDs:
Power input LED:
Green: The power supply is connected to the AC power source
Off: The power supply is disconnected from the AC power source or a power problem has
occurred
Power output LED:
Green: The server is on and the power supply is working normally
Off: The server is powered off, or the power supply is not working properly
Power supply error LED:
Off: The power supply is working normally
Yellow: The power supply has failed
Note: The SR860 V3 does not support Zero-output mode (also known as Standby mode) with power supplies.

Power cords
Line cords and rack power cables with C13 connectors can be ordered as listed in the following table.

115V customers: If you plan to use the 1100W power supply with a low-range (100-127V) power source,
select a power cable that is rated above 10A. Power cables that are rated at 10A or below are not
supported with low-range power.
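The low-range rule amounts to filtering the cord list by rated amperage. An illustrative Python sketch follows; the cord list is a small transcribed subset of the table below, and the function name is hypothetical:

```python
# A subset of the C13-to-C14 rack cables (feature code, rated amps,
# description), transcribed from the power cord table in this guide.
CORDS = [
    ("BPHZ", 10.0, "0.5m C13 to C14"),
    ("B0N5", 13.0, "1.0m C13 to C14"),
    ("6201", 10.0, "1.5m C13 to C14"),
    ("B0N6", 13.0, "1.5m C13 to C14"),
]

def low_range_ok(cords):
    """Return the cords usable with a low-range (100-127V) source:
    only cables rated above 10A qualify; 10A or below is unsupported."""
    return [c for c in cords if c[1] > 10.0]

# Only the 13A cables survive the filter:
assert [c[0] for c in low_range_ok(CORDS)] == ["B0N5", "B0N6"]
```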

Table 41. Power cords


Part number Feature code Description
Rack cables - C13 to C14
SL67B08593 BPHZ 0.5m, 10A/100-250V, C13 to C14 Jumper Cord
00Y3043 A4VP 1.0m, 10A/100-250V, C13 to C14 Jumper Cord
4L67A08367 B0N5 1.0m, 13A/100-250V, C13 to C14 Jumper Cord
39Y7937 6201 1.5m, 10A/100-250V, C13 to C14 Jumper Cord
4L67A08368 B0N6 1.5m, 13A/100-250V, C13 to C14 Jumper Cord
4L67A08365 B0N4 2.0m, 10A/100-250V, C13 to C14 Jumper Cord
4L67A08369 6570 2.0m, 13A/100-250V, C13 to C14 Jumper Cord
4L67A08366 6311 2.8m, 10A/100-250V, C13 to C14 Jumper Cord
4L67A08370 6400 2.8m, 13A/100-250V, C13 to C14 Jumper Cord
39Y7932 6263 4.3m, 10A/100-250V, C13 to C14 Jumper Cord
4L67A08371 6583 4.3m, 13A/100-250V, C13 to C14 Rack Power Cable
Rack cables - C13 to C14 (Y-cable)
00Y3046 A4VQ 1.345m, 2X C13 to C14 Jumper Cord, Rack Power Cable
00Y3047 A4VR 2.054m, 2X C13 to C14 Jumper Cord, Rack Power Cable
Rack cables - C13 to C20
39Y7938 6204 2.8m, 10A/100-250V, C13 to IEC 320-C20 Rack Power Cable
Rack cables - C13 to C20 (Y-cable)
47C2491 A3SW 1.2m, 16A/100-250V, 2 Short C13s to Short C20 Rack Power Cable
47C2492 A3SX 2.5m, 16A/100-250V, 2 Long C13s to Short C20 Rack Power Cable
47C2493 A3SY 2.8m, 16A/100-250V, 2 Short C13s to Long C20 Rack Power Cable
47C2494 A3SZ 4.1m, 16A/100-250V, 2 Long C13s to Long C20 Rack Power Cable
Line cords
39Y7930 6222 2.8m, 10A/250V, C13 to IRAM 2073 (Argentina) Line Cord
81Y2384 6492 4.3m 10A/220V, C13 to IRAM 2073 (Argentina) Line Cord
39Y7924 6211 2.8m, 10A/250V, C13 to AS/NZ 3112 (Australia/NZ) Line Cord
81Y2383 6574 4.3m, 10A/230V, C13 to AS/NZS 3112 (Aus/NZ) Line Cord
69Y1988 6532 2.8m, 10A/250V, C13 to NBR 14136 (Brazil) Line Cord
81Y2387 6404 4.3m, 10A/250V, C13 - 2P+Gnd (Brazil) Line Cord
39Y7928 6210 2.8m, 10A/220V, C13 to GB 2099.1 (China) Line Cord
81Y2378 6580 4.3m, 10A/220V, C13 to GB 2099.1 (China) Line Cord
39Y7918 6213 2.8m, 10A/250V, C13 to DK2-5a (Denmark) Line Cord
81Y2382 6575 4.3m, 10A/230V, C13 to DK2-5a (Denmark) Line Cord
39Y7917 6212 2.8m, 10A/230V, C13 to CEE7-VII (Europe) Line Cord
81Y2376 6572 4.3m, 10A/230V, C13 to CEE7-VII (Europe) Line Cord
39Y7927 6269 2.8m, 10A/250V, C13(2P+Gnd) (India) Line Cord
81Y2386 6567 4.3m, 10A/240V, C13 to IS 6538 (India) Line Cord
39Y7920 6218 2.8m, 10A/250V, C13 to SI 32 (Israel) Line Cord
81Y2381 6579 4.3m, 10A/230V, C13 to SI 32 (Israel) Line Cord
39Y7921 6217 2.8m, 220-240V, C13 to CEI 23-16 (Italy/Chile) Line Cord
81Y2380 6493 4.3m, 10A/230V, C13 to CEI 23-16 (Italy/Chile) Line Cord
46M2593 A1RE 2.8m, 12A/125V, C13 to JIS C-8303 (Japan) Line Cord
4L67A08362 6495 4.3m, 12A/200V, C13 to JIS C-8303 (Japan) Line Cord
39Y7926 6335 4.3m, 12A/100V, C13 to JIS C-8303 (Japan) Line Cord
39Y7922 6214 2.8m, 10A/250V, C13 to SABS 164 (S Africa) Line Cord
81Y2379 6576 4.3m, 10A/230V, C13 to SABS 164 (South Africa) Line Cord
39Y7925 6219 2.8m, 220-240V, C13 to KETI (S Korea) Line Cord
81Y2385 6494 4.3m, 12A/220V, C13 to KSC 8305 (S. Korea) Line Cord
39Y7919 6216 2.8m, 10A/250V, C13 to SEV 1011-S24507 (Swiss) Line Cord
81Y2390 6578 4.3m, 10A/230V, C13 to SEV 1011-S24507 (Sws) Line Cord
23R7158 6386 2.8m, 10A/125V, C13 to CNS 10917-3 (Taiwan) Line Cord
81Y2375 6317 2.8m, 10A/240V, C13 to CNS 10917-3 (Taiwan) Line Cord
81Y2374 6402 2.8m, 13A/125V, C13 to CNS 60799 (Taiwan) Line Cord
4L67A08363 AX8B 4.3m, 10A 125V, C13 to CNS 10917 (Taiwan) Line Cord
81Y2389 6531 4.3m, 10A/250V, C13 to 76 CNS 10917-3 (Taiwan) Line Cord
81Y2388 6530 4.3m, 13A/125V, C13 to CNS 10917 (Taiwan) Line Cord
39Y7923 6215 2.8m, 10A/250V, C13 to BS 1363/A (UK) Line Cord
81Y2377 6577 4.3m, 10A/230V, C13 to BS 1363/A (UK) Line Cord
90Y3016 6313 2.8m, 10A/120V, C13 to NEMA 5-15P (US) Line Cord
46M2592 A1RF 2.8m, 10A/250V, C13 to NEMA 6-15P Line Cord
00WH545 6401 2.8m, 13A/120V, C13 to NEMA 5-15P (US) Line Cord
4L67A08359 6370 4.3m, 10A/125V, C13 to NEMA 5-15P (US) Line Cord
4L67A08361 6373 4.3m, 10A/250V, C13 to NEMA 6-15P (US) Line Cord
4L67A08360 AX8A 4.3m, 13A/120V, C13 to NEMA 5-15P (US) Line Cord

Power cords (C19 connectors)
Line cords and rack power cables with C19 connectors can be ordered as listed in the following table.

Table 42. Power cords (C19 connectors)


Part number Feature code Description
Rack cables
4L67A86677 BPJ0 0.5m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86678 B4L0 1.0m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86679 B4L1 1.5m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86680 B4L2 2.0m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
39Y7916 6252 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable
4L67A86681 B4L3 4.3m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
Line cords
40K9777 6276 4.3m, 220-240V, C19 to IRAM 2073 (Argentina) Line cord
40K9773 6284 4.3m, 220-240V, C19 to AS/NZS 3112 (Aus/NZ) Line cord
40K9775 6277 4.3m, 250V, C19 to NBR 14136 (Brazil) Line Cord
40K9774 6288 4.3m, 220-240V, C19 to GB2099.1 (China) Line cord
40K9769 6283 4.3m, 16A/230V, C19 to IEC 309-P+N+G (Den/Sws) Line Cord
40K9766 6279 4.3m, 220-240V, C19 to CEE7-VII (European) Line cord
40K9776 6285 4.3m, 220-240V, C19 to IS6538 (India) Line cord
40K9771 6282 4.3m, 220-240V, C19 to SI 32 (Israel) Line cord
40K9768 6281 4.3m, 220-240V, C19 to CEI 23-16 (Italy) Line cord
40K9770 6280 4.3m, 220-240V, C19 to SABS 164 (South Africa) Line cord
41Y9231 6289 4.3m, 15A/250V, C19 to KSC 8305 (S. Korea) Line Cord
81Y2391 6549 4.3m, 16A/230V, C19 to SEV 1011 (Sws) Line Cord
41Y9230 6287 4.3m, 16A/250V, C19 to CNS 10917-3 (Taiwan) Line Cord
40K9767 6278 4.3m, 220-240V, C19 to BS 1363/A w/13A fuse (UK) Line Cord
40K9772 6275 4.3m, 16A/208V, C19 to NEMA L6-20P (US) Line Cord
00D7197 A1NV 4.3m, 15A/250V, C19 to NEMA 6-15P (US) Line Cord

Systems management
The SR860 V3 contains an integrated service processor, XClarity Controller 2 (XCC2), which provides
advanced control, monitoring, and alerting functions. The XCC2 is based on the AST2600 baseboard
management controller (BMC) and uses a dual-core ARM Cortex-A7 32-bit RISC service processor running at 1.2
GHz.
Topics in this section:
System I/O Board
Local management
System status with XClarity Mobile
Remote management
XCC2 Platinum
Lenovo XClarity Provisioning Manager
Lenovo XClarity Administrator
Lenovo XClarity Integrators
Lenovo XClarity Essentials
Lenovo XClarity Energy Manager
Lenovo Capacity Planner

System I/O Board


The SR860 V3 implements a separate System I/O Board that connects to the Processor Board. The location
of the System I/O Board is shown in the Components and connectors section. The System I/O Board contains
all the connectors visible at the rear of the server as shown in the following figure.

Figure 13. System I/O Board


The board also has the following components:
XClarity Controller 2, implemented using the ASPEED AST2600 baseboard management controller
(BMC).
Root of Trust (RoT) module - a daughter card that implements Platform Firmware Resiliency (PFR)
hardware Root of Trust (RoT) which enables the server to be NIST SP800-193 compliant. For more
details about PFR, see the Security section.
Connector to enable an additional redundant Ethernet connection to the XCC2 controller. The
connector is used in conjunction with the ThinkSystem V3 Management NIC Adapter Kit
(4XC7A85319). For details, see the Remote management section.
Internal USB port - allows booting an operating system from a USB key. The VMware ESXi
preloads, for example, use this port. Preloads are described in the Operating system support section.
MicroSD card port to enable the use of a MicroSD card for additional storage for use with the XCC2
controller. XCC2 can use the storage as a Remote Disc on Card (RDOC) device (up to 4GB of
storage). It can also be used to store firmware updates (including N-1 firmware history) for ease of
deployment.
Tip: Without a MicroSD card installed, the XCC2 controller will have 100MB of available RDOC
storage.
Ordering information for the supported USB drive and MicroSD card is listed in the following table.

Table 43. Media for use with the System I/O Board
Part number Feature code Description
4X77A77065 BNWN ThinkSystem USB 32GB USB 3.0 Flash Drive
4X77A77064 BNWP ThinkSystem MicroSD 32GB Class 10 Flash Memory Card
4X77A92672 C0BC ThinkSystem MicroSD 64GB Class 10 Flash Memory Card

Local management
The server offers a front operator panel with key LED status indicators, as shown in the following figure.

Tip: The Network LED only shows network activity of an installed OCP network adapter. The LED shows
activity from both OCP adapters if two are installed.

Figure 14. Front operator panel


Light path diagnostics
The server offers light path diagnostics. If an environmental condition exceeds a threshold or if a system
component fails, XCC lights LEDs inside the server to help you diagnose the problem and find the failing part.
The server has fault LEDs next to the following components:
Each memory DIMM
Each drive bay
Each power supply
External Diagnostics Handset
The SR860 V3 has a port to connect an External Diagnostics Handset as described in the preceding section.
The External Diagnostics Handset has the same functions as the Integrated Diagnostics Panel but has the
advantages of not consuming space on the front of the server plus it can be shared among many servers in
your data center. The handset has a magnet on the back of it to allow you to easily mount it on a convenient
place on any rack cabinet.

Figure 15. External Diagnostics Handset
Ordering information for the External Diagnostics Handset is listed in the following table.

Table 44. External Diagnostics Handset ordering information


Part number Feature code Description
4TA7A64874 BEUX ThinkSystem External Diagnostics Handset

Information tab
The front of the server also houses an information pull-out tab (also known as the network access tag). See
Figure 2 for the location. A label on the tab shows the network information (MAC address and other data) to
remotely access XClarity Controller.

System status with XClarity Mobile


The XClarity Mobile app includes a tethering function where you can connect your Android or iOS device to
the server via USB to see the status of the server.
The steps to connect the mobile device are as follows:
1. Enable USB Management on the server, by holding down the ID button for 3 seconds (or pressing the
dedicated USB management button if one is present)
2. Connect the mobile device via a USB cable to the server's USB port with the management symbol

3. In iOS or Android settings, enable Personal Hotspot or USB Tethering


4. Launch the Lenovo XClarity Mobile app
Once connected you can see the following information:
Server status including error logs (read only, no login required)
Server management functions (XClarity login credentials required)

Remote management
The server offers a dedicated RJ45 port at the rear of the server for remote management via the XClarity
Controller management processor. The port supports 10/100/1000 Mbps speeds.
Remote server management is provided through industry-standard interfaces:
Intelligent Platform Management Interface (IPMI) Version 2.0
Simple Network Management Protocol (SNMP) Version 3 (no SET commands; no SNMP v1)
Common Information Model (CIM-XML)
Representational State Transfer (REST) support
Redfish support (DMTF compliant)
Web browser - HTML 5-based browser interface (Java and ActiveX not required) using a responsive
design (content optimized for device being used - laptop, tablet, phone) with NLS support
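Because the XCC2 exposes a DMTF-compliant Redfish service, server state can be read programmatically from the standard ComputerSystem resource. A minimal parsing sketch; the JSON payload below is an illustrative sample, not output captured from an SR860 V3:

```python
import json

# Illustrative Redfish ComputerSystem payload (sample data, not from a real XCC2).
sample_response = json.dumps({
    "@odata.id": "/redfish/v1/Systems/1",
    "PowerState": "On",
    "Status": {"State": "Enabled", "Health": "OK"},
})

def summarize_system(payload: str) -> str:
    """Extract power state and health from a Redfish ComputerSystem resource."""
    system = json.loads(payload)
    return f"{system['PowerState']}/{system['Status']['Health']}"

print(summarize_system(sample_response))  # On/OK
```

In practice the payload would come from an authenticated HTTPS GET of `/redfish/v1/Systems/<id>` on the XCC2 management address; the parsing is identical.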
The SR860 V3 also supports the use of an OCP adapter that provides an additional redundant Ethernet
connection to the XCC2 controller. Ordering information is listed in the following table.

Table 45. Redundant System Management Port Adapter


Part number Feature code Description Maximum quantity
4XC7A85319 BTMQ ThinkSystem V3 Management NIC Adapter Kit 1

The use of this adapter allows concurrent remote access using both the connection on the adapter and the
onboard RJ45 remote management port provided by the server. The adapter and onboard port have separate
IP addresses.
Configuration rules:
In the SR860 V3, the ThinkSystem V3 Management NIC Adapter Kit is only supported in OCP slot 1
IPMI via the Ethernet port (IPMI over LAN) is supported; however, it is disabled by default. For CTO orders you
can specify whether you want the feature enabled or disabled in the factory, using the feature codes listed
in the following table.

Table 46. IPMI-over-LAN settings


Feature code Description
B7XZ Disable IPMI-over-LAN (default)
B7Y0 Enable IPMI-over-LAN
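Once IPMI-over-LAN is enabled, the XCC2 answers standard IPMI 2.0 requests from tools such as ipmitool. The sketch below only assembles the command line rather than executing it; the host address and credentials are placeholders:

```python
# Build an ipmitool command line for querying chassis power status over
# IPMI 2.0 (RMCP+/lanplus). Host and credentials below are placeholders.

def ipmi_power_status_cmd(host: str, user: str, password: str) -> list[str]:
    """Assemble an ipmitool invocation for a chassis power status query."""
    return [
        "ipmitool",
        "-I", "lanplus",   # IPMI v2.0 transport
        "-H", host,        # XCC2 management IP
        "-U", user,
        "-P", password,
        "chassis", "power", "status",
    ]

cmd = ipmi_power_status_cmd("192.0.2.10", "USERID", "PASSW0RD")
print(" ".join(cmd))
```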

XCC2 Platinum
In the SR860 V3, XCC2 has the Platinum level of features built into the server. Compared to the XCC
functions of ThinkSystem V2 and earlier systems, Platinum offers the same features as Enterprise and
Advanced levels in ThinkSystem V2, plus additional features.

DCSC tip: Even though XCC2 Platinum is a standard feature of the SR860 V3, it does not appear in the
list of feature codes for the configuration in DCSC.

XCC2 Platinum includes the following Enterprise and Advanced functions:


Remotely viewing video with graphics resolutions up to 1600x1200 at 75 Hz with up to 23 bits per pixel,
regardless of the system state
Remotely accessing the server using the keyboard and mouse from a remote client
International keyboard mapping support
Syslog alerting
Redirecting serial console via SSH
Component replacement log (Maintenance History log)
Access restriction (IP address blocking)
Lenovo SED security key management
Displaying graphics for real-time and historical power usage data and temperature
Boot video capture and crash video capture
Virtual console collaboration - Ability for up to 6 remote users to log into the remote session
simultaneously
Remote console Java client
Mapping the ISO and image files located on the local client as virtual drives for use by the server
Mounting the remote ISO and image files via HTTPS, SFTP, CIFS, and NFS
Power capping
System utilization data and graphic view
Single sign on with Lenovo XClarity Administrator
Update firmware from a repository
License for XClarity Energy Manager
XCC2 Platinum also includes the following features that are new to XCC2:
System Guard - Monitor hardware inventory for unexpected component changes, and simply log the
event or prevent booting
Enterprise Strict Security mode - Enforces CNSA 1.0 level security
Neighbor Group - Enables administrators to manage and synchronize configurations and firmware level
across multiple servers
With XCC2 Platinum, for CTO orders, you can request that System Guard be enabled in the factory and the
first configuration snapshot be recorded. To add this to an order, select the feature code listed in the following
table. The selection is made in the Security tab of the DCSC configurator.

Table 47. Enable System Guard in the factory (CTO orders)


Feature code Description
BUT2 Install System Guard

For more information about System Guard, see https://pubs.lenovo.com/xcc2/NN1ia_c_systemguard

Lenovo XClarity Provisioning Manager


Lenovo XClarity Provisioning Manager (LXPM) is a UEFI-based application embedded in ThinkSystem
servers and accessible via the F1 key during system boot.
LXPM provides the following functions:
Graphical UEFI Setup
System inventory information and VPD update
System firmware updates (UEFI and XCC)
RAID setup wizard
OS installation wizard (including unattended OS installation)
Diagnostics functions

Lenovo XClarity Administrator
Lenovo XClarity Administrator is a centralized resource management solution designed to reduce complexity,
speed response, and enhance the availability of Lenovo systems and solutions. It provides agent-free
hardware management for ThinkSystem servers, in addition to ThinkServer, System x, and Flex System
servers. The administration dashboard is based on HTML 5 and allows fast location of resources so tasks can
be run quickly.
Because Lenovo XClarity Administrator does not require any agent software to be installed on the managed
endpoints, there are no CPU cycles spent on agent execution, and no memory is used, which means that up
to 1GB of RAM and 1 - 2% CPU usage is saved, compared to a typical managed system where an agent is
required.
Lenovo XClarity Administrator is an optional software component for the SR860 V3. The software can be
downloaded and used at no charge to discover and monitor the SR860 V3 and to manage firmware upgrades.
If software support is required for Lenovo XClarity Administrator, or premium features such as configuration
management and operating system deployment are required, Lenovo XClarity Pro software subscription
should be ordered. Lenovo XClarity Pro is licensed on a per managed system basis, that is, each managed
Lenovo system requires a license.
The following table lists the Lenovo XClarity software license options.

Table 48. Lenovo XClarity Pro ordering information


Part number Feature code Description
00MT201 1339 Lenovo XClarity Pro, per Managed Endpoint w/1 Yr SW S&S
00MT202 1340 Lenovo XClarity Pro, per Managed Endpoint w/3 Yr SW S&S
00MT203 1341 Lenovo XClarity Pro, per Managed Endpoint w/5 Yr SW S&S
7S0X000HWW SAYV Lenovo XClarity Pro, per Managed Endpoint w/6 Yr SW S&S
7S0X000JWW SAYW Lenovo XClarity Pro, per Managed Endpoint w/7 Yr SW S&S

Lenovo XClarity Administrator offers the following standard features that are available at no charge:
Auto-discovery and monitoring of Lenovo systems
Firmware updates and compliance enforcement
External alerts and notifications via SNMP traps, syslog remote logging, and e-mail
Secure connections to managed endpoints
NIST 800-131A or FIPS 140-2 compliant cryptographic standards between the management solution
and managed endpoints
Integration into existing higher-level management systems such as cloud automation and orchestration
tools through REST APIs, providing extensive external visibility and control over hardware resources
An intuitive, easy-to-use GUI
Scripting with Windows PowerShell, providing command-line visibility and control over hardware
resources
Lenovo XClarity Administrator offers the following premium features that require an optional Pro license:
Pattern-based configuration management that allows you to define configurations once and apply
them repeatedly, without errors, when deploying new servers or redeploying existing servers without
disrupting the fabric
Bare-metal deployment of operating systems and hypervisors to streamline infrastructure provisioning
For more information, refer to the Lenovo XClarity Administrator Product Guide:
http://lenovopress.com/tips1200

Lenovo XClarity Integrators
Lenovo also offers software plug-in modules, Lenovo XClarity Integrators, to manage physical infrastructure
from leading external virtualization management software tools including those from Microsoft and VMware.
These integrators are offered at no charge, however if software support is required, a Lenovo XClarity Pro
software subscription license should be ordered.
Lenovo XClarity Integrators offer the following additional features:
Ability to discover, manage, and monitor Lenovo server hardware from VMware vCenter or Microsoft
System Center
Deployment of firmware updates and configuration patterns to Lenovo x86 rack servers and Flex
System from the virtualization management tool
Non-disruptive server maintenance in clustered environments that reduces workload downtime by
dynamically migrating workloads from affected hosts during rolling server updates or reboots
Greater service level uptime and assurance in clustered environments during unplanned hardware
events by dynamically triggering workload migration from impacted hosts when impending hardware
failures are predicted
For more information about all the available Lenovo XClarity Integrators, see the Lenovo XClarity
Administrator Product Guide: https://lenovopress.com/tips1200-lenovo-xclarity-administrator

Lenovo XClarity Essentials


Lenovo offers the following XClarity Essentials software tools that can help you set up, use, and maintain the
server at no additional cost:
Lenovo Essentials OneCLI
OneCLI is a collection of server management tools that uses a command line interface program to
manage firmware, hardware, and operating systems. It provides functions to collect full system health
information (including health status), configure system settings, and update system firmware and
drivers.
Lenovo Essentials UpdateXpress
The UpdateXpress tool is a standalone GUI application for firmware and device driver updates that
enables you to keep your server firmware and device drivers up to date and helps you avoid
unnecessary server outages. The tool acquires and deploys individual updates and UpdateXpress
System Packs (UXSPs), which are integration-tested bundles.
Lenovo Essentials Bootable Media Creator
The Bootable Media Creator (BOMC) tool is used to create bootable media for offline firmware updates.
For more information and downloads, visit the Lenovo XClarity Essentials web page:
http://support.lenovo.com/us/en/documents/LNVO-center

Lenovo XClarity Energy Manager


Lenovo XClarity Energy Manager (LXEM) is a power and temperature management solution for data centers.
It is an agent-free, web-based console that enables you to monitor and manage power consumption and
temperature in your data center through the management console. It enables server density and data center
capacity to be increased through the use of power capping.
LXEM is a licensed product. A single-node LXEM license is included with the XClarity Controller Platinum
version. Because the Platinum version of XCC is standard in the SR860 V3, a license for XClarity Energy
Manager is included.

For more information about XClarity Energy Manager, see the following resources:
Lenovo Support page:
https://datacentersupport.lenovo.com/us/en/solutions/lnvo-lxem
User Guide for XClarity Energy Manager:
https://pubs.lenovo.com/lxem/

Lenovo Capacity Planner


Lenovo Capacity Planner is a power consumption evaluation tool that enhances data center planning by
enabling IT administrators and pre-sales professionals to understand various power characteristics of racks,
servers, and other devices. Capacity Planner can dynamically calculate the power consumption, current,
British Thermal Unit (BTU), and volt-ampere (VA) rating at the rack level, improving the planning efficiency for
large scale deployments.
For more information, refer to the Capacity Planner web page:
http://datacentersupport.lenovo.com/us/en/solutions/lnvo-lcp

Security
Topics in this section:
Security features
Platform Firmware Resiliency - Lenovo ThinkShield
Security standards

Security features
The SR860 V3 server offers the following electronic security features:
Secure Boot function of the Intel Xeon processor
Support for Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT) - see the Platform
Firmware Resiliency section
Firmware signature processes compliant with FIPS and NIST requirements
System Guard (part of XCC2 Platinum) - Proactive monitoring of hardware inventory for unexpected
component changes
Administrator and power-on password
Integrated Trusted Platform Module (TPM) supporting TPM 2.0
Self-encrypting drives (SEDs) with support for enterprise key managers - see the SED encryption key
management section
The server is NIST SP 800-147B compliant.
The SR860 V3 server also offers the following physical security features:
Chassis intrusion switch (standard on some models, otherwise available as a field upgrade)
Lockable top cover to help prevent access to internal components
The following table lists the security options for the server.

Table 49. Security options


Part number Feature code Description
4M27A11826 BCPG ThinkSystem SR860 V3/SR850 V3/SR850 V2 Intrusion Cable Kit

For SED drives and IBM Security Key Lifecycle Manager support see the SED encryption key management
with ISKLM section.

Platform Firmware Resiliency - Lenovo ThinkShield

Lenovo's ThinkShield Security is a transparent and comprehensive approach to security that extends to all
dimensions of our data center products: from development, to supply chain, and through the entire product
lifecycle.
The ThinkSystem SR860 V3 includes Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT),
which enables the system to be NIST SP800-193 compliant. This offering enhances the protection of key
platform subsystems against unauthorized firmware updates and corruption, restores firmware to an
integral state, and closely monitors firmware for possible compromise from cyber-attacks.
PFR operates upon the following server components:
UEFI image – the low-level server firmware that connects the operating system to the server hardware
XCC image – the management “engine” software that controls and reports on the server status
separate from the server operating system
FPGA image – the code that runs the server’s lowest level hardware controller on the motherboard
The Lenovo Platform Root of Trust Hardware performs the following three main functions:
Detection – Measures the firmware and updates for authenticity
Recovery – Recovers a corrupted image to a known-safe image
Protection – Monitors the system to ensure the known-good firmware is not maliciously written
These enhanced protection capabilities are implemented using a dedicated, discrete security processor
whose implementation has been rigorously validated by leading third-party security firms. Security evaluation
results and design details are available for customer review – providing unprecedented transparency and
assurance.
The SR860 V3 includes support for Secure Boot, a UEFI firmware security feature developed by the UEFI
Forum that ensures only immutable, signed software is loaded during boot. The use of
Secure Boot helps prevent malicious code from being loaded and helps prevent attacks such as the
installation of rootkits. Lenovo offers the capability to enable Secure Boot in the factory, to ensure end-to-end
protection.
The following table lists the relevant feature code(s).

Table 50. Secure Boot options


Part number Feature code Description Purpose
CTO only BPKQ TPM 2.0 with Secure Boot Configure the system in the factory with Secure Boot enabled.
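On a running Linux system, whether Secure Boot was in fact enabled can be verified by reading the EFI SecureBoot variable via efivarfs. A minimal sketch; the byte layout (4 attribute bytes followed by the variable data) follows the efivarfs convention, and the sample bytes below are illustrative:

```python
# Standard efivarfs path for the SecureBoot variable (EFI global variable GUID).
SECUREBOOT_VAR = ("/sys/firmware/efi/efivars/"
                  "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def secure_boot_enabled(raw: bytes) -> bool:
    """Interpret raw efivarfs contents: 4 attribute bytes, then a 1-byte flag."""
    if len(raw) < 5:
        raise ValueError("unexpected efivar length")
    return raw[4] == 1  # data byte follows the 4-byte attribute header

# Example with sample bytes (attributes 0x00000006, data 0x01 -> enabled):
print(secure_boot_enabled(b"\x06\x00\x00\x00\x01"))  # True
```

On a real system the bytes would come from `open(SECUREBOOT_VAR, "rb").read()`; the interpretation is the same.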

Security standards
The SR860 V3 supports the following security standards and capabilities:
Industry Standard Security Capabilities
Intel CPU Enablement
AES-NI (Advanced Encryption Standard New Instructions)
CBnT (Converged Boot Guard and Trusted Execution Technology)
CET (Control flow Enforcement Technology)
Hardware-based side channel attack resilience enhancements
MKTME/TME (Multi-Key Total Memory Encryption)
SGX (Software Guard eXtensions)
SGX-TEM (Trusted Environment Mode)
TDX (Trust Domain Extensions)
TXT (Trusted eXecution Technology)
VT (Virtualization Technology)
XD (eXecute Disable)
Microsoft Windows Security Enablement
Credential Guard
Device Guard
Host Guardian Service
TCG (Trusted Computing Group) TPM (Trusted Platform Module) 2.0
UEFI (Unified Extensible Firmware Interface) Forum Secure Boot
Hardware Root of Trust and Security
Independent security subsystem providing platform-wide NIST SP800-193 compliant Platform
Firmware Resilience (PFR)
Management domain RoT supplemented by the Secure Boot features of XCC
Platform Security
Boot and run-time firmware integrity monitoring with rollback to known-good firmware (e.g., “self-healing”)
Non-volatile storage bus security monitoring and filtering
Resilient firmware implementation that detects and defeats unauthorized flash writes or
SMM (System Management Mode) memory incursions
Patented IPMI KCS channel privileged access authorization (USPTO Patent# 11,256,810)
Host and management domain authorization, including integration with CyberArk for enterprise
password management
KMIP (Key Management Interoperability Protocol) compliant, including support for IBM SKLM
and Thales KeySecure
Reduced “out of box” attack surface
Configurable network services
FIPS 140-3 (in progress) validated cryptography for XCC
CNSA Suite 1.0 Quantum-resistant cryptography for XCC
Lenovo System Guard
For more information on platform security, see the paper “How to Harden the Security of your
ThinkSystem Server and Management Applications” available from https://lenovopress.com/lp1260-
how-to-harden-the-security-of-your-thinksystem-server.
Standards Compliance and/or Support
NIST SP800-131A rev 2 “Transitioning the Use of Cryptographic Algorithms and Key Lengths”
NIST SP800-147B “BIOS Protection Guidelines for Servers”
NIST SP800-193 “Platform Firmware Resiliency Guidelines”
ISO/IEC 11889 “Trusted Platform Module Library”
Common Criteria TCG Protection Profile for “PC Client Specific TPM 2.0”
European Union Commission Regulation 2019/424 (“ErP Lot 9”) “Ecodesign Requirements for
Servers and Data Storage Products” Secure Data Deletion
Optional FIPS 140-2 validated Self-Encrypting Disks (SEDs) with external KMIP-based key
management
Product and Supply Chain Security
Suppliers validated through Lenovo’s Trusted Supplier Program
Developed in accordance with Lenovo’s Secure Development Lifecycle (LSDL)
Continuous firmware security validation through automated testing, including static code
analysis, dynamic network and web vulnerability testing, software composition analysis, and
subsystem-specific testing, such as UEFI security configuration validation
Ongoing security reviews by US-based security experts, with attestation letters available from
our third-party security partners
Digitally signed firmware, stored and built on US-based infrastructure and signed on US-based
Hardware Security Modules (HSMs)
TAA (Trade Agreements Act) compliant manufacturing, by default in Mexico for North American
markets with additional US and EU manufacturing options
US 2019 NDAA (National Defense Authorization Act) Section 889 compliant

Rack installation
The following table lists the rack installation options that are available for the server.

Table 51. Rack installation options


Option Feature Code Description
4XF7A86616 BTTK ThinkSystem SR860 V3 Slide Rail
4XF7A86617 BT6J ThinkSystem SR850 V3/SR860 V3 Cable Management Arm

The following table summarizes the rail kit features and specifications.

Table 52. Rail kit features and specifications summary


Feature ThinkSystem SR860 V3 Slide Rail
Part number 4XF7A86616
Rail type Full-out slide rail (ball bearing)
Toolless installation Yes
Cable Management Arm (CMA) support Optional (4XF7A86617)
In-rack server maintenance Yes
1U PDU support Yes
0U PDU support Limited*
Rack type Four-post IBM and Lenovo standard rack, complying with the IEC standard
Mounting holes Square (9.5mm), round (7.1mm)
Mounting flange thickness 2.0-3.3 mm (0.08-0.13 inches)
Distance between front and rear mounting flanges 610-903 mm (24-35.75 inches)
Rail length*** 886 mm (34.9 inches)
* For 0U PDU support, the rack must be at least 1100 mm (43.31 in.) deep without the CMA, or at least 1200
mm (47.24 in.) deep if the CMA is used.
*** Measured when mounted on the rack, from the front surface of the front mounting flange to the rearmost
point of the rail.
For additional information, see the document Rail and supported rack specifications for ThinkSystem servers,
available from:
https://www.lenovo.com/us/en/resources/data-center-solutions/brochures/thinksystem-rail-support-matrix/

Operating system support
The server supports the following operating systems:
Microsoft Windows Server 2019
Microsoft Windows Server 2022
Microsoft Windows Server 2025
Red Hat Enterprise Linux 8.6
Red Hat Enterprise Linux 8.7
Red Hat Enterprise Linux 8.8
Red Hat Enterprise Linux 8.9
Red Hat Enterprise Linux 8.10
Red Hat Enterprise Linux 9.0
Red Hat Enterprise Linux 9.1
Red Hat Enterprise Linux 9.2
Red Hat Enterprise Linux 9.3
Red Hat Enterprise Linux 9.4
Red Hat Enterprise Linux 9.5
SUSE Linux Enterprise Server 15 SP4
SUSE Linux Enterprise Server 15 SP5
SUSE Linux Enterprise Server 15 SP6
SUSE Linux Enterprise Server 15 SP4 with Xen
SUSE Linux Enterprise Server 15 SP5 with Xen
Ubuntu 20.04 LTS 64-bit
Ubuntu 22.04 LTS 64-bit
Ubuntu 24.04 LTS 64-bit
VMware ESXi 7.0 U3
VMware ESXi 8.0
VMware ESXi 8.0 U1
VMware ESXi 8.0 U2
VMware ESXi 8.0 U3
For a complete list of supported, certified and tested operating systems, plus additional details and links to
relevant web sites, see the Operating System Interoperability Guide:
https://lenovopress.lenovo.com/osig#servers=sr860-v3-7d94-7d93-7d95
For configure-to-order configurations, the SR860 V3 can be preloaded with VMware ESXi. Ordering
information is listed in the following table.

Table 53. VMware ESXi preload


Part number Feature code Description
CTO only BMEY VMware ESXi 7.0 U3 (Factory Installed)
CTO only BYC7 VMware ESXi 8.0 U2 (Factory Installed)
CTO only BZ97 VMware ESXi 8.0 U3 (Factory Installed)

Configuration rule:
An ESXi preload cannot be selected if the configuration includes an NVIDIA GPU (ESXi preload cannot
include the NVIDIA driver)
You can download supported VMware vSphere hypervisor images from the following web page and install them
using the instructions provided:
https://vmware.lenovo.com/content/custom_iso/

Physical and electrical specifications

The SR860 V3 has the following overall physical dimensions, excluding components that extend outside the
standard chassis, such as EIA flanges, front security bezel (if any), and power supply handles:
Width: 447 mm (17.6 inches)
Height: 175 mm (6.9 inches)
Depth: 906 mm (35.7 inches)
The following table lists the detailed dimensions. See the figure below for the definition of each dimension.

Table 54. Detailed dimensions


Dimension Description
482 mm Xa = Width, to the outsides of the front EIA flanges
435 mm Xb = Width, to the rack rail mating surfaces
447 mm Xc = Width, to the outer most chassis body feature
175 mm Ya = Height, from the bottom of chassis to the top of the chassis
825 mm Za = Depth, from the rack flange mating surface to the rearmost I/O port surface
869 mm Zb = Depth, from the rack flange mating surface to the rearmost feature of the chassis body
871 mm (≤1100W PSU), 899 mm (1800W PSU), 925 mm (2400W PSU) Zc = Depth, from the rack flange mating surface to the rearmost feature such as the power supply handle
37 mm Zd = Depth, from the forwardmost feature on the front of the EIA flange to the rack flange mating surface
47 mm Ze = Depth, from the front of the security bezel (if applicable) or forwardmost feature to the rack flange mating surface

Figure 16. Server dimensions
The shipping (cardboard packaging) dimensions of the SR860 V3 are as follows:
Width: 600 mm (23.6 inches)
Height: 587 mm (23.1 inches)
Depth: 1200 mm (47.2 inches)
The server has the following weight:
Base configuration:
Maximum weight: 59.4 kg (131 lb)
Electrical specifications for AC input power supplies:
Input voltage:
100 to 127 (nominal) Vac, 50 Hz or 60 Hz
200 to 240 (nominal) Vac, 50 Hz or 60 Hz
180 to 300 Vdc (China only)
Inlet current: See the following table.

Table 55. Maximum inlet current

Part number  Description  100V AC  200V AC  220V AC  240V DC
AC input power - 80 PLUS Titanium efficiency
4P57A72666  ThinkSystem 1100W 230V Titanium Hot-Swap Gen2 Power Supply  No support  5.9A  5.3A  5A
4P57A78359  ThinkSystem 1800W 230V Titanium Hot-Swap Gen2 Power Supply  No support  9.7A  8.7A  8.3A
4P57A72667  ThinkSystem 2600W 230V Titanium Hot-Swap Gen2 Power Supply v4  No support  13.2A  13A  11.9A
AC input power - 80 PLUS Platinum efficiency
4P57A72671  ThinkSystem 1100W 230V/115V Platinum Hot-Swap Gen2 Power Supply v3  12A  6A  5.4A  5.1A
4P57A26294  ThinkSystem 1800W 230V Platinum Hot-Swap Gen2 Power Supply v2  No support  10A  9.1A  9A
4P57A26295  ThinkSystem 2400W 230V Platinum Hot-Swap Gen2 Power Supply  No support  14A  12.6A  12A

Electrical specifications for DC input power supply:


Input voltage: -48 to -60 Vdc
Inlet current (1100W power supply): 26 A
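As a rough cross-check of the inlet current figures, current can be estimated as rated output power divided by the product of input voltage and efficiency. A minimal sketch; the 96% efficiency figure below is an assumption for an 80 PLUS Titanium supply, not a value from this guide:

```python
def estimated_inlet_current_a(rated_output_w: float, input_voltage_v: float,
                              efficiency: float) -> float:
    """Estimate worst-case inlet current: I = P_out / (V_in * efficiency)."""
    return rated_output_w / (input_voltage_v * efficiency)

# 1100W Titanium PSU at 200V AC, assuming roughly 96% efficiency:
print(round(estimated_inlet_current_a(1100, 200, 0.96), 1))  # 5.7
```

The declared table value (5.9A) is slightly higher than this estimate, as declared ratings include design margin.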

Operating environment
The SR860 V3 server complies with ASHRAE Class A2 specifications in most configurations and, depending on the
hardware configuration, also complies with ASHRAE Class A3, Class A4, and Class H1 specifications. System
performance may be impacted when the operating temperature is outside the ASHRAE H1 specification.
Topics in this section:
Temperature and humidity
Ambient temperature requirements
Acoustical noise emissions
Shock and vibration
Particulate contamination

Temperature and humidity


The server is supported in the following environment:
Air temperature:
Operating
ASHRAE Class A2: 10°C to 35°C (50°F to 95°F); the maximum ambient temperature
decreases by 1°C for every 300 m (984 ft) increase in altitude above 900 m (2,953 ft).
ASHRAE Class A3: 5°C to 40°C (41°F to 104°F); the maximum ambient temperature
decreases by 1°C for every 175 m (574 ft) increase in altitude above 900 m (2,953 ft).
ASHRAE Class A4: 5°C to 45°C (41°F to 113°F); the maximum ambient temperature
decreases by 1°C for every 125 m (410 ft) increase in altitude above 900 m (2,953 ft).
ASHRAE Class H1: 5°C to 25°C (41°F to 77°F); the maximum ambient temperature
decreases by 1°C for every 500 m (1,640 ft) increase in altitude above 900 m (2,953 ft).

Server off: 5°C to 45°C (41°F to 113°F)
Shipment/storage: -40°C to 60°C (-40°F to 140°F)
Maximum altitude: 3,050 m (10,000 ft)
Relative Humidity (non-condensing):
Operating
ASHRAE Class A2: 8% to 80%; maximum dew point: 21°C (70°F)
ASHRAE Class A3: 8% to 85%; maximum dew point: 24°C (75°F)
ASHRAE Class A4: 8% to 90%; maximum dew point: 24°C (75°F)
ASHRAE Class H1: 8% to 80%; Maximum dew point: 17°C (63°F)
Shipment/storage: 8% to 90%
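The altitude derating rules above reduce the maximum ambient temperature linearly above 900 m. A minimal sketch, with the base limits and slopes taken from the list above (the dictionary layout and function name are illustrative):

```python
# Per-class (base max ambient in degrees C, metres of altitude per 1 degree C
# of derating); derating begins above 900 m, per the ASHRAE classes above.
ASHRAE_DERATING = {
    "A2": (35.0, 300.0),
    "A3": (40.0, 175.0),
    "A4": (45.0, 125.0),
    "H1": (25.0, 500.0),
}

def max_ambient_c(ashrae_class: str, altitude_m: float) -> float:
    """Maximum supported ambient temperature (degrees C) at a given altitude."""
    base_c, metres_per_degree = ASHRAE_DERATING[ashrae_class]
    excess_m = max(0.0, altitude_m - 900.0)
    return base_c - excess_m / metres_per_degree

print(max_ambient_c("A2", 900))   # 35.0
print(max_ambient_c("A2", 1800))  # 32.0
print(max_ambient_c("H1", 1400))  # 24.0
```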

Ambient temperature requirements


Adjust ambient temperature when specific components are installed:
The ambient temperature must be limited to 45°C or lower if the server has 48x drives and any of the
following components:
CPUs with TDP ≤ 270W (except 6434H)
Memory module with 64 GB or lower capacity
The ambient temperature must be limited to 35°C or lower if the server has 48x drives and any of the
following components:
CPUs with TDP ≤ 350W with standard heat sink
Memory module with 256 GB or lower capacity
ConnectX-6 Dx 100GbE QSFP56 2-port with Active Optic Cable
ConnectX-6 HDR 200GbE QSFP56 2-port with Active Optic Cable
ConnectX-7 NDR200 QSFP 2-port without Active Optic Cable
ConnectX-7 NDR400 OSFP 1-port without Active Optic Cable
ConnectX-7 NDR200 QSFP 2-port with Active Optic Cable, when CPUs with TDP ≤ 270W are installed
ConnectX-7 NDR400 OSFP 1-port with Active Optic Cable, when CPUs with TDP ≤ 270W are installed
The ambient temperature must be limited to 30°C or lower if the server has 48x drives and any of the
following components:
CPUs with TDP ≤ 350W with performance heat sink
GPU adapters
ConnectX-7 NDR200 QSFP 2-port with Active Optic Cable
ConnectX-7 NDR400 OSFP 1-port with Active Optic Cable

Acoustical noise emissions


The server has the following acoustic noise emissions declaration:
Sound power level (LWAd):
Idling:
Typical: 7.1 Bel
Storage rich: 7.1 Bel
GPU: 8.0 Bel
Operating:
Typical: 8.0 Bel
Storage rich: 8.0 Bel
GPU: 9.2 Bel
Sound pressure level (LpAm):
Idling:
Typical: 52.5 dBA
Storage rich: 52.5 dBA
GPU: 63.6 dBA
Operating:
Typical: 63.6 dBA
Storage rich: 63.6 dBA
GPU: 75.0 dBA
Notes:
These sound levels were measured in controlled acoustical environments according to procedures
specified by ISO 7779 and are reported in accordance with ISO 9296.
The declared acoustic sound levels are based on the specified configurations, which may change
depending on configuration/conditions.
Typical configuration: four 250W CPUs, thirty-two 64GB RDIMMs, eight SAS HDDs, RAID 940-8i,
Intel X710-T2L 10GBASE-T 2-port OCP, two 1100W PSUs.
GPU configuration: four 205W CPUs, four H100 GPUs, thirty-two 64GB RDIMMs, twenty-four
SAS HDDs, RAID 940-16i, Intel X710-T2L 10GBASE-T 2-port OCP, two 1800W PSUs.
Storage rich configuration: four 205W CPUs, thirty-two 64GB RDIMMs, twenty-four SAS HDDs,
RAID 940-8i, Intel X710-T2L 10GBASE-T 2-port OCP, two 2600W PSUs.
Government regulations (such as those prescribed by OSHA or European Community Directives) may
govern noise level exposure in the workplace and may apply to you and your server installation. The
actual sound pressure levels in your installation depend upon a variety of factors, including the number
of racks in the installation; the size, materials, and configuration of the room; the noise levels from
other equipment; the room ambient temperature; and employees' locations in relation to the equipment.
Further, compliance with such government regulations depends on a variety of additional factors,
including the duration of employees' exposure and whether employees wear hearing protection.
Lenovo recommends that you consult with qualified experts in this field to determine whether you are in
compliance with the applicable regulations.
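Note that the sound power figures above are quoted in bels (LWAd), while sound pressure is quoted in dBA. A bel is ten decibels, so the 8.0 Bel typical operating figure, for example, corresponds to 80 dB of declared sound power:

```python
def bels_to_decibels(bels: float) -> float:
    """Convert a sound power level in bels to decibels (1 Bel = 10 dB)."""
    return bels * 10.0

print(bels_to_decibels(8.0))  # 80.0
```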

Shock and vibration


The server has the following vibration and shock limits:
Vibration:
Operating: 0.21 G rms at 5 Hz to 500 Hz for 15 minutes across 3 axes
Non-operating: 1.04 G rms at 2 Hz to 200 Hz for 15 minutes across 6 surfaces
Shock:
Operating: 15 G for 3 milliseconds in each direction (positive and negative X, Y, and Z axes)
Non-operating:
23 kg - 31 kg: 35 G for 152 in./sec velocity change across 6 surfaces
32 kg - 68 kg: 35 G for 136 in./sec velocity change across 6 surfaces

Particulate contamination
Airborne particulates (including metal flakes or particles) and reactive gases, acting alone or in combination
with other environmental factors such as humidity or temperature, might damage the system and cause it to
malfunction or stop working altogether.
The following specifications indicate the limits of particulates that the system can tolerate:
Reactive gases:
The copper reactivity level shall be less than 200 Angstroms per month (Å/month)
The silver reactivity level shall be less than 200 Å/month
Airborne particulates:
The room air should be continuously filtered with MERV 8 filters.
Air entering a data center should be filtered with MERV 11 or preferably MERV 13 filters.
The deliquescent relative humidity of the particulate contamination should be more than 60% RH
Environment must be free of zinc whiskers

For additional information, see the Specifications section of the documentation for the server, available from
the Lenovo Documents site, https://pubs.lenovo.com/

Warranty upgrades and post-warranty support
The SR860 V3 has a 1-year or 3-year warranty based on the machine type of the system:
7D94 - 1-year warranty
7D93 - 3-year warranty
7D95 - SAP HANA configurations with 3-year warranty
Our global network of regional support centers offers consistent, local-language support, enabling you to choose
response times and levels of service that match the criticality of your support needs:
Standard Next Business Day – Best choice for non-essential systems requiring simple maintenance.
Premier Next Business Day – Best choice for essential systems requiring technical expertise from
senior-level Lenovo engineers.
Premier 24x7 4-Hour Response – Best choice for systems where maximum uptime is critical.
Premier Enhanced Storage Support 24x7 4-Hour Response – Best choice for storage systems
where maximum uptime is critical.
For more information, consult the brochure Lenovo Operational Support Services for Data Centers.

Services
Lenovo Data Center Services empower you at every stage of your IT lifecycle. From expert advisory and
strategic planning to seamless deployment and ongoing support, we ensure your infrastructure is built for
success. Our comprehensive services accelerate time to value, minimize downtime, and free your IT staff to
focus on driving innovation and business growth.

Note: Some service options may not be available in all markets or regions. For more information, go to
https://lenovolocator.com/. For information about Lenovo service upgrade offerings that are available in
your region, contact your local Lenovo sales representative or business partner.

In this section:
Lenovo Advisory Services
Lenovo Plan & Design Services
Lenovo Deployment, Migration, and Configuration Services
Lenovo Support Services
Lenovo Managed Services
Lenovo Sustainability Services

Lenovo Advisory Services
Lenovo Advisory Services simplify the planning process, enabling customers to build future-proofed strategies
in as little as six weeks. Consultants provide guidance on projects including VM migration, storage, backup
and recovery, and cost management to accelerate time to value, improve cost efficiency, and build a flexibly
scalable foundation.
Assessment Services
An Assessment helps solve your IT challenges through an onsite, multi-day session with a Lenovo
technology expert. We perform a tools-based assessment which provides a comprehensive and
thorough review of a company's environment and technology systems. In addition to the technology
based functional requirements, the consultant also discusses and records the non-functional business
requirements, challenges, and constraints. Assessments help organizations like yours, no matter how
large or small, get a better return on your IT investment and overcome challenges in the ever-changing
technology landscape.
Design Services
Professional Services consultants perform infrastructure design and implementation planning to support
your strategy. The high-level architectures provided by the assessment service are turned into low level
designs and wiring diagrams, which are reviewed and approved prior to implementation. The
implementation plan will demonstrate an outcome-based proposal to provide business capabilities
through infrastructure with a risk-mitigated project plan.

Lenovo Plan & Design Services


Unlock faster time to market with our tailored, strategic design workshops to align solution approaches with
your business goals and technical requirements. Leverage our deep solution expertise and end-to-end
delivery partnership to meet your goals efficiently and effectively.

Lenovo Deployment, Migration, and Configuration Services


Optimize your IT operations by shifting labor-intensive functions to Lenovo's skilled technicians for seamless
on-site or remote deployment, configuration, and migration. Enjoy peace of mind, faster time to value, and
comprehensive knowledge sharing with your IT staff, backed by our best-practice methodology.
Deployment Services for Storage and ThinkAgile
A comprehensive range of remote and onsite options tailored specifically for your business needs to
ensure your storage and ThinkAgile hardware are fully operational from the start.
Hardware Installation Services
A full-range, comprehensive setup for your hardware, including unpacking, inspecting, and positioning
components to ensure your equipment is operational and error-free for the most seamless and efficient
installation experience, so you can quickly benefit from your investments.
DM/DG File Migration Services
Take the burden of file migration off your IT team's shoulders. Our experts will align your requirements and
business objectives with the migration plans while coordinating with your team to plan and safely execute
the data migration to your storage platforms.
DM/DG/DE Health Check Services
Our experts perform proactive checks of your firmware and system health to ensure your machines are
operating at peak efficiency, maximizing uptime, avoiding system failures, ensuring the security of your
IT solutions, and simplifying maintenance.
Factory Integrated Services
A suite of value-added offerings provided during the manufacturing phase of a server or storage
system that reduces time to value. These services aim to improve your hardware deployment
experience and enhance the quality of a standard configuration before it arrives at your facility.

Lenovo Support Services
In addition to response time options for hardware parts, repairs, and labor, Lenovo offers a wide array of
additional support services to ensure your business is positioned for success and longevity. Our goal is to
reduce your capital outlays, mitigate your IT risks, and accelerate your time to productivity.
Premier Support for Data Centers
Your direct line to the solution that promises the best, most comprehensive level of support to help you
fully unlock the potential of your data center.
Premier Enhanced Storage Support (PESS)
Gain all the benefits of Premier Support for Data Centers, adding dedicated storage specialists and
resources to elevate your storage support experience to the next level.
Committed Service Repair (CSR)
Our commitment to ensuring the fastest, most seamless resolution times for mission-critical systems
that require immediate attention to ensure minimal downtime and risk for your business. This service is
only available for machines under the Premier 4-Hour Response SLA.
Multivendor Support Services (MVS)
Your single point of accountability for resolution support across a vast range of leading server, storage,
and networking OEMs, allowing you to manage all your supported infrastructure devices seamlessly
from a single source.
Keep Your Drive (KYD)
Protect sensitive data and maintain compliance with corporate retention and disposal policies to ensure
your data is always under your control, regardless of the number of drives that are installed in your
Lenovo server.
Technical Account Manager (TAM)
Your single point of contact to expedite service requests, provide status updates, and furnish reports to
track incidents over time, ensuring smooth operations and optimized performance as your business
grows.
Enterprise Software Support (ESS)
Gain comprehensive, single-source, and global support for a wide range of server operating systems
and Microsoft server applications.
For more information, consult the brochure Lenovo Operational Support Services for Data Centers.

Lenovo Managed Services


Achieve peak efficiency, high security, and minimal disruption with Lenovo's always-on Managed Services.
Our real-time monitoring, 24x7 incident response, and problem resolution ensure your infrastructure operates
seamlessly. With quarterly health checks for ongoing optimization and innovation, Lenovo's remote active
monitoring boosts end-user experience and productivity by keeping your data center's hardware performing at
its best.
Lenovo Managed Services provides continuous 24x7 remote monitoring (plus 24x7 call center availability) and
proactive management of your data center using state-of-the-art tools, systems, and practices by a team of
highly skilled and experienced Lenovo services professionals.
Quarterly reviews check error logs, verify firmware and OS device driver levels, and update software as needed.
We'll also maintain records of the latest patches, critical updates, and firmware levels to ensure your systems
are providing business value through optimized performance.

Lenovo Sustainability Services

Asset Recovery Services
Lenovo Asset Recovery Services (ARS) provides a secure, seamless solution for managing end-of-life
IT assets, ensuring data is safely sanitized while contributing to a more circular IT lifecycle. By
maximizing the reuse or responsible recycling of devices, ARS helps businesses meet sustainability
goals while recovering potential value from their retired equipment. For more information, see the Asset
Recovery Services offering page.
CO2 Offset Services
Lenovo’s CO2 Offset Services offer a simple and transparent way for businesses to take tangible action
on their IT footprint. By integrating CO2 offsets directly into device purchases, customers can easily
support verified climate projects and track their contributions, making meaningful progress toward their
sustainability goals without added complexity.
Lenovo Certified Refurbished
Lenovo Certified Refurbished offers a cost-effective way to support IT circularity without compromising
on quality and performance. Each device undergoes rigorous testing and certification, ensuring reliable
performance and extending its lifecycle. With Lenovo’s trusted certification, you gain peace of mind
while making a more sustainable IT choice.

Lenovo TruScale
Lenovo TruScale XaaS is your set of flexible IT services that makes everything easier. Streamline IT
procurement, simplify infrastructure and device management, and pay only for what you use – so your
business is free to grow and go anywhere.
Lenovo TruScale is the unified solution that gives you simplified access to:
The industry’s broadest portfolio – from pocket to cloud – all delivered as a service
A single-contract framework for full visibility and accountability
The global scale to rapidly and securely build teams from anywhere
Flexible fixed and metered pay-as-you-go models with minimal upfront cost
The growth-driving combination of hardware, software, infrastructure, and solutions – all from one
single provider with one point of accountability.
For information about Lenovo TruScale offerings that are available in your region, contact your local Lenovo
sales representative or business partner.

Regulatory compliance
The SR860 V3 conforms to the following standards:
ANSI/UL 62368-1
IEC 62368-1 (CB Certificate and CB Test Report)
CSA C22.2 No. 62368-1
Argentina IEC 60950-1
Mexico NOM-019
India BIS 13252 (Part 1)
Germany GS
TUV-GS (EN62368-1, and EK1-ITB2000)
Brazil INMETRO
South Africa NRCS LOA
Ukraine UkrCEPRO
Morocco CMIM Certification (CM)
Russia, Belorussia and Kazakhstan, TP EAC 037/2016 (for RoHS)
Russia, Belorussia and Kazakhstan, EAC: TP TC 004/2011 (for Safety); TP TC 020/2011 (for EMC)
CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55024, EN55035, EN61000-3-2, EN61000-3-3,
(EU) 2019/424, and EN IEC 63000 (RoHS))
FCC - Verified to comply with Part 15 of the FCC Rules, Class A
Canada ICES-003, issue 7, Class A
CISPR 32, Class A, CISPR 35
Korea KN32, Class A, KN35
Japan VCCI, Class A
Taiwan BSMI CNS15936, Class A; CNS15598-1; Section 5 of CNS15663
Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
UL Green Guard, UL2819
Energy Star 4.0
EPEAT (NSF/ ANSI 426) Bronze
Japanese Energy-Saving Act
EU2019/424 Energy Related Product (ErP Lot9)
China CELP certificate, HJ 2507-2011

External drive enclosures
The server supports attachment to external drive enclosures using a RAID controller with external ports or a
SAS host bus adapter. Adapters supported by the server are listed in the SAS adapters for external storage
section.
Note: Information provided in this section is for ordering reference purposes only. For the operating system
and adapter support details, refer to the interoperability matrix for a particular storage enclosure that can be
found on the Lenovo Data Center Support web site:
http://datacentersupport.lenovo.com

Table 56. External drive enclosures


Model Description
4587HC1 Lenovo Storage D1212 Disk Expansion Enclosure (2U enclosure with 12x LFF drive bays)
4587HC2 Lenovo Storage D1224 Disk Expansion Enclosure (2U enclosure with 24x SFF drive bays)
6413HC1 Lenovo Storage D3284 High Density Expansion Enclosure (5U enclosure with 84x LFF drive bays)
7DAHCTO1WW Lenovo ThinkSystem D4390 Direct Attached Storage (4U enclosure with 90x LFF drive bays)

For details about supported drives, adapters, and cables, see the following Lenovo Press Product Guides:
Lenovo Storage D1212 and D1224
http://lenovopress.lenovo.com/lp0512
Lenovo Storage D3284
http://lenovopress.lenovo.com/lp0513
Lenovo ThinkSystem D4390
https://lenovopress.lenovo.com/lp1681

External storage systems


Lenovo offers the ThinkSystem DE Series and ThinkSystem DM Series external storage systems for high-
performance storage. See the DE Series and DM Series product guides for specific controller models,
expansion enclosures and configuration options:
ThinkSystem DE Series Storage
https://lenovopress.com/storage/thinksystem/de-series#rt=product-guide
ThinkSystem DM Series Storage
https://lenovopress.com/storage/thinksystem/dm-series#rt=product-guide
ThinkSystem DG Series Storage
https://lenovopress.com/storage/thinksystem/dg-series#rt=product-guide

External backup units
The following table lists the external backup options that are offered by Lenovo.

Table 57. External backup options


Part number Description
External RDX USB drives
4T27A10725 ThinkSystem RDX External USB 3.0 Dock
External SAS tape backup drives
6160S7E IBM TS2270 Tape Drive Model H7S
6160S8E IBM TS2280 Tape Drive Model H8S
6160S9E IBM TS2290 Tape Drive Model H9S
External SAS tape backup autoloaders
6171S7R IBM TS2900 Tape Autoloader w/LTO7 HH SAS
6171S8R IBM TS2900 Tape Autoloader w/LTO8 HH SAS
6171S9R IBM TS2900 Tape Autoloader w/LTO9 HH SAS
External tape backup libraries
6741A1F IBM TS4300 3U Tape Library-Base Unit
6741A3F IBM TS4300 3U Tape Library-Expansion Unit
Full High 8 Gb Fibre Channel for TS4300
01KP938 LTO 7 FH Fibre Channel Drive
01KP954 LTO 8 FH Fibre Channel Drive
02JH837 LTO 9 FH Fibre Channel Drive
Half High 8 Gb Fibre Channel for TS4300
01KP936 LTO 7 HH Fibre Channel Drive
01KP952 LTO 8 HH Fibre Channel Drive
02JH835 LTO 9 HH Fibre Channel Drive
Half High 6 Gb SAS for TS4300
01KP937 LTO 7 HH SAS Drive
01KP953 LTO 8 HH SAS Drive
02JH836 LTO 9 HH SAS Drive

For more information, see the list of Product Guides in the Backup units category:
https://lenovopress.com/servers/options/backup

Fibre Channel SAN switches


Lenovo offers the ThinkSystem DB Series of Fibre Channel SAN switches for high-performance storage
expansion. See the DB Series product guides for models and configuration options:
ThinkSystem DB Series SAN Switches:
https://lenovopress.com/storage/switches/rack#rt=product-guide

Uninterruptible power supply units
The following table lists the uninterruptible power supply (UPS) units that are offered by Lenovo.

Table 58. Uninterruptible power supply units


Part number Description
Rack-mounted or tower UPS units - 100-125VAC
7DD5A001WW RT1.5kVA 2U Rack or Tower UPS-G2 (100-125VAC)
7DD5A003WW RT3kVA 2U Rack or Tower UPS-G2 (100-125VAC)
Rack-mounted or tower UPS units - 200-240VAC
7DD5A002WW RT1.5kVA 2U Rack or Tower UPS-G2 (200-240VAC)
7DD5A005WW RT3kVA 2U Rack or Tower UPS-G2 (200-240VAC)
7DD5A007WW RT5kVA 3U Rack or Tower UPS-G2 (200-240VAC)
7DD5A008WW RT6kVA 3U Rack or Tower UPS-G2 (200-240VAC)
7DD5A00AWW RT11kVA 6U Rack or Tower UPS-G2 (200-240VAC)
† Only available in China and the Asia Pacific market.
For more information, see the list of Product Guides in the UPS category:
https://lenovopress.com/servers/options/ups

Power distribution units
The following table lists the power distribution units (PDUs) that are offered by Lenovo.

Table 59. Power distribution units

Each row lists the part number, feature code, and description, followed by availability flags
(Y = available, N = not available) for the ANZ, ASEAN, Brazil, EET, HTK, INDIA, JAPAN, LA, MEA,
NA, PRC, RUCIS, and WE markets.
0U Basic PDUs
4PU7A93176  C0QH  0U 36 C13 and 6 C19 Basic 32A 1 Phase PDU v2 | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93169  C0DA  0U 36 C13 and 6 C19 Basic 32A 1 Phase PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93177  C0QJ  0U 24 C13/C15 and 24 C13/C15/C19 Basic 32A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93170  C0D9  0U 24 C13/C15 and 24 C13/C15/C19 Basic 32A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
0U Switched and Monitored PDUs
4PU7A93181  C0QN  0U 21 C13/C15 and 21 C13/C15/C19 Switched and Monitored 48A 3 Phase Delta PDU V2 (60A derated) | N Y N N N N N Y N Y N Y N
4PU7A93174  C0D5  0U 21 C13/C15 and 21 C13/C15/C19 Switched and Monitored 48A 3 Phase Delta PDU (60A derated) | N Y N N N N N Y N N N Y N
4PU7A93178  C0QK  0U 20 C13 and 4 C19 Switched and Monitored 32A 1 Phase PDU v2 | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93171  C0D8  0U 20 C13 and 4 C19 Switched and Monitored 32A 1 Phase PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93182  C0QP  0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 63A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93175  C0CS  0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 63A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93180  C0QM  0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 32A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93173  C0D6  0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 32A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93179  C0QL  0U 16 C13/C15 and 16 C13/C15/C19 Switched and Monitored 24A 1 Phase PDU v2 (30A derated) | N Y N N N N N Y N Y N Y N
4PU7A93172  C0D7  0U 16 C13/C15 and 16 C13/C15/C19 Switched and Monitored 24A 1 Phase PDU (30A derated) | N Y N N N N N Y N N N Y N
1U Switched and Monitored PDUs
4PU7A90808  C0D4  1U 18 C19/C13 Switched and monitored 48A 3P WYE PDU V2 ETL | N N N N N N N Y N Y Y Y N
4PU7A81117  BNDV  1U 18 C19/C13 switched and monitored 48A 3P WYE PDU - ETL | N N N N N N N N N N N Y N
4PU7A90809  C0DE  1U 18 C19/C13 Switched and monitored 48A 3P WYE PDU V2 CE | Y Y Y Y Y Y Y Y Y Y Y N Y
4PU7A81118  BNDW  1U 18 C19/C13 switched and monitored 48A 3P WYE PDU - CE | Y Y Y Y Y Y Y Y Y Y Y N Y

4PU7A90810 C0DD 1U 18 C19/C13 Switched and monitored 80A N N N N N N N Y N Y Y Y N
3P Delta PDU V2
4PU7A77467 BLC4 1U 18 C19/C13 Switched and Monitored 80A N N N N N N N N N Y N Y N
3P Delta PDU
4PU7A90811 C0DC 1U 12 C19/C13 Switched and monitored 32A Y Y Y Y Y Y Y Y Y Y Y Y Y
3P WYE PDU V2
4PU7A77468 BLC5 1U 12 C19/C13 switched and monitored 32A 3P Y Y Y Y Y Y Y Y Y Y Y Y Y
WYE PDU
4PU7A90812 C0DB 1U 12 C19/C13 Switched and monitored 60A N N N N N N N Y N Y Y Y N
3P Delta PDU V2
4PU7A77469 BLC6 1U 12 C19/C13 switched and monitored 60A 3P N N N N N N N N N N N Y N
Delta PDU
1U Ultra Density Enterprise PDUs (9x IEC 320 C13 + 3x IEC 320 C19 outlets)
71763NU 6051 Ultra Density Enterprise C19/C13 PDU N N Y N N N N N N Y Y Y N
60A/208V/3PH
71762NX 6091 Ultra Density Enterprise C19/C13 PDU Module Y Y Y Y Y Y Y Y Y Y Y Y Y
1U C13 Enterprise PDUs (12x IEC 320 C13 outlets)
39Y8941 6010 Enterprise C13 PDU Y Y Y Y Y Y Y Y Y Y Y Y Y
1U Front-end PDUs (3x IEC 320 C19 outlets)
39Y8938 6002 DPI 30amp/125V Front-end PDU with NEMA Y Y Y Y Y Y Y Y Y Y Y Y Y
L5-30P
39Y8939 6003 DPI Single-phase 30A/208V Front-end PDU Y Y Y Y Y Y Y Y Y Y Y Y Y
(US)
39Y8934 6005 DPI 32amp/250V Front-end PDU with IEC 309 Y Y Y Y Y Y Y Y Y Y Y Y Y
2P+Gnd
39Y8940 6004 DPI 60amp/250V Front-end PDU with IEC 309 Y N Y Y Y Y Y N N Y Y Y N
2P+Gnd connector
39Y8935 6006 DPI 63amp/250V Front-end PDU with IEC 309 Y Y Y Y Y Y Y Y Y Y Y Y Y
2P+Gnd connector
1U NEMA PDUs (6x NEMA 5-15R outlets)
39Y8905 5900 DPI 100-127v PDU with Fixed Nema L5-15P Y Y Y Y Y Y Y Y Y Y Y Y Y
line cord
Line cords for 1U PDUs that ship without a line cord
40K9611 6504 DPI 32a Cord (IEC 309 3P+N+G) Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9612 6502 DPI 32a Cord (IEC 309 P+N+G) Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9613 6503 4.3m, 63A/230V, EPDU to IEC 309 P+N+G Y Y Y Y Y Y Y Y Y Y Y Y Y
(non-US) Line Cord
40K9614 6500 DPI 30a Cord (NEMA L6-30P) Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9615 6501 DPI 60a Cord (IEC 309 2P+G) N N Y N N N Y N N Y Y Y N
40K9617 6505 4.3m, 32A/230V, Souriau UTG to AS/NZS 3112 (Aus/NZ) Line Cord Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9618 6506 4.3m, 32A/250V, Souriau UTG Female to KSC 8305 (S. Korea) Line Cord Y Y Y Y Y Y Y Y Y Y Y Y Y

Lenovo ThinkSystem SR860 V3 Server 82


For more information, see the Lenovo Press documents in the PDU category:
https://lenovopress.com/servers/options/pdu

Rack cabinets
The following table lists the supported rack cabinets.

Table 60. Rack cabinets


Model Description
7D2NCTO1WW 12U 1200mm Deep Micro Datacenter Rack
93072RX 25U Standard Rack (1000mm)
93072PX 25U Static S2 Standard Rack (1000mm)
7D6DA007WW ThinkSystem 42U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6DA008WW ThinkSystem 42U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA009WW ThinkSystem 48U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA00AWW ThinkSystem 48U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
1410O42 Lenovo EveryScale 42U Onyx Heavy Duty Rack Cabinet
1410P42 Lenovo EveryScale 42U Pearl Heavy Duty Rack Cabinet
1410O48 Lenovo EveryScale 48U Onyx Heavy Duty Rack Cabinet
1410P48 Lenovo EveryScale 48U Pearl Heavy Duty Rack Cabinet
93604PX 42U 1200mm Deep Dynamic Rack
93614PX 42U 1200mm Deep Static Rack
93634PX 42U 1100mm Dynamic Rack
93634EX 42U 1100mm Dynamic Expansion Rack
93074RX 42U Standard Rack (1000mm)

For specifications about these racks, see the Lenovo Rack Cabinet Reference, available from:
https://lenovopress.com/lp1287-lenovo-rack-cabinet-reference
For more information, see the list of Product Guides in the Rack cabinets category:
https://lenovopress.com/servers/options/racks

Installation restriction - 1100mm racks and the use of the CMA: The SR860 V3 with the cable
management arm (CMA) attached is supported in 1100mm rack cabinets; however, there is insufficient
clearance to route any cables between the CMA and the rear door. As a result, if you require cable access
through the lower cable access panel of the rack and you have an SR860 V3 installed at the bottom
position of the rack, a CMA cannot be used with that server. Similarly, if you require cable access
through the upper cable access panel of the rack and you have an SR860 V3 installed at the top
position of the rack, a CMA cannot be used with that server. This limitation does not exist with
1200mm-deep rack cabinets.



KVM console options
The following table lists the supported KVM consoles.

Table 61. KVM console


Part number Description
4XF7A84188 ThinkSystem 18.5" LCD Console (with US English keyboard)

The following table lists the available KVM switches and the options that are supported with them.

Table 62. KVM switches and options


Part number Description
KVM Console switches
1754D1X Global 2x2x16 Console Manager (GCM16)
1754A2X Local 2x16 Console Manager (LCM16)
1754A1X Local 1x8 Console Manager (LCM8)
Cables for GCM and LCM Console switches
46M5383 Virtual Media Conversion Option Gen2 (VCO2)
46M5382 Serial Conversion Option (SCO)

For more information, see the list of Product Guides in the KVM Switches and Consoles category:
http://lenovopress.com/servers/options/kvm

Lenovo Financial Services
Why wait to obtain the technology you need now? No payments for 90 days and predictable, low monthly
payments make it easy to budget for your Lenovo solution.
Flexible
Our in-depth knowledge of the products, services and various market segments allows us to offer
greater flexibility in structures, documentation and end of lease options.
100% Solution Financing
Financing your entire solution, including hardware, software, and services, ensures more predictability
in your project planning with fixed, manageable monthly payments.
Device as a Service (DaaS)
Leverage latest technology to advance your business. Customized solutions aligned to your needs.
Flexibility to add equipment to support growth. Protect your technology with Lenovo's Premier Support
service.
24/7 Asset management
Manage your financed solutions with electronic access to your lease documents, payment histories,
invoices and asset information.
Fair Market Value (FMV) and $1 Purchase Option Leases
Maximize your purchasing power with our lowest cost option. An FMV lease offers lower monthly
payments than loans or lease-to-own financing. Think of an FMV lease as a rental. You have the
flexibility at the end of the lease term to return the equipment, continue leasing it, or purchase it for the
fair market value. In a $1 Out Purchase Option lease, you own the equipment. It is a good option when
you are confident you will use the equipment for an extended period beyond the finance term. Both
lease types have merits depending on your needs. We can help you determine which option will best
meet your technological and budgetary goals.
Ask your Lenovo Financial Services representative about this promotion and how to submit a credit
application. For the majority of credit applicants, we have enough information to deliver an instant decision
and send a notification within minutes.

Seller training courses


The following sales training courses are offered for employees and partners (login required). Courses are
listed in date order.



1. ThinkSystem Rack and Tower Introduction for ISO Client Managers
2024-12-10 | 20 minutes | Employees Only

In this course, you will learn about Lenovo’s Data Center Portfolio, its ThinkSystem Family and the key
features of the Rack and Tower servers. It will equip you with foundational knowledge which you can
then expand upon by participating in the facilitated session of the curriculum.

Course Objectives:
By the end of this course, you should be able to:
• Identify Lenovo’s main data center brands.
• Describe the key components of the ThinkSystem Family servers.
• Differentiate between the Rack and Tower servers of the ThinkSystem Family.
• Understand the value Rack and Tower servers can provide to customers.
Published: 2024-12-10
Length: 20 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DSRTO101r2


2. Lenovo Data Center Technical Sales Certification Exam Study Guide
2024-11-27 | 10 minutes | Employees and Partners

This guide includes information to help candidates prepare and register for the Data Center Technical
Sales practice and certification exams.
Published: 2024-11-27
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: LENU-322C-SG


3. Lenovo Data Center Sales Certification Exam Study Guide
2024-11-27 | 10 minutes | Employees and Partners

This guide includes information to help candidates prepare and register for the Data Center Sales
practice and certification exams.
Published: 2024-11-27
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: LENU-122C-SG



4. Partner Technical Webinar - Server Update with Mark Bica
2024-11-26 | 60 minutes | Employees and Partners

In this 60-minute replay, Mark Bica, Lenovo Product Manager, gave an update on the server portfolio.
Mark presented on the new V4 Intel servers with Xeon 6 CPUs. He reviewed where the new AMD 5th
Gen EPYC CPUs will be used in our servers. He followed with a review of the GPU dense servers
including SR680, SR680a, SR575 and SR780a. Mark concluded with a review of the SC777 and
SC750 that were introduced at TechWorld.
Published: 2024-11-26
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 112224


5. Partner Technical Webinar - LenovoPress updates and LPH Demo
2024-11-13 | 60 minutes | Employees and Partners

In this 60-minute replay, we covered three topics. First, David Watts, Lenovo Sr Manager LenovoPress, gave
an update on LenovoPress and improvements to finding Seller Training Courses (both partner and
Lenovo). Next, Ryan Tuttle, Lenovo LETS Solution Architect, gave a demo of Lenovo Partner Hub
(LPH) including how to find replays of Partner Webinars in LPL. Finally, Joe Murphy, Lenovo Sr
Manager of LETS NA, gave a quick update on the new Stackable Warranty Options in DCSC.
Published: 2024-11-13
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 110824


6. Virtual Facilitated Session - ThinkSystem Rack and Tower Primer for ISO Client Managers
2024-10-31 | 90 minutes | Employees Only

In this Virtual Instructor-Led Training Session, ISO Client Managers will be able to build on the
knowledge gained in Module 1 (eLearning) of the ThinkSystem Rack and Tower Server Primer for ISO
Client Managers curriculum.

IMPORTANT! Module 1 (eLearning) must be completed to be eligible to participate in this session. Please
note that places are subject to availability. If you are selected, you will receive the invite to this session
via email.
Published: 2024-10-31
Length: 90 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DSRTO102



7. Partner Technical Webinar - OneIQ
2024-07-15 | 60 minutes | Employees and Partners

In this 60-minute replay, Peter Grant, Field CTO for OneIQ, reviewed and demonstrated the capabilities of
OneIQ, including collecting and analyzing data. Additionally, Peter and the team discussed how specific
partners (those with NA Channel SA coverage) will get direct access to OneIQ, and how other partners can
get access to OneIQ via Distribution or the NA LETS team.
Published: 2024-07-15
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 071224


8. SAP Webinar for Lenovo Sellers: Lenovo Portfolio Update for SAP Landscapes
2024-06-04 | 60 minutes | Employees Only

Join Mark Kelly, Advisory IT Architect with the Lenovo Global SAP Center of Competence as he
discusses:
• Challenges in the SAP environment
• Lenovo On-premise Solutions for SAP
• Lenovo support resources for SAP solutions
Published: 2024-06-04
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DSAPF101


9. Lenovo Data Center Product Portfolio
2024-05-29 | 20 minutes | Employees and Partners

This course introduces the Lenovo data center portfolio, and covers servers, storage, storage
networking, and software-defined infrastructure products. After completing this course about Lenovo
data center products, you will be able to identify product types within each data center family, describe
Lenovo innovations that this product family or category uses, and recognize when a specific product
should be selected.
Published: 2024-05-29
Length: 20 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1110r7



10. VTT Cloud Architecture: NVIDIA Using Cloud for GPUs and AI
2024-05-22 | 60 minutes | Employees Only

Join JD Dupont, NVIDIA Head of Americas Sales, Lenovo partnership, and Veer Mehta, NVIDIA
Solution Architect, for an interactive discussion about cloud to edge, designing cloud solutions with
NVIDIA GPUs, and minimizing private/hybrid cloud OPEX with GPUs. Discover how you can use what
is done at big public cloud providers for your customers. We will also walk through use cases and see
a demo you can use to help your customers.
Published: 2024-05-22
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DVCLD212


11. Partner Technical Webinar - ISG Portfolio Update
2024-04-15 | 60 minutes | Employees and Partners

In this 60-minute replay, Mark Bica, NA ISG Server Product Manager reviewed the Lenovo ISG
portfolio. He covered new additions such as the SR680a / SR685a, dense servers, and options that are
strategic for any workload.
Published: 2024-04-15
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 041224


12. Partner Technical Webinar - StorMagic
2024-03-19 | 60 minutes | Employees and Partners

March 08, 2024 – In this 60-minute replay, Stuart Campbell and Wes Ganeko of StorMagic joined us
and provided an overview of StorMagic on Lenovo. They also demonstrated the interface while sharing
some interesting use cases.
Published: 2024-03-19
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 030824



13. Family Portfolio: Storage Controller Options
2024-01-23 | 25 minutes | Employees and Partners

This course covers the storage controller options available for use in Lenovo servers. The classes of
storage controller are discussed, along with a discussion of where they are used, and which to choose.

After completing this course, you will be able to:


• Describe the classes of storage controllers
• Discuss where each controller class is used
• Describe the available options in each controller class
Published: 2024-01-23
Length: 25 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1111


14. Lenovo-Intel Sustainable Solutions QH
2024-01-22 | 10 minutes | Employees and Partners

This Quick Hit explains how Lenovo and Intel are committed to sustainability, and introduces the
Lenovo-Intel joint sustainability campaign. You will learn how to use this campaign to show customers
what that level of commitment entails, how to use the campaign's unsolicited proposal approach, and
how to use the campaign as a conversation starter which may lead to increased sales.
Published: 2024-01-22
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW2524a


15. FY24Q3 Intel Servers Update
2023-12-11 | 15 minutes | Employees and Partners

This update is designed to help you discuss the features and customer benefits of Lenovo servers that
use the 5th Gen Intel® Xeon® processors. Lenovo has also introduced a new server, the ThinkSystem
SD650-N V3, which expands the supercomputer server family. Reasons to call your customer and talk
about refreshing their infrastructure are also included as a guideline.
Published: 2023-12-11
Length: 15 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW2522a



16. VTT: SAP HANA Transition and Refresh Opportunity - July 2023
2023-07-14 | 60 minutes | Employees Only

In this session, we cover:


- What Next for SAP Clients?
- Lenovo Opportunity
- Lenovo Portfolio for SAP Solutions
- RISE with SAP
Published: 2023-07-14
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DVDAT202


17. Family Portfolio: ThinkSystem Intel Mission Critical Servers
2023-01-09 | 10 minutes | Employees and Partners

This course is designed to give Lenovo sales and partner representatives the foundation of the Intel
Mission Critical server family of products. As an introduction to the products, this course also includes
Lenovo innovations and when to select a specific product.

When you finish this course, you should be able to identify products and features within the family,
describe Lenovo innovations that this product family uses, and recognize when a specific product or
products should be selected.
Published: 2023-01-09
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1209r6


18. Family Portfolio Intel Mission Critical Servers V3 Preview
2023-01-04 | 3 minutes | Employees and Partners

This Quick Hit introduces two new servers, the SR850 V3 and SR860 V3, in the ThinkSystem Intel
Mission Critical server family, and introduces new features.
Note: This course is presented as audio only. There are no slides or video.
Published: 2023-01-04
Length: 3 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1209r6a



19. Introduction to the Intel Xeon Scalable Gen4 Processors
2022-12-30 | 10 minutes | Employees and Partners

When you complete this course, you should be able to define the Gen4 Intel Xeon Scalable processors
and the four tiers used in the family. You should also be able to discuss the new features of the Gen4
processors and the family value proposition.
Published: 2022-12-30
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW2500


20. Technical Champions Webinar: ThinkSystem V3 Servers using Intel Eagle Stream
2022-12-08 | 86 minutes | Employees Only

This webinar discusses the key new features of the Intel Eagle Stream platform and its implementation
in the ThinkSystem Server portfolio. This webinar will cover what has changed from the previous
generation of servers, and timeframes for availability.
Published: 2022-12-08
Length: 86 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DTSP201


21. Lenovo Infrastructure Solutions Launch
2022-09-16 | 8 minutes | Employees and Partners

This Quick Hit introduces a wealth of new products, solutions, and services announced as part of the
Lenovo ThinkSystem 30th Anniversary celebration.
Published: 2022-09-16
Length: 8 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: FY23Q2a


22. Lenovo Sustainable Computing
2022-09-16 | 4 minutes | Employees and Partners

This Quick Hit describes the Lenovo sustainable computing program, and the many ways in which
Lenovo strives to respect and protect the environment.
Published: 2022-09-16
Length: 4 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW2504a



23. Introduction to DDR5 Memory
2022-08-23 | 10 minutes | Employees and Partners

This course introduces DDR5 memory, describes new features of this memory generation, and
discusses the advantages to customers of this new memory generation.
Published: 2022-08-23
Length: 10 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW2502

Related publications and links


For more information, see these resources:
Product web page for the ThinkSystem SR860 V3:
https://www.lenovo.com/us/en/p/mission-critical/len21ts0016
Datasheet for the SR860 V3
https://lenovopress.lenovo.com/DS0156
ThinkSystem SR860 V3 drivers and support
http://datacentersupport.lenovo.com/products/servers/thinksystem/sr860-v3/7d93/downloads
Lenovo ThinkSystem SR860 V3 product publications:
http://thinksystem.lenovofiles.com/help/index.jsp
Quick Start
Rack Installation Guide
Setup Guide
Hardware Maintenance Manual
Messages and Codes Reference
Memory Population Reference
ServerProven hardware compatibility:
https://serverproven.lenovo.com/

Related product families


Product families related to this document are the following:
4-Socket Rack Servers
Mission Critical Servers
ThinkSystem SR860 V3 Server



Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local
Lenovo representative for information on the products and services currently available in your area. Any reference to a
Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service
may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual
property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any
other product, program, or service. Lenovo may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any license to these patents. You can
send license inquiries, in writing, to:

Lenovo (United States), Inc.


8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without
notice.

The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not affect
or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or implied
license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained in this
document was obtained in specific environments and is presented as an illustration. The result obtained in other
operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for
this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was
determined in a controlled environment. Therefore, the result obtained in other operating environments may vary
significantly. Some measurements may have been made on development-level systems and there is no guarantee that
these measurements will be the same on generally available systems. Furthermore, some measurements may have
been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.

© Copyright Lenovo 2025. All rights reserved.

This document, LP1606, was created or updated on January 7, 2025.


Send us your comments in one of the following ways:
Use the online Contact us review form found at:
https://lenovopress.lenovo.com/LP1606
Send your comments in an e-mail to:
comments@lenovopress.com
This document is available online at https://lenovopress.lenovo.com/LP1606.



Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other
countries, or both. A current list of Lenovo trademarks is available on the Web at
https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
AnyBay®
ServerProven®
System x®
ThinkAgile®
ThinkServer®
ThinkShield®
ThinkSystem®
XClarity®
The following terms are trademarks of other companies:
AMD and AMD Instinct™ are trademarks of Advanced Micro Devices, Inc.
Intel®, Intel Optane®, and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Linux® is the trademark of Linus Torvalds in the U.S. and other countries.
Microsoft®, ActiveX®, Hyper-V®, PowerShell, Windows PowerShell®, Windows Server®, and Windows® are
trademarks of Microsoft Corporation in the United States, other countries, or both.
SPECpower® is a trademark of the Standard Performance Evaluation Corporation (SPEC).
Other company, product, or service names may be trademarks or service marks of others.



ThinkSystem DB720S
FC SAN Switch
Maximize performance, simplify tasks

Overview

With the growing adoption of flash and the ramp-up of NVMe flash-based storage, enterprises are moving
ever-increasing amounts of data through storage area networks (SAN), forcing an increase in I/O capacity
to keep up with escalating demand. Coupled with rising complexity and higher expectations for availability,
organizations need a SAN capable of maximizing performance while simplifying and automating
management.

To meet these requirements, the storage network needs to evolve. A Lenovo Gen 7 Fibre Channel
infrastructure, based on Brocade® technology, enables the full performance of NVMe workloads with
reduced latency and increased bandwidth. Also, Lenovo’s Gen 7 network lays the foundation for an
autonomous SAN by combining powerful analytics and advanced automation capabilities to maximize
performance, ensure reliability, and realize a self-learning, self-optimizing, and self-healing SAN.

The ThinkSystem DB720S is an autonomous SAN building block; along with unmatched 64Gb/s
performance, these Gen 7 switches offer a 50% latency reduction compared to Gen 6 to ensure maximum
performance from NVMe storage.

Scale-Out Flash Storage Environments

Lenovo storage solutions deliver the performance, application response time, and scalability needed for
next-generation data centers.

The ThinkSystem DB720S switch and storage arrays support NVMe over Fibre Channel, which enables
enterprises to move their high-performance, latency-sensitive workloads to the NVMe protocol without
disruption.

Gen 7 Fibre Channel

Lenovo Gen 7 Fibre Channel is the storage network infrastructure for mission-critical workloads in medium
to large environments. The DB720S is a Gen 7 building-block switch that provides ultra-low latency and
flexible scalability from 24 to 64 SFP+ ports in a 1U form factor, supporting up to 64Gbps port bandwidth
with simplified deployment, configuration, and management of SAN resources.

Autonomous SAN Innovation

Through a robust analytics architecture and Fabric Vision technology, the DB720S delivers autonomous
SAN infrastructure that offers self-learning, self-optimizing, and self-healing capabilities. Fabric Vision is
a suite of features that leverage comprehensive data collection capabilities with powerful analytics to
quickly understand the health and performance of the environment and identify any potential impacts or
trending problems.
2 | ThinkSystem DB720S FC SAN Switch

DB720S Specifications

Models:
  32G: 24 active ports with 32Gbps SWL FC transceivers (R/F airflow)
  64G: 24 active ports with 64Gbps SWL FC transceivers (R/F airflow)
Fibre Channel ports:
  Switch mode (default): minimum of 24 ports, maximum of 64 ports. Ports are enabled in increments of
  8 or 16 ports up to 64 ports via Ports on Demand (PoD) licenses; E_Ports, F_Ports, D_Ports, EX_Ports.
  Access Gateway default port mapping: 64 SFP+ F_Ports, 8 SFP+ N_Ports
Ports on Demand options:
  8-Port SW License with 32Gbps SWL FC transceivers
  8-Port SW License with 64Gbps SWL FC transceivers
  16-Port SW License with 64Gbps SWL SFP-DD FC transceivers
Aggregate bandwidth: 4.096Tb/s
Maximum fabric latency: 460 ns for locally switched ports (including FEC)
Media types (Brocade transceivers required):
  64Gbps: hot-pluggable SFP-DD, SN connector; 64Gb/s SWL
  64Gbps: hot-pluggable SFP+, LC connector; 64Gb/s SWL, LWL 10 km, ELWL 25 km
  32Gbps: hot-pluggable SFP+, LC connector; 32Gb/s SWL, LWL 10 km, ELWL 25 km
  10Gbps: hot-pluggable SFP+, LC connector; 10Gb/s SWL, LWL 10 km
  Fibre Channel distance is subject to fiber-optic cable and port speed.
Rack-mount rail kits: fixed rail kit included standard; mid-mount rack kit optional
Software: included enterprise software: Trunking, Fabric Vision, Extended Fabric and Integrated Routing
Enclosure:
  (R) Back-to-front airflow; non-port-side intake; power from back; 1U
  (F) Front-to-back airflow; non-port-side exhaust; power from back; 1U
Power supply: dual, hot-swappable redundant power supplies with integrated system cooling fans
Warranty: 3-year hardware and firmware/FOS (upgrades available)

For technical details, refer to the ThinkSystem DB720S Product Guide.
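As a quick sanity check on the specifications above, the 4.096Tb/s aggregate bandwidth figure follows directly from the port count and per-port line rate (a minimal arithmetic sketch; the variable names are illustrative only):

```python
# DB720S aggregate bandwidth: 64 ports, each at a 64 Gb/s line rate.
ports = 64
port_speed_gbps = 64

aggregate_gbps = ports * port_speed_gbps  # 4096 Gb/s
aggregate_tbps = aggregate_gbps / 1000    # 4.096 Tb/s, matching the spec table

print(f"{aggregate_tbps} Tb/s")  # 4.096 Tb/s
```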

About Lenovo

Lenovo (HKSE: 992) (ADR: LNVGY) is a US$62 billion revenue global technology powerhouse, ranked #171
in the Fortune Global 500, employing 77,000 people around the world, and serving millions of customers
every day in 180 markets. Focused on a bold vision to deliver smarter technology for all, Lenovo is
expanding into new growth areas of infrastructure, mobile, solutions and services. This transformation is
building a more inclusive, trustworthy, and sustainable digital society for everyone, everywhere.

For More Information

To learn more about Lenovo rack and infrastructure solutions, contact your Lenovo Business Partner or
visit: lenovo.com/servers/options.
Need servers? Learn more about Lenovo Servers: lenovo.com/systems/servers
Need services? Learn more about Lenovo Services: lenovo.com/systems/services
Get it as a service? Learn more about Lenovo TruScale: lenovo.com/truscale

© 2025 Lenovo. All rights reserved.


Availability: Offers, prices, specifications and availability may change without notice. Lenovo is not responsible for photographic or typographic
errors. Warranty: For a copy of applicable warranties, write to: Lenovo Warranty Information, 1009 Think Place, Morrisville, NC, 27560. Lenovo
makes no representation or warranty regarding third-party products or services. Trademarks: Lenovo, the Lenovo logo, ThinkSystem® are
trademarks or registered trademarks of Lenovo. Other company, product, or service names may be trademarks or service marks of others.
Document number DS0120, published April 26, 2022. For the latest version, go to lenovopress.lenovo.com/ds0120.
Lenovo ThinkSystem DB720S Gen7 FC SAN Switch
Product Guide

The Lenovo ThinkSystem DB720S Gen 7 FC SAN Switch, with its unmatched 64Gbps performance and
industry-leading port density, provides a building block that supports data growth, demanding workloads,
and data-center consolidation. Delivering 50% lower latency than the previous generation, this fixed-port
switch is designed to maximize the performance of flash and NVMe environments under demanding
workloads. With Brocade® Gen 7 technology, the DB720S delivers far more than just speed and latency
improvements: it can eliminate the pain of managing your data center, with autonomous SAN technology
that delivers a network able to self-learn, self-optimize, and self-heal without intervention.
The Lenovo DB720S is built for maximum flexibility, scalability, and ease of use. Organizations can scale
from 24 to 64 SFP+ ports in an efficient 1U form factor that delivers industry-leading port density and
space utilization.
The following figure shows the Lenovo ThinkSystem DB720S Gen 7 FC SAN Switch.

Figure 1. Lenovo ThinkSystem DB720S Gen 7 FC SAN Switch

Did you know?


The DB720S is designed for maximum flexibility and value. This enterprise-class switch offers pay-as-you-
grow scalability with Ports on Demand (PoD). Organizations can quickly, easily, and cost-effectively scale
from 24 ports to 64 ports in an efficient 1U form factor that delivers industry-leading port density and space
utilization. This switch also provides easy integration into existing SAN environments -- from 8Gb to 64Gb
speeds -- while introducing the benefits of Gen 7 Fibre Channel connectivity. And the DB720S simplifies
deployment, configuration, and management of SAN resources with a collection of easy-to-use tools.
With Lenovo FC SAN switch offerings, Lenovo can be your trusted partner that offers "one stop shop" and
single point of contact for delivery of leading edge technologies and innovations from Lenovo and other leading
IT vendors. These offerings can satisfy the wide range of your end-to-end IT infrastructure needs, including
end-user devices, servers, storage, networking, services, management software, and financing.



Lenovo ThinkSystem DB720S Gen7 FC SAN Switch 1
Key features
The DB720S provides exceptional price/performance value by including enterprise-class software as
standard, such as Fabric Vision®, ISL Trunking, Integrated Routing, and Extended Fabrics.
The ThinkSystem DB720S FC SAN Switch offers the following features and benefits:
Provides high scalability in an ultra-dense, 1U switch with up to 56 ports, providing up to 64
connections with the use of SFP-DD transceivers, to support high-density server virtualization, cloud
architectures, and flash-based storage environments.
Increases scalability by using SFP-DD transceivers that provide dual SN connections that allow
organizations to connect more servers, storage, or switches in a small footprint. Each transceiver
supports two independent connections of 64G Fibre Channel via a two-lane electrical interface.
Accelerates critical workloads with 64Gb/s links
Maximizes performance of NVMe storage with 50% lower switching latency than Gen 6
Enables pay-as-you-grow scalability from 24 to 64 ports—for on-demand flexibility
Safeguards mission-critical workloads from vulnerabilities with Gen 7 integrated security.
Provides cyber-resiliency with integrated security technology that protects mission-critical operations by
validating the integrity of Gen 7 hardware and software.
Guarantees critical application performance by automatically prioritizing traffic and avoiding congestion
with Brocade Traffic Optimizer.
Simplifies troubleshooting by identifying and isolating issues
Collects comprehensive telemetry data across the fabric to enable powerful analytics
Visualizes the data to easily understand the health and performance of the SAN
Automates repetitive tasks to save time and eliminate human error
Protects existing device investments with auto-sensing 8, 16 and 32 Gbit/sec capabilities and native
operation with any Brocade SAN fabrics.
Leverages Fabric Vision technology’s powerful monitoring, management, and diagnostic tools to
simplify administration, increase uptime, and reduce costs.
Supplies a rich set of standard features at no extra cost, including fabric services, advanced zoning,
adaptive networking, full fabric and access gateway operations, integrated 10 Gb FC, and diagnostic
tools.
Expands fabric capabilities with optional licensed functions, including trunking, advanced monitoring
and alerting, long-distance fabrics, and FC-FC routing.
Compresses in-flight data on up to four ports for more efficient link utilization.
Maximizes resiliency with redundant hot-swap power supplies.
Accelerates troubleshooting with built-in advanced diagnostics tools featuring ClearLink Diagnostics
with D_Ports (Diagnostic Ports) and select adapters from QLogic and Emulex, which helps ensure
optical and signal integrity for 32/64 Gb Fibre Channel optics and cables.
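The port-scaling arithmetic behind the "up to 56 ports, providing up to 64 connections" figure can be sketched as a quick back-of-envelope check. This is an illustrative sketch only; the helper function name is invented, not vendor tooling:

```python
# Illustrative sketch of DB720S port math (not vendor tooling):
# 48 standard SFP+ cages plus 8 double-density (DD) cages give 56
# physical ports; each SFP-DD transceiver carries two independent
# 64GFC connections, so fully populating the DD cages yields 64.

SFP_PLUS_PORTS = 48   # standard SFP+ cages, one connection each
SFP_DD_PORTS = 8      # double-density cages

def total_connections(dd_cages_with_sfp_dd: int) -> int:
    """Count usable connections when some DD cages hold SFP-DD optics.

    A DD cage holding a regular SFP+ carries one connection; a DD cage
    holding an SFP-DD transceiver carries two.
    """
    if not 0 <= dd_cages_with_sfp_dd <= SFP_DD_PORTS:
        raise ValueError("the DB720S has 8 double-density cages")
    single = SFP_PLUS_PORTS + (SFP_DD_PORTS - dd_cages_with_sfp_dd)
    return single + 2 * dd_cages_with_sfp_dd

print(total_connections(0))             # 56: every cage holds an SFP+
print(total_connections(SFP_DD_PORTS))  # 64: all DD cages hold SFP-DD
```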
Brocade Fabric Vision
To further simplify operations and increase visibility, the DB720S includes Brocade Fabric Vision® technology
to monitor and analyze the SAN. This technology provides visibility and insight to quickly identify problems and
achieve critical service-level agreements (SLAs).
The DB720S Switch with Fabric Vision technology provides a robust analytics architecture that delivers
autonomous SAN technology through self-learning, self-optimizing, and self-healing capabilities. Fabric Vision
technology is a suite of features that leverage comprehensive data collection capabilities with powerful
analytics to quickly understand the health and performance of the environment and identify any potential
impacts or trending problems. The combination of SAN analytics and automation technologies unlocks the
capabilities to deliver a self-learning, self-optimizing, and self-healing autonomous SAN.



Features of Fabric Vision include:
Self-learning
Gather and transform billions of data points into network intelligence
Visualize application and device-based performance and health metrics
Detect abnormal traffic behaviors and degraded performance
Eliminate operational steps by automatically learning application flows
Self-optimizing
Optimize critical application performance by automatically prioritizing traffic
Guarantee application performance by proactively monitoring and actively shaping traffic
Eliminate human errors and performance impacts through open DevOps automation technology
Optimize administrative resources with cloud-like SAN orchestration
Self-healing
Instantly notify end devices of congestion for automatic resolution
Ensure data delivery with automatic failover from physical or congestion issues
Detect and automatically reconfigure out-of-compliance fabrics
Eliminate performance impacts by automatically taking corrective action on misbehaving devices
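The policy-based monitoring described above (MAPS-style thresholds driving automated actions) can be illustrated generically. This is emphatically not the Fabric OS API; the rule names, metric fields, and actions below are invented for illustration:

```python
# Generic sketch of policy-based threshold monitoring in the spirit of
# MAPS. Rule/metric/action names are invented; real MAPS policies are
# configured in Fabric OS, not via this code.

from dataclasses import dataclass

@dataclass
class Rule:
    metric: str       # e.g. "crc_errors" or "tx_utilization_pct"
    threshold: float  # trigger when the observed value exceeds this
    action: str       # e.g. "alert" or "fence_port"

def evaluate(rules: list[Rule], sample: dict[str, float]) -> list[str]:
    """Return the actions triggered by one telemetry sample."""
    return [r.action for r in rules
            if sample.get(r.metric, 0.0) > r.threshold]

policy = [Rule("crc_errors", 10, "alert"),
          Rule("tx_utilization_pct", 90, "alert_congestion")]

# 25 CRC errors breaches the first rule; 40% utilization does not.
print(evaluate(policy, {"crc_errors": 25, "tx_utilization_pct": 40}))
```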
Brocade SANnav Management Portal
To streamline management workflows, organizations can leverage the optional subscription-based Brocade
SANnav Management Portal software to accelerate the deployment of new applications, switches, servers,
and storage. Furthermore, a modernized graphical user interface (GUI) improves operational efficiencies with
visual dashboards for instant visibility and faster troubleshooting.
Perfect for high-performance, latency-sensitive workloads
Enterprises are quickly moving their high-performance, latency-sensitive workloads to NVMe flash-based
storage. The DB720S Switch supports NVMe over Fibre Channel, enabling organizations to integrate Gen 7
Fibre Channel networks with next-generation flash storage, without a disruptive rip-and-replace. This enables
enterprises to achieve faster application response times and harness the performance innovation inherent in
NVMe storage. NVMe, combined with the high performance and low latency of Gen 7 Fibre Channel, delivers
the performance, application response time, and scalability needed for next-generation data centers.
Access Gateway
The DB720S can be deployed as a full-fabric switch or as an Access Gateway, which simplifies fabric
topologies and allows heterogeneous fabric connectivity (the default mode setting is a switch). Access
Gateway mode utilizes N_Port ID Virtualization (NPIV) switch standards to present physical and virtual
servers directly to the core of SAN fabrics. Access Gateway allows you to configure your fabric to handle
additional devices without increasing the number of switch domains. Key benefits of Access Gateway mode
include the following:
Improved scalability for large or rapidly growing server and virtual server environments
Reduced management of the network edge, since Access Gateway does not have a domain identity
and appears transparent to the core fabric
Support for heterogeneous SAN configurations without reduced functionality for server connectivity
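The idea behind Access Gateway mode, many server-facing F_Ports presented through a few NPIV-enabled N_Port uplinks so the gateway consumes no switch domain, can be modeled minimally. This is an assumption-laden illustration (a simple round-robin distribution), not Brocade's actual mapping algorithm; the default DB720S mapping per this guide is 56 F_Ports to 8 N_Ports:

```python
# Illustrative model of Access Gateway port mapping (NOT Brocade's
# actual algorithm): server-facing F_Ports are distributed across
# NPIV-enabled N_Port uplinks into the core fabric.

def map_f_to_n(f_ports: int = 56, n_ports: int = 8) -> dict[int, int]:
    """Round-robin each F_Port index onto an N_Port uplink index."""
    if n_ports <= 0:
        raise ValueError("need at least one N_Port uplink")
    return {f: f % n_ports for f in range(f_ports)}

mapping = map_f_to_n()
# With the default 56-to-8 mapping, each uplink serves 7 F_Ports here.
per_uplink = {n: sum(1 for v in mapping.values() if v == n)
              for n in range(8)}
print(per_uplink)
```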

Components and connectors
The following figure shows the port-side view of the DB720S FC SAN Switch.

Figure 2. DB720S FC SAN Switch port-side view


The following figure shows the non-port side view of the DB720S FC SAN Switch.

Figure 3. DB720S FC SAN Switch non-port-side view

System specifications
The following table lists the ThinkSystem DB720S system specifications.

Table 1. System specifications


Component Specification
Machine type 7D5J
System Architecture
Fibre Channel ports: Switch mode (default): Minimum of 24 ports and maximum of 64 ports. Ports are enabled
in increments of 8 ports up to 64 ports via Ports on Demand (PoD) licenses; E_Ports, F_Ports, M_Ports,
D_Ports, and EX_Ports.

Up to 48 autosensing ports that support 32Gb or 64Gb SFP+ transceivers

Up to 8 double-density (DD) ports that support either SFP+ (32Gb or 64Gb) transceivers
for a single connection, or SFP-DD transceivers that each support two 64G connections,
thus providing an additional 16 ports at 64Gb.

Access Gateway default port mapping: 56x SFP+ F_Ports, 8x SFP+ N_Ports.

The SFP+ ports are capable of auto-negotiating to 8, 16, 32, or 64Gb speeds depending on the
SFP+ model and the minimum supported speed of the optical transceiver at the other end of the
link.



Scalability: Full-fabric architecture with a maximum of 239 switches
Certified maximum: 4K active nodes; 56 switches, 19 hops in Brocade Fabric OS® fabrics
Performance: Non-blocking architecture with wire-speed forwarding of traffic:

8GFC: 8.5 Gb/sec line speed, full duplex
10GFC: 10.53 Gb/sec line speed, full duplex; 10Gb/s optionally programmable to fixed port speed
16GFC: 14.025 Gb/sec line speed, full duplex
32GFC: 28.05 Gb/sec line speed, full duplex
64GFC: 57.8 Gb/sec line speed, full duplex

Traffic load balancing: Frame-based ISL Trunking supports up to eight SFP+ ports per ISL trunk;
up to 512Gb/s per ISL trunk when using 64Gb/s optics.
Dynamic Path Selection (DPS) provides exchange-based load balancing across all available ISLs.
Aggregate bandwidth: 4.096 Tb/s
Maximum fabric latency: Latency for locally switched ports is 460 ns (including FEC).
Maximum frame size: 2112-byte payload
Frame buffers 24K per switching ASIC
Classes of service Class 2, Class 3, Class F (inter-switch frames)
Port types D_Port (ClearLink® Diagnostic Port), E_Port, EX_Port, F_Port, M_Port
Optional port-type control
Brocade Access Gateway mode: F_Port and NPIV-enabled N_Port

Data traffic types Fabric switches supporting unicast


Media types 64Gb/s: Brocade Secure hot-pluggable SFP-DD, SN connector; 64Gb/s SWL
64Gb/s: Brocade Secure hot-pluggable SFP+, LC connector; 64Gb/s SWL, LWL 10 km,
ELWL 25 km
32Gb/s: Brocade Secure hot-pluggable SFP+, LC connector; 32Gb/s SWL, LWL 10 km,
ELWL 25 km
10Gb/s: Brocade Secure hot-pluggable SFP+, LC connector; 10Gb/s SWL, LWL 10 km

Fibre Channel distance is subject to fiber-optic cable and port speed.

USB port One standard USB port for firmware download, support save, and configuration upload or
download.
Fabric services BB Credit Recovery; Brocade Advanced Zoning (Default Zoning, Port/WWN Zoning, Peer
Zoning); Congestion Signaling; Dynamic Path Selection (DPS); Extended Fabrics; Fabric
Performance Impact Notification (FPIN); Fabric Vision; FDMI; FICON CUP; Flow Vision; F_Port
Trunking; FSPF; Integrated Routing; ISL Trunking; Management Server; NPIV; NTP v3; Port
Decommission/Fencing; QoS; Registered State Change Notification (RSCN); Name Server; Slow
Drain Device Quarantine (SDDQ); Target-Driven Zoning; Traffic Optimizer; Virtual Fabrics
(Logical Switch, Logical Fabric); VMID+ and AppServer.

Access Gateway mode: Some fabric services do not apply or are unavailable in Access
Gateway mode



Extension: Fibre Channel, in-flight compression (Brocade LZO) and encryption (AES-GCM-256 encryption on
FC ISLs, E_Port); integrated optional 10Gb/s Fibre Channel for DWDM MAN connectivity
Power supplies: Dual, hot-swappable redundant power supplies (80 PLUS Gold) with integrated system
cooling fans (3 built into each power supply), N+N cooling redundancy
Management
Management Brocade Advanced Web Tools; Brocade SANnav Management Portal and SANnav Global View;
Command Line Interface (CLI); HTTP/HTTPS; RESTful API; SNMP v1/v3 (FE MIB, FC
Management MIB); SSH.
Security DH-CHAP (between switches and end devices); FCAP switch authentication; HTTPS; IP filtering;
LDAP with IPv6; OpenLDAP; Port Binding; RADIUS; TACACS+; user-defined Role-Based
Access Control (RBAC); Secure Boot, Secure Copy (SCP); Secure Syslog; Secure FTP (SFTP);
Secure Shell (SSH) v2; Secure Socket Layer (SSL); Switch Binding; Trusted Switch
Management access: 10/100/1000Mb/s Ethernet (RJ-45) port, serial console port (mini-USB)
Diagnostics Active Support Connectivity (ASC) and Brocade Support Link (BSL); built-in flow generator;
ClearLink optics and cable diagnostics, including electrical/optical loopback, link
traffic/latency/distance; Fabric Performance Impact Monitoring (FPI); flow mirroring; Forward
Error Correction (FEC); frame viewer; IO Insight for SCSI and NVMe monitoring; Monitoring and
Alerting Policy Suite (MAPS); nondisruptive daemon restart; optics health monitoring; POST and
embedded online/offline diagnostics, including environmental monitoring, FCping and Pathinfo
(FC traceroute); power monitoring; RAStrace logging; Rolling Reboot Detection (RRD);
Syslog/Audit Log; VM Insight
Mechanical
Enclosure: 1U; power from back; airflow options: front-to-back airflow with non-port-side exhaust, or
back-to-front airflow with non-port-side intake

Dimensions Width: 440 mm (17.3 in.)


Height: 44 mm (1.7 in.)
Depth: 356 mm (14.0 in.)

Weight 7.17 kg (15.8 lb) with two power supply FRUs, without transceivers
Support
Warranty Three-year customer-replaceable unit limited warranty with 9x5 next business day parts
delivered. Three-year software/firmware entitlement.
Service and support: Optional service upgrades are available through Lenovo Services: 9x5 next business
day onsite response, 24x7 2-hour or 4-hour onsite response, 24x7 6-hour or 24-hour committed service
repair, up to 5 years of warranty coverage, 1-year or 2-year post-warranty extensions, and Basic Hardware
Installation Services.
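Two of the headline Table 1 figures fall out of simple arithmetic, and a back-of-envelope check makes the relationship explicit (this is an illustrative sketch, not vendor data: 64 connections at 64 Gb/s gives the aggregate bandwidth, and 8 trunked 64G links give the per-trunk figure):

```python
# Back-of-envelope check of two Table 1 figures (illustrative only):
# aggregate bandwidth and maximum ISL trunk bandwidth.

PORT_SPEED_GBPS = 64      # 64GFC optics
MAX_CONNECTIONS = 64      # 56 ports, 64 connections with SFP-DD
ISL_PORTS_PER_TRUNK = 8   # frame-based trunking limit

aggregate_tbps = MAX_CONNECTIONS * PORT_SPEED_GBPS / 1000
trunk_gbps = ISL_PORTS_PER_TRUNK * PORT_SPEED_GBPS

print(aggregate_tbps)  # 4.096, matching the 4.096 Tb/s in Table 1
print(trunk_gbps)      # 512, matching 512 Gb/s per trunk with 64G optics
```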

Models
The following table lists the ThinkSystem DB720S FC SAN Switch models.

Table 2. Lenovo ThinkSystem DB720S FC SAN Switch models


Part number  Machine Type-Model  Feature code  Description
Port side exhaust airflow
7D5JCTO1WW 7D5JA000WW BF60 Lenovo ThinkSystem DB720S, 24 ports active with 32Gb SWL SFPs, 2
power supplies (Port side exhaust), rail kit, Software: Fabric Vision,
Trunking, Integrated Routing, Extended Fabric
7D5JCTO2WW 7D5JA001WW BF61 Lenovo ThinkSystem DB720S, 24 ports active with 64Gb SWL SFPs, 2
power supplies (Port side exhaust), rail kit, Software: Fabric Vision,
Trunking, Integrated Routing, Extended Fabric (model requires FOS
9.0.1a or later)
Port side intake airflow (for Telco)
7D5JCTO3WW 7D5JA002WW BF62 Lenovo ThinkSystem DB720S, 24 ports active with 32Gb SWL SFPs, 2
power supplies (Port side intake like Telco), rail kit, Software: Fabric
Vision, Trunking, Integrated Routing, Extended Fabric
7D5JCTO4WW 7D5JA003WW BF63 Lenovo ThinkSystem DB720S, 24 ports active with 64Gb SWL SFPs, 2
power supplies (Port side intake like Telco), rail kit, Software: Fabric
Vision, Trunking, Integrated Routing, Extended Fabric (model requires
FOS 9.0.1a or later)

The DB720S FC SAN Switch part numbers include the following items:
One FC SAN Switch
Model CTO1WW/CTO3WW: With 24 ports activated and 24x 32 Gb FC SWL SFP+ transceivers
included
Model CTO2WW/CTO4WW: With 24 ports activated and 24x 64 Gb FC SWL SFP+ transceivers
included
Serial cable (Mini-USB console cable to DB-9/RJ-45)
Rubber feet for setting up the switch as a standalone unit
Universal rack mount kit, 4-post & installation guide
Web pointer document (Downloading FOS, SANnav and Docs)
Firmware Download Instructions Flyer (Instructions for downloading publicly-available Brcd docs + docs
behind CSP + access to open source code.)
Note: The switch comes standard without power cords; two power cables must be purchased together with
the switch (see Power supplies and cables for details).

Port activation licenses
Each DB720S FC SAN Switch model includes 24 licensed ports and 24x 32 Gb or 64 Gb FC SWL SFP+
transceivers, depending on the model. The remaining 32 unlicensed ports can be activated by purchasing and
installing Ports on Demand (PoD) licenses, which are available with transceivers in 8-port increments.
The following table lists additional POD options for the DB720S FC SAN Switch.

Table 3. POD options


Part number  Feature code  Description  Maximum quantity
4M27A65819 BF6L DB720S 8-Port SW License with 8x 32 Gbps SWL SFP+ Transceivers 4
4M27A65820 BFGC DB720S 8-Port SW License with 8x 64 Gbps SWL SFP+ transceivers 4
4M27A65821 BPJ4 DB720S 16-Port SW License with 8x 64 Gbps SWL SFP-DD transceivers 1
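The PoD arithmetic above (24 base ports, then 8-port licenses up to the 56 physical ports; the 64-connection figure additionally requires SFP-DD optics) can be sketched as a small sizing helper. The function name is invented for illustration and is not Lenovo tooling:

```python
# Illustrative Ports on Demand sizing helper (invented, not Lenovo
# tooling): a DB720S ships with 24 licensed ports; the remaining 32
# physical ports are enabled with 8-port PoD licenses (up to 4).

import math

BASE_PORTS = 24
MAX_PHYSICAL_PORTS = 56  # 64 connections additionally need SFP-DD optics
POD_INCREMENT = 8

def pod_licenses_needed(target_ports: int) -> int:
    """8-port PoD licenses needed to reach target_ports active ports."""
    if not BASE_PORTS <= target_ports <= MAX_PHYSICAL_PORTS:
        raise ValueError("DB720S scales from 24 to 56 physical ports")
    return math.ceil((target_ports - BASE_PORTS) / POD_INCREMENT)

print(pod_licenses_needed(24))  # 0: base configuration
print(pod_licenses_needed(40))  # 2: two 8-port licenses
print(pod_licenses_needed(56))  # 4: fully licensed
```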

Transceivers and cables


With the flexibility of the DB720S FC SAN Switch, customers can choose the following connectivity
technologies:
SFP-DD (double density) ports
For 64 Gbps FC links, customers can use 64 Gb FC SFP-DD SWL optical transceivers for
distances up to 100 meters on OM4 or up to 70 meters on OM3 50 µm MMF cables. These
transceivers can operate at 64 Gbps, 32 Gbps, or 16 Gbps speeds.
SFP+ ports
For 64 Gbps FC links, customers can use 64 Gb FC SFP+ SWL optical transceivers for
distances up to 100 meters on OM4 or up to 70 meters on OM3 50 µm MMF cables. For longer
distances, the 64 Gb FC LWL SFP+ optical transceivers can support up to 10 km on SMF
cables. For extended distances, the 64 Gb FC ELWL SFP+ optical transceivers can support up
to 25 kilometers on SMF cables. These transceivers can operate at 64 Gbps, 32 Gbps, or 16
Gbps speeds.
For 32 Gbps FC links, customers can use 32 Gb FC SFP+ SWL optical transceivers for
distances up to 100 meters on OM4 or up to 70 meters on OM3 50 µm MMF cables. For longer
distances, the 32 Gb FC LWL SFP+ optical transceivers can support up to 10 km on SMF
cables. For extended distances, the 32 Gb FC ELWL SFP+ optical transceivers can support up
to 25 kilometers on SMF cables. These transceivers can operate at 32 Gbps, 16 Gbps, or 8
Gbps speeds (except ELWL part number 4M27A65431, which can only operate at 32 Gbps
and 16 Gbps).
For 10 Gbps FC links, customers can use 10 Gb FC SFP+ SWL transceivers for distances up to
125 meters on OM4 or up to 100 meters on OM3 50 µm MMF cables, or 10 Gb FC SFP+ LWL
transceivers for distances up to 10 km on SMF cables. 10 Gb FC operation allows metro
connectivity by directly utilizing a fiber optic cable between sites or by creating multiple channels
on an optical cable between sites, utilizing Wave Division Multiplexing (WDM) technology (the
Extended Fabric feature is NOT required for long-distance 10 Gb FC connectivity).
1 GbE RJ-45 management port: Customers can use UTP cables for distances up to 100 meters.
The DB720S FC SAN Switch comes with 24x 32 Gb or 64 Gb FC SWL SFP+ transceivers. Additional SWL,
LWL, and ELWL SFP+ transceivers can be ordered for the switch, if needed.
The following table lists the supported transceiver and cable options. POD kits and switches come with SWL
optics included.
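The SWL distance limits quoted above can be collected into a quick-reference lookup for cable-run planning. The values are copied from this section; the helper is an illustrative planning aid, not an official calculator, and it ignores connector loss and patch-panel budgets:

```python
# Quick-reference sketch of the maximum SWL multimode distances quoted
# in this section (illustrative planning aid, not an official tool;
# real link budgets must also account for connectors and patch panels).

MAX_SWL_DISTANCE_M = {
    # (link speed in Gbps, fiber grade) -> max distance in meters
    (64, "OM4"): 100, (64, "OM3"): 70,
    (32, "OM4"): 100, (32, "OM3"): 70,
    (10, "OM4"): 125, (10, "OM3"): 100,
}

def swl_reach_ok(speed_gbps: int, fiber: str, run_m: float) -> bool:
    """True if a cable run fits within the quoted SWL distance."""
    return run_m <= MAX_SWL_DISTANCE_M[(speed_gbps, fiber)]

print(swl_reach_ok(64, "OM4", 90))  # True: 90 m is within 100 m on OM4
print(swl_reach_ok(64, "OM3", 90))  # False: 64GFC tops out at 70 m on OM3
```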



Table 4. Transceivers and cables
Part number Feature code Description Maximum quantity
64 Gb FC SFP-DD (Double Density) transceivers (require FOS 9.1.0 or later)
4M27A65427 BPJ5 Brocade Secure 64-Gb SWL SFP-DD Transceiver 8
4M27A65428 BPJ6 Brocade Secure 64-Gb SWL SFP-DD Transceiver 8-pack 1
64 Gb FC SFP+ transceivers (require FOS 9.0.1a or later)
4M27A65425 BF6J Brocade Secure 64-Gb SWL SFP+ Transceiver 56
4M27A65426 BF6K Brocade Secure 64-Gb SWL SFP+ Transceiver 8-pack 7
4M27A65433 BQQG Brocade Secure 64Gb LWL SFP+ Transceiver (10 km) 56
4M27A65434 BQQH Brocade Secure 64Gb LWL SFP+ Transceiver (10 km) 8-pack 7
4M27A65432 BQQF Brocade Secure 64Gb ELWL SFP+ Transceiver (25 km) 56
32 Gb FC SFP+ transceivers
4M27A65416 BF69 Brocade Secure 32-Gb SWL SFP+ Transceiver 56
4M27A65417 BF6A Brocade Secure 32-Gb SWL SFP+ Transceiver 8-pack 7
4M27A65418 BF6B Brocade Secure 32-Gb LWL SFP+ Transceiver (10 km) 56
4M27A65419 BF6C Brocade Secure 32-Gb LWL SFP+ Transceiver (10 km) 8-pack 7
4M27A65424 BF6D Brocade Secure 32-Gb ELWL SFP+ Transceiver (25 km) 56**
4M27A65431 BQQE Brocade Secure 32Gb ELWL SFP+ V2 Transceiver (25 km)* 56**
10 Gb FC SFP+ transceivers
4M27A65420 BF6E Brocade Secure 10Gb FC SWL SFP+ Transceiver 56
4M27A65421 BF6F Brocade Secure 10Gb FC LWL SFP+ Transceiver 56
OM3 optical cables for 32 Gb and 64 Gb FC SW SFP+ transceivers
00MN499 ASR5 Lenovo 0.5m LC-LC OM3 MMF Cable 56
00MN502 ASR6 Lenovo 1m LC-LC OM3 MMF Cable 56
00MN505 ASR7 Lenovo 3m LC-LC OM3 MMF Cable 56
00MN508 ASR8 Lenovo 5m LC-LC OM3 MMF Cable 56
00MN511 ASR9 Lenovo 10m LC-LC OM3 MMF Cable 56
00MN514 ASRA Lenovo 15m LC-LC OM3 MMF Cable 56
00MN517 ASRB Lenovo 25m LC-LC OM3 MMF Cable 56
00MN520 ASRC Lenovo 30m LC-LC OM3 MMF Cable 56
OM4 optical cables for 32 Gb and 64 Gb FC SW SFP+ transceivers
4Z57A10845 B2P9 Lenovo 0.5m LC-LC OM4 MMF Cable 56
4Z57A10846 B2PA Lenovo 1m LC-LC OM4 MMF Cable 56
4Z57A10847 B2PB Lenovo 3m LC-LC OM4 MMF Cable 56
4Z57A10848 B2PC Lenovo 5m LC-LC OM4 MMF Cable 56
4Z57A10849 B2PD Lenovo 10m LC-LC OM4 MMF Cable 56
4Z57A10850 B2PE Lenovo 15m LC-LC OM4 MMF Cable 56
4Z57A10851 B2PF Lenovo 25m LC-LC OM4 MMF Cable 56
4Z57A10852 B2PG Lenovo 30m LC-LC OM4 MMF Cable 56
OM4 SN to LC (SFP-DD to SFP) optical cables for 64 Gb FC SW SFP-DD transceivers
4X97A81905 BPAF Lenovo 1M SN-LC SFP-DD OM4 FC Cable 16
4X97A81907 BPAG Lenovo 3M SN-LC SFP-DD OM4 FC Cable 16
4X97A81908 BPAH Lenovo 5M SN-LC SFP-DD OM4 FC Cable 16

4X97A81910 BPAJ Lenovo 10M SN-LC SFP-DD OM4 FC Cable 16
4X97A81911 BPAK Lenovo 15M SN-LC SFP-DD OM4 FC Cable 16
4X97A81913 BPAL Lenovo 25M SN-LC SFP-DD OM4 FC Cable 16
4X97A81914 BPAM Lenovo 30M SN-LC SFP-DD OM4 FC Cable 16
OM4 SN to SN (SFP-DD to SFP-DD) optical cables for 64 Gb FC SW SFP-DD transceivers
4X97A81893 BPA8 Lenovo 1M SN-SN SFP-DD OM4 FC Cable 16
4X97A81895 BPA9 Lenovo 3M SN-SN SFP-DD OM4 FC Cable 16
4X97A81896 BPAA Lenovo 5M SN-SN SFP-DD OM4 FC Cable 16
4X97A81898 BPAB Lenovo 10M SN-SN SFP-DD OM4 FC Cable 16
4X97A81899 BPAC Lenovo 15M SN-SN SFP-DD OM4 FC Cable 16
4X97A81901 BPAD Lenovo 25M SN-SN SFP-DD OM4 FC Cable 16
4X97A81902 BPAE Lenovo 30M SN-SN SFP-DD OM4 FC Cable 16
UTP Category 6 cables (Green) for the 1 GbE RJ-45 management port
00WE123 AVFW 0.75m CAT6 Green Cable 1
00WE127 AVFX 1.0m CAT6 Green Cable 1
00WE131 AVFY 1.25m CAT6 Green Cable 1
00WE135 AVFZ 1.5m CAT6 Green Cable 1
00WE139 AVG0 3m CAT6 Green Cable 1
90Y3718 A1MT 10m CAT6 Green Cable 1
90Y3727 A1MW 25m CAT6 Green Cable 1
UTP Category 5e cables (Blue) for the 1 GbE RJ-45 management port
40K5679 3801 0.6m Blue Cat5e Cable 1
40K8785 3802 1.5m Blue Cat5e Cable 1
40K5581 3803 3m Blue Cat5e Cable 1
40K8927 3804 10m Blue Cat5e Cable 1
40K8930 3805 25m Blue Cat5e Cable 1
* This ELWL transceiver operates only at 32 Gbps and 16 Gbps.
** ELWL requires the same optic type/part number on both ends of the link (no mixing) to assure interoperability.
The following table lists the cabling requirements for the switch.

Table 5. DB720S FC SAN Switch cabling requirements


Transceiver Standard Cable Connector
64 Gb Fibre Channel
64 Gb FC SWL SFP+ (4M27A65425, 4M27A65426), FC-PI-6. Cable: up to 30 m with LC-LC MMF cables
supplied by Lenovo (see Table 4) or the following 850 nm 50 µm MMF cables. Connector: LC.
64GFC: Up to 100 m (OM4) or up to 70 m (OM3)
32GFC: Up to 100 m (OM4) or up to 70 m (OM3)
16GFC: Up to 125 m (OM4) or up to 100 m (OM3)
8GFC: No support

64 Gb FC LWL SFP+ (4M27A65433, 4M27A65434), FC-PI-6. Cable: 1310 nm 9 µm SMF. Connector: LC.
64GFC, 32GFC, 16GFC: Up to 10 km.



64 Gb FC ELWL SFP+ (4M27A65432), FC-PI-6. Cable: 1310 nm 9 µm SMF. Connector: LC.
64GFC, 32GFC, 16GFC: Up to 25 km.
32 Gb Fibre Channel
32 Gb FC SWL SFP+ (4M27A65416, 4M27A65417), FC-PI-6. Cable: up to 30 m with LC-LC MMF cables
supplied by Lenovo (see Table 4) or the following 850 nm 50 µm MMF cables. Connector: LC.
32GFC: Up to 100 m (OM4) or up to 70 m (OM3).
16GFC: Up to 125 m (OM4) or up to 100 m (OM3).
8GFC: Up to 125 m (OM4) or up to 100 m (OM3).

32 Gb FC LWL SFP+ (4M27A65418, 4M27A65419), FC-PI-6. Cable: 1310 nm 9 µm SMF. Connector: LC.
32GFC, 16GFC, 8GFC: Up to 10 km.

32 Gb FC ELWL SFP+ (4M27A65424, 4M27A65431), FC-PI-6. Cable: 1310 nm 9 µm SMF. Connector: LC.
4M27A65424: 32GFC, 16GFC, 8GFC: Up to 25 km.
4M27A65431: 32GFC, 16GFC: Up to 25 km.

10 Gb Fibre Channel
10Gb FC SWL SFP+ (4M27A65420), FC-10GFC. Cable: 850 nm 50 µm MMF. Connector: LC.
10GFC: Up to 550 m (OM4) or up to 300 m (OM3).

10Gb FC LWL SFP+ (4M27A65421), FC-10GFC. Cable: 1310 nm 9 µm SMF. Connector: LC.
10GFC: Up to 10 km.

Management ports
Serial console port (mini-USB), RS-232. Cable: Mini-USB console cable to DB-9/RJ-45 (included with the
switch). Connector: RJ45.
10/100/1000 Mb Ethernet port, 1000BASE-T. Cable: up to 25 m with UTP cables supplied by Lenovo (see
Table 4), or other UTP Category 5, 5e, or 6 cables up to 100 meters. Connector: RJ45.

Firmware
For details on the features supported with the DB720S FC SAN Switch, see the Administration Guide for the
latest available Fabric OS version:
https://www.broadcom.com/products/fibre-channel-networking/software/fabric-operating-system
The following features come standard with the DB720S FC SAN Switch:
Enterprise Bundle
ISL Trunking (TRK): Allows frame-based consolidation of up to 8 inter-switch links (ISLs) into
fault-tolerant and load-balanced trunks with bandwidth of up to 512 Gbps when using 64 Gbps links.
Fabric Vision (FV)
Monitoring and Alerting Policy Suite (MAPS): Provides a policy-based, fabric-wide
threshold monitoring and alerting tool.
Flow Vision: Identifies, monitors, and analyzes specific application flows.
VM Insight: Seamlessly monitors health and performance of individual Virtual Machines
(VMs) to quickly identify abnormal VM behavior and enable administrators to proactively
facilitate troubleshooting and fault isolation, helping to ensure performance and
operational stability.
IO Insight: Proactively monitors I/O performance and behavior to gain deep insight into
issues and ensure service levels by non-disruptively and non-intrusively gathering I/O
statistics for storage traffic and applying this information within a policy-based monitoring
and alerting suite to configure thresholds and alarms.
Fabric Performance Impact (FPI) Monitoring: Leverages predefined MAPS policies to
automatically identify and isolate devices that cause network performance issues by
detecting different latency severity levels, and to alert administrators.
Extended Fabric (EF): Extends Fibre Channel SANs beyond 10 km distance limitations for
replication and backup at full bandwidth.
Control Unit Port (CUP): Provides an in-band management interface that the FICON host
(mainframe) can use for managing and monitoring the FC SAN switch.
Integrated Routing: The FC-FC routing service provides Fibre Channel routing between two or more
fabrics without merging those fabrics.

Management software
Lenovo offers optional Brocade SANnav™ Management Portal and SANnav Global View software license
subscriptions that provide comprehensive visibility into the SAN environment, allow administrators to quickly
identify, isolate, and correct problems, and accelerate administrative tasks by simplifying and automating
workflows.
SANnav Management Portal is a next-generation SAN management application with a simple browser-based
user interface (UI) and with a focus on streamlining common workflows, such as configuration, zoning,
deployment, monitoring, troubleshooting, reporting, and analytics.
Lenovo offers the following SANnav Management Portal subscriptions:
SANnav Management Portal Base: Designed for mid-sized SANs to manage up to 600 SAN switch
ports only (SAN director ports can only be managed with the Enterprise edition).
SANnav Management Portal Enterprise: Designed for enterprise-class SANs to manage up to 15 000
SAN switch and director ports.
SANnav Management Portal supports all Brocade SAN switches and platforms that run the Fabric OS®
version 7.4 or above, including Lenovo B6505, B6510, DB610S, DB620S, DB400D, DB720S, DB730S,
DB800D, Brocade Directors, and FC5022.
With SANnav Global View, administrators can quickly visualize the health, performance, and inventory of
multiple SANnav Management Portal instances using a simple, intelligent dashboard and can easily navigate
from a global view down to local environments to investigate points of interest. SANnav Global View is
designed to manage up to 20 SANnav Management Portal instances.
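The sizing rule for the two SANnav Management Portal editions can be expressed as a small selection helper. The function name is invented for illustration; the thresholds (600 switch-only ports for Base, 15,000 switch and director ports for Enterprise) come from the descriptions above:

```python
# Illustrative SANnav edition chooser (function name invented):
# Base manages up to 600 SAN switch ports and no director ports;
# Enterprise manages up to 15,000 switch and director ports.

def sannav_edition(switch_ports: int, director_ports: int = 0) -> str:
    total = switch_ports + director_ports
    if total > 15_000:
        raise ValueError("beyond a single SANnav Enterprise instance")
    if director_ports == 0 and switch_ports <= 600:
        return "Base"
    return "Enterprise"

print(sannav_edition(400))        # Base: small switch-only SAN
print(sannav_edition(400, 128))   # Enterprise: director ports present
print(sannav_edition(5_000))      # Enterprise: beyond 600 ports
```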



For more information, refer to the SANnav Management Portal documentation:
http://www.broadcom.com/products/fibre-channel-networking/software/sannav-management-portal#documentation
The following table lists ordering information for the optional SANnav Management Portal and SANnav Global
View management tools. After a client has an active SANnav license, Lenovo offers a license
extension/renewal option. This option gives clients the flexibility to extend their subscription to a specific end
date, allowing them to align the subscription with their company’s budget or with the warranty of their FC SAN
switches and directors. Please engage directly with your Lenovo sales representative for more details.

Table 6. SANnav Management Portal and SANnav Global View subscription licenses
Part number Feature code Description
SANnav Management Portal electronic authorization licenses
7S0C0010WW S1K6 Brocade SANnav Mgmt Portal Base Edition - 1YR License 600 ports
7S0C0013WW S1K8 Brocade SANnav Mgmt Portal Base Edition - 3YR License 600 ports
7S0C001KWW S4MB Brocade SANnav Mgmt Portal Base Edition - 5YR License 600 ports
7S0C0011WW S1K7 Brocade SANnav Mgmt Portal Enterprise Edition - 1YR License 15K ports
7S0C0014WW S1K9 Brocade SANnav Mgmt Portal Enterprise Edition - 3YR License 15K ports
7S0C001LWW S4MC Brocade SANnav Mgmt Portal Enterprise Edition - 5YR License 15K ports
SANnav Global View electronic authorization licenses
7S0C0012WW S1D8 Brocade SANnav Global View - 1YR License
7S0C0015WW S1D9 Brocade SANnav Global View - 3YR License
7S0C001JWW S4MA Brocade SANnav Global View - 5YR License

The SANnav licenses are subscription-based with 1-year, 3-year, or 5-year software entitlement and support.

Fibre Channel standards


The DB720S FC SAN Switch supports the standards listed at the following web page:
https://www.broadcom.com/support/fibre-channel-networking/san-standards/standards-compliance

Power supplies and cables
The DB720S FC SAN Switch ships with two redundant hot-swap 350 W AC power supplies. Each power
supply has an IEC 309-C14 connector.
The switch comes standard without a power cord; two rack power cables or line cords must be ordered
together with the switch (see the following table).

Table 7. Power cord options


Part number Feature code Description
Rack power cables
39Y7937 6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
None* 6568 1.8m, 10A/100-250V, 2xC13PM to IEC 320-C14 Rack Power Cable
4L67A08366 6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
39Y7938 6204 2.8m, 10A/100-250V, C13 to IEC 320-C20 Rack Power Cable
39Y7932 6263 4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Line cords
39Y7931 6207 10A/125V C13 to NEMA 5-15P 4.3m line cord
46M2592 A1RF 10A/250V C13 to NEMA 6-15P 2.8m line cord
39Y7930 6222 Argentina 10A/250V C13 to IRAM 2073 2.8m line cord
39Y7924 6211 Australia/NZ 10A/250V C13 to AS/NZ 3112 2.8m line cord
39Y7929 6223 Brazil 10A/125V C13 to NBR 6147 2.8m line cord
39Y7928 6210 China 10A/250V C13 to GB 2099.1 2.8m line cord
39Y7918 6213 Denmark 10A/250V C13 to DK2-5a 2.8m line cord
39Y7917 6212 European 10A/230V C13 to CEE7-VII 2.8m line cord
39Y7927 6269 India 10A/250V C13 to IS 6538 2.8m line cord
39Y7920 6218 Israel 10A/250V C13 to SI 32 2.8m line cord
39Y7921 6217 Italy 10A/250V C13 to CEI 23-16 2.8m line cord
46M2593 A1RE Japan 12A/125V C13 to JIS C-8303 2.8m line cord
39Y7925 6219 Korea 12A/250V C13 to KETI 2.8m line cord
39Y7922 6214 South Africa 10A/250V C13 to SABS 164 2.8m line cord
39Y7919 6216 Switzerland 10A/250V C13 to SEV 1011-S24507 2.8m line cord
00CG265 A53E Taiwan 10A/250V C13 to CNS 10917-3 2.8m line cord
00CG267 A53F Taiwan 15A/125V C13 to CNS 10917-3 2.8m line cord
39Y7923 6215 United Kingdom 10A/250V C13 to BS 1363/A 2.8m line cord
* Available for factory-built custom configurations and solutions only.
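Because the switch ships without a power cord, order tooling has to pick the right cord for the destination. That selection reduces to a lookup from destination to part number. A minimal sketch using a few rows copied from Table 7 (the dictionary and function name are illustrative, not a Lenovo API):

```python
# Illustrative line-cord selection for the DB720S, using a subset of Table 7.
# Part numbers are copied from the table above; the lookup itself is a sketch.
LINE_CORDS = {
    "Argentina": "39Y7930",       # 10A/250V C13 to IRAM 2073 2.8m
    "Australia/NZ": "39Y7924",    # 10A/250V C13 to AS/NZ 3112 2.8m
    "Switzerland": "39Y7919",     # 10A/250V C13 to SEV 1011-S24507 2.8m
    "United Kingdom": "39Y7923",  # 10A/250V C13 to BS 1363/A 2.8m
}

def line_cord_for(destination: str) -> str:
    """Return the line-cord part number for a destination country."""
    if destination not in LINE_CORDS:
        raise KeyError(f"no line cord listed for {destination}; see Table 7")
    return LINE_CORDS[destination]

print(line_cord_for("United Kingdom"))  # 39Y7923
```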

Rack installation
The DB720S FC SAN Switch comes standard with the fixed rack mount kit that can be used for 4-post rack
installations. If needed, the DB720S FC SAN Switch can be mounted in a 2-post rack cabinet by using the
optional mid-mount rack kit that is listed in the following table.

Table 8. Rack-mount options


Part number Feature code Description
01KN770 AVG7 Lenovo Mid-mount Rack Kit

The optional mid-mount rack kit for the DB720S FC SAN Switch is shown in the following figure.

Figure 4. Lenovo DB720S Mid-mount Rack Kit

Physical specifications
The DB720S FC SAN Switch has the following dimensions and weight (approximate):
Height: 44 mm (1.7 in.)
Width: 440 mm (17.3 in.)
Depth: 356 mm (14.0 in.)
Weight: 7.17 kg (15.8 lb) with two power supply FRUs, without transceivers

Operating environment
The DB720S FC SAN Switch is supported in the following environment:
Air temperature:
Operating: 0°C to 40°C (32°F to 104°F)
Non-operating: -25°C to +70°C (-13°F to 158°F)
Maximum altitude:
Operating: 3,000 m (9,842 ft)
Non-operating: 12,000 m (39,370 ft)
Humidity:
Operating: 8% to 90% non-condensing
Non-operating: 8% to 90% non-condensing
Electrical power:
AC Voltage range: 90V to 264V, maximum input current 4.5A
AC Frequency: 50 Hz to 60 Hz nominal, 47 Hz to 63 Hz range
Power consumption (varies with 100 VAC or 200 VAC input):
Idle: 56-58 W (no optics)
Maximum: 349 W with all 64 ports operating at 64G (48 ports populated with 64G SWL
transceivers and 8 ports populated with 2x64G SFP-DD SWL transceivers)
Heat dissipation (varies with 100 VAC or 200 VAC input):
Idle: 191-196 BTU per hour (no optics)
Maximum: 881-1192 BTU per hour
Acoustical noise emission: 65 dB maximum
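The heat figures above follow directly from the power draw: one watt dissipated corresponds to roughly 3.412 BTU per hour, and the 64-port maximum is 48 SFP ports plus 8 SFP-DD ports carrying two 64G links each. A quick sanity check (an illustrative calculation only; the function name is ours):

```python
# Sanity-check the DB720S power and heat figures quoted above.
WATT_TO_BTU_PER_HOUR = 3.412  # 1 W of draw dissipates ~3.412 BTU/h of heat

def watts_to_btu_per_hour(watts: float) -> float:
    """Convert electrical power draw (W) to heat dissipation (BTU/h)."""
    return watts * WATT_TO_BTU_PER_HOUR

# 48 SFP ports plus 8 SFP-DD ports at two 64G links each = 64 ports total.
total_ports = 48 + 8 * 2

max_heat = watts_to_btu_per_hour(349)   # maximum draw -> ~1191 BTU/h
idle_heat = watts_to_btu_per_hour(56)   # idle draw -> ~191 BTU/h
print(f"{total_ports} ports; idle ~{idle_heat:.0f} BTU/h, max ~{max_heat:.0f} BTU/h")
```

The small differences from the quoted 191-196 and 881-1192 BTU/h ranges reflect the input-voltage dependence the specification itself notes.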

Warranty upgrades and post-warranty support


The DB720S FC SAN Switch, machine type 7D5J, has a three-year warranty.
Our global network of regional support centers offers consistent, local-language support, enabling you to vary
response times and levels of service to match the criticality of your support needs:
Standard Next Business Day – Best choice for non-essential systems requiring simple maintenance.
Premier Next Business Day – Best choice for essential systems requiring technical expertise from
senior-level Lenovo engineers.
Premier 24x7 4-Hour Response – Best choice for systems where maximum uptime is critical.
Premier Enhanced Storage Support 24x7 4-Hour Response – Best choice for storage systems
where maximum uptime is critical.
For more information, consult the Lenovo Operational Support Services for Data Centers brochure.

Services
Lenovo Data Center Services empower you at every stage of your IT lifecycle. From expert advisory and
strategic planning to seamless deployment and ongoing support, we ensure your infrastructure is built for
success. Our comprehensive services accelerate time to value, minimize downtime, and free your IT staff to
focus on driving innovation and business growth.

Note: Some service options may not be available in all markets or regions. For more information, go to
https://lenovolocator.com/. For information about Lenovo service upgrade offerings that are available in
your region, contact your local Lenovo sales representative or business partner.

In this section:
Lenovo Advisory Services
Lenovo Plan & Design Services



Lenovo Deployment, Migration, and Configuration Services
Lenovo Support Services
Lenovo Managed Services
Lenovo Sustainability Services

Lenovo Advisory Services


Lenovo Advisory Services simplify the planning process, enabling customers to build future-proofed strategies
in as little as six weeks. Consultants provide guidance on projects including VM migration, storage, backup
and recovery, and cost management to accelerate time to value, improve cost efficiency, and build a flexibly
scalable foundation.
Assessment Services
An Assessment helps solve your IT challenges through an onsite, multi-day session with a Lenovo
technology expert. We perform a tools-based assessment which provides a comprehensive and
thorough review of a company's environment and technology systems. In addition to the technology
based functional requirements, the consultant also discusses and records the non-functional business
requirements, challenges, and constraints. Assessments help organizations like yours, no matter how
large or small, get a better return on your IT investment and overcome challenges in the ever-changing
technology landscape.
Design Services
Professional Services consultants perform infrastructure design and implementation planning to support
your strategy. The high-level architectures provided by the assessment service are turned into low-level
designs and wiring diagrams, which are reviewed and approved prior to implementation. The
implementation plan will demonstrate an outcome-based proposal to provide business capabilities
through infrastructure with a risk-mitigated project plan.

Lenovo Plan & Design Services


Unlock faster time to market with our tailored, strategic design workshops to align solution approaches with
your business goals and technical requirements. Leverage our deep solution expertise and end-to-end
delivery partnership to meet your goals efficiently and effectively.

Lenovo Deployment, Migration, and Configuration Services


Optimize your IT operations by shifting labor-intensive functions to Lenovo's skilled technicians for seamless
on-site or remote deployment, configuration, and migration. Enjoy peace of mind, faster time to value, and
comprehensive knowledge sharing with your IT staff, backed by our best-practice methodology.
Deployment Services for Storage and ThinkAgile
A comprehensive range of remote and onsite options tailored specifically for your business needs to
ensure your storage and ThinkAgile hardware are fully operational from the start.
Hardware Installation Services
A full-range, comprehensive setup for your hardware, including unpacking, inspecting, and positioning
components to ensure your equipment is operational and error-free for the most seamless and efficient
installation experience, so you can quickly benefit from your investments.
DM/DG File Migration Services
Take the burden of file migration from your IT’s shoulders. Our experts will align your requirements and
business objectives to the migration plans while coordinating with your team to plan and safely execute
the data migration to your storage platforms.



DM/DG/DE Health Check Services
Our experts perform proactive checks of your firmware and system health to ensure your machines are
operating at peak efficiency, maximizing uptime, avoiding system failures, ensuring the security
of IT solutions, and simplifying maintenance.
Factory Integrated Services
A suite of value-added offerings provided during the manufacturing phase of a server or storage
system that reduces time to value. These services aim to improve your hardware deployment
experience and enhance the quality of a standard configuration before it arrives at your facility.

Lenovo Support Services


In addition to response time options for hardware parts, repairs, and labor, Lenovo offers a wide array of
additional support services to ensure your business is positioned for success and longevity. Our goal is to
reduce your capital outlays, mitigate your IT risks, and accelerate your time to productivity.
Premier Support for Data Centers
Your direct line to the solution that promises the best, most comprehensive level of support to help you
fully unlock the potential of your data center.
Premier Enhanced Storage Support (PESS)
Gain all the benefits of Premier Support for Data Centers, adding dedicated storage specialists and
resources to elevate your storage support experience to the next level.
Committed Service Repair (CSR)
Our commitment to ensuring the fastest, most seamless resolution times for mission-critical systems
that require immediate attention to ensure minimal downtime and risk for your business. This service is
only available for machines under the Premier 4-Hour Response SLA.
Multivendor Support Services (MVS)
Your single point of accountability for resolution support across a vast range of leading Server, Storage,
and Networking OEMs, allowing you to manage all your supported infrastructure devices seamlessly
from a single source.
Keep Your Drive (KYD)
Protect sensitive data and maintain compliance with corporate retention and disposal policies to ensure
your data is always under your control, regardless of the number of drives that are installed in your
Lenovo server.
Technical Account Manager (TAM)
Your single point of contact to expedite service requests, provide status updates, and furnish reports to
track incidents over time, ensuring smooth operations and optimized performance as your business
grows.
Enterprise Software Support (ESS)
Gain comprehensive, single-source, and global support for a wide range of server operating systems
and Microsoft server applications.
For more information, consult the Lenovo Operational Support Services for Data Centers brochure.

Lenovo Managed Services


Achieve peak efficiency, high security, and minimal disruption with Lenovo's always-on Managed Services.
Our real-time monitoring, 24x7 incident response, and problem resolution ensure your infrastructure operates
seamlessly. With quarterly health checks for ongoing optimization and innovation, Lenovo's remote active
monitoring boosts end-user experience and productivity by keeping your data center's hardware performing at
its best.



Lenovo Managed Services provides continuous 24x7 remote monitoring (plus 24x7 call center availability) and
proactive management of your data center using state-of-the-art tools, systems, and practices by a team of
highly skilled and experienced Lenovo services professionals.
Quarterly reviews check error logs, verify firmware and OS device driver levels, and update software as needed. We’ll
also maintain records of the latest patches, critical updates, and firmware levels to ensure your systems are
providing business value through optimized performance.

Lenovo Sustainability Services


Asset Recovery Services
Lenovo Asset Recovery Services (ARS) provides a secure, seamless solution for managing end-of-life
IT assets, ensuring data is safely sanitized while contributing to a more circular IT lifecycle. By
maximizing the reuse or responsible recycling of devices, ARS helps businesses meet sustainability
goals while recovering potential value from their retired equipment. For more information, see the Asset
Recovery Services offering page.
CO2 Offset Services
Lenovo’s CO2 Offset Services offer a simple and transparent way for businesses to take tangible action
on their IT footprint. By integrating CO2 offsets directly into device purchases, customers can easily
support verified climate projects and track their contributions, making meaningful progress toward their
sustainability goals without added complexity.
Lenovo Certified Refurbished
Lenovo Certified Refurbished offers a cost-effective way to support IT circularity without compromising
on quality and performance. Each device undergoes rigorous testing and certification, ensuring reliable
performance and extending its lifecycle. With Lenovo’s trusted certification, you gain peace of mind
while making a more sustainable IT choice.

Lenovo TruScale
Lenovo TruScale XaaS is your set of flexible IT services that makes everything easier. Streamline IT
procurement, simplify infrastructure and device management, and pay only for what you use – so your
business is free to grow and go anywhere.
Lenovo TruScale is the unified solution that gives you simplified access to:
The industry’s broadest portfolio – from pocket to cloud – all delivered as a service
A single-contract framework for full visibility and accountability
The global scale to rapidly and securely build teams from anywhere
Flexible fixed and metered pay-as-you-go models with minimal upfront cost
The growth-driving combination of hardware, software, infrastructure, and solutions – all from a
single provider with one point of accountability.
For information about Lenovo TruScale offerings that are available in your region, contact your local Lenovo
sales representative or business partner.

Regulatory compliance
The DB720S FC SAN Switch conforms to the following regulations which can be found in the Hardware
Installation Guide, available from the following web page:
https://www.broadcom.com/products/fibre-channel-networking/switches/g720-switch

Interoperability
For end-to-end storage configuration support, refer to the Lenovo Storage Interoperation Center (LSIC):
https://datacentersupport.lenovo.com/us/en/lsic
Use the LSIC to select the known components of your configuration and then get a list of all other supported
combinations, with details about supported hardware, firmware, operating systems, and drivers, plus any
additional configuration notes. View results on screen or export them to Excel.

External storage systems


Lenovo offers the ThinkSystem DE Series, DM Series, and DG Series external storage systems for
high-performance storage. See the product guides for specific controller models, expansion enclosures,
and configuration options:
ThinkSystem DE Series Storage
https://lenovopress.com/storage/thinksystem/de-series#rt=product-guide
ThinkSystem DM Series Storage
https://lenovopress.com/storage/thinksystem/dm-series#rt=product-guide
ThinkSystem DG Series Storage
https://lenovopress.com/storage/thinksystem/dg-series#rt=product-guide

External backup units


The following table lists the external backup options offered by Lenovo that can be used in Lenovo FC
SAN solutions.
Note: Information provided in this section is for ordering reference purposes only. End-to-end LTO Ultrium
configuration support for a particular tape backup unit must be verified through the System Storage
Interoperation Center (SSIC):
http://www.ibm.com/systems/support/storage/ssic

Table 9. External Fibre Channel backup options


Part number Description
External tape backup libraries
6741A1F IBM TS4300 3U Tape Library-Base Unit
Fibre Channel backup drives for TS4300 Tape Library - Full Height
01KP938 LTO 7 FH Fibre Channel Drive
01KP954 LTO 8 FH Fibre Channel Drive
02JH837 LTO 9 FH Fibre Channel Drive
Fibre Channel backup drives for TS4300 Tape Library - Half Height
01KP936 LTO 7 HH Fibre Channel Drive
01KP952 LTO 8 HH Fibre Channel Drive
02JH835 LTO 9 HH Fibre Channel Drive

For more information, see the list of Product Guides in the Tape Autoloaders and Libraries category:
https://lenovopress.com/storage/tape/library

Rack cabinets
The following table lists the supported rack cabinets.

Table 10. Rack cabinets


Model Description
7D6DA007WW ThinkSystem 42U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6DA008WW ThinkSystem 42U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA009WW ThinkSystem 48U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA00AWW ThinkSystem 48U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
93604PX 42U 1200mm Deep Dynamic Rack
93614PX 42U 1200mm Deep Static Rack
93634PX 42U 1100mm Dynamic Rack
93634EX 42U 1100mm Dynamic Expansion Rack
93074RX 42U Standard Rack (1000mm)

For specifications about these racks, see the Lenovo Rack Cabinet Reference, available from:
https://lenovopress.com/lp1287-lenovo-rack-cabinet-reference
For more information, see the list of Product Guides in the Rack cabinets category:
https://lenovopress.com/servers/options/racks

Power distribution units


The following table lists the power distribution units (PDUs) that are offered by Lenovo.

Table 11. Power distribution units

Part number | Feature code | Description | Availability by region (Y/N)
Regions covered by the availability flags: ASEAN, ANZ, Brazil, EET, HTK, INDIA, JAPAN, LA, MEA, NA, PRC, RUCIS, WE
0U Basic PDUs
4PU7A93176 | C0QH | 0U 36 C13 and 6 C19 Basic 32A 1 Phase PDU v2 | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93169 | C0DA | 0U 36 C13 and 6 C19 Basic 32A 1 Phase PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93177 | C0QJ | 0U 24 C13/C15 and 24 C13/C15/C19 Basic 32A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93170 | C0D9 | 0U 24 C13/C15 and 24 C13/C15/C19 Basic 32A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
0U Switched and Monitored PDUs
4PU7A93181 | C0QN | 0U 21 C13/C15 and 21 C13/C15/C19 Switched and Monitored 48A 3 Phase Delta PDU V2 (60A derated) | N Y N N N N N Y N Y N Y N
4PU7A93174 | C0D5 | 0U 21 C13/C15 and 21 C13/C15/C19 Switched and Monitored 48A 3 Phase Delta PDU (60A derated) | N Y N N N N N Y N N N Y N
4PU7A93178 | C0QK | 0U 20 C13 and 4 C19 Switched and Monitored 32A 1 Phase PDU v2 | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93171 | C0D8 | 0U 20 C13 and 4 C19 Switched and Monitored 32A 1 Phase PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93182 | C0QP | 0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 63A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93175 | C0CS | 0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 63A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93180 | C0QM | 0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 32A 3 Phase WYE PDU v2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A93173 | C0D6 | 0U 18 C13/C15 and 18 C13/C15/C19 Switched and Monitored 32A 3 Phase WYE PDU | Y Y Y Y Y Y Y Y Y N Y Y Y
4PU7A93179 | C0QL | 0U 16 C13/C15 and 16 C13/C15/C19 Switched and Monitored 24A 1 Phase PDU v2 (30A derated) | N Y N N N N N Y N Y N Y N
4PU7A93172 | C0D7 | 0U 16 C13/C15 and 16 C13/C15/C19 Switched and Monitored 24A 1 Phase PDU (30A derated) | N Y N N N N N Y N N N Y N
1U Switched and Monitored PDUs
4PU7A90808 | C0D4 | 1U 18 C19/C13 Switched and monitored 48A 3P WYE PDU V2 ETL | N N N N N N N Y N Y Y Y N
4PU7A81117 | BNDV | 1U 18 C19/C13 switched and monitored 48A 3P WYE PDU - ETL | N N N N N N N N N N N Y N
4PU7A90809 | C0DE | 1U 18 C19/C13 Switched and monitored 48A 3P WYE PDU V2 CE | Y Y Y Y Y Y Y Y Y Y Y N Y
4PU7A81118 | BNDW | 1U 18 C19/C13 switched and monitored 48A 3P WYE PDU – CE | Y Y Y Y Y Y Y Y Y Y Y N Y
4PU7A90810 | C0DD | 1U 18 C19/C13 Switched and monitored 80A 3P Delta PDU V2 | N N N N N N N Y N Y Y Y N
4PU7A77467 | BLC4 | 1U 18 C19/C13 Switched and Monitored 80A 3P Delta PDU | N N N N N N N N N Y N Y N
4PU7A90811 | C0DC | 1U 12 C19/C13 Switched and monitored 32A 3P WYE PDU V2 | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A77468 | BLC5 | 1U 12 C19/C13 switched and monitored 32A 3P WYE PDU | Y Y Y Y Y Y Y Y Y Y Y Y Y
4PU7A90812 | C0DB | 1U 12 C19/C13 Switched and monitored 60A 3P Delta PDU V2 | N N N N N N N Y N Y Y Y N
4PU7A77469 | BLC6 | 1U 12 C19/C13 switched and monitored 60A 3P Delta PDU | N N N N N N N N N N N Y N
1U Ultra Density Enterprise PDUs (9x IEC 320 C13 + 3x IEC 320 C19 outlets)
71763NU | 6051 | Ultra Density Enterprise C19/C13 PDU 60A/208V/3PH | N N Y N N N N N N Y Y Y N
71762NX | 6091 | Ultra Density Enterprise C19/C13 PDU Module | Y Y Y Y Y Y Y Y Y Y Y Y Y
1U C13 Enterprise PDUs (12x IEC 320 C13 outlets)
39Y8941 | 6010 | Enterprise C13 PDU | Y Y Y Y Y Y Y Y Y Y Y Y Y
1U Front-end PDUs (3x IEC 320 C19 outlets)
39Y8938 | 6002 | DPI 30amp/125V Front-end PDU with NEMA L5-30P | Y Y Y Y Y Y Y Y Y Y Y Y Y
39Y8939 | 6003 | DPI Single-phase 30A/208V Front-end PDU (US) | Y Y Y Y Y Y Y Y Y Y Y Y Y
39Y8934 | 6005 | DPI 32amp/250V Front-end PDU with IEC 309 2P+Gnd | Y Y Y Y Y Y Y Y Y Y Y Y Y
39Y8940 | 6004 | DPI 60amp/250V Front-end PDU with IEC 309 2P+Gnd connector | Y N Y Y Y Y Y N N Y Y Y N
39Y8935 | 6006 | DPI 63amp/250V Front-end PDU with IEC 309 2P+Gnd connector | Y Y Y Y Y Y Y Y Y Y Y Y Y
1U NEMA PDUs (6x NEMA 5-15R outlets)
39Y8905 | 5900 | DPI 100-127v PDU with Fixed Nema L5-15P line cord | Y Y Y Y Y Y Y Y Y Y Y Y Y
Line cords for 1U PDUs that ship without a line cord
40K9611 | 6504 | DPI 32a Cord (IEC 309 3P+N+G) | Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9612 | 6502 | DPI 32a Cord (IEC 309 P+N+G) | Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9613 | 6503 | 4.3m, 63A/230V, EPDU to IEC 309 P+N+G (non-US) Line Cord | Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9614 | 6500 | DPI 30a Cord (NEMA L6-30P) | Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9615 | 6501 | DPI 60a Cord (IEC 309 2P+G) | N N Y N N N Y N N Y Y Y N
40K9617 | 6505 | 4.3m, 32A/230V, Souriau UTG to AS/NZS 3112 (Aus/NZ) Line Cord | Y Y Y Y Y Y Y Y Y Y Y Y Y
40K9618 | 6506 | 4.3m, 32A/250V, Souriau UTG Female to KSC 8305 (S. Korea) Line Cord | Y Y Y Y Y Y Y Y Y Y Y Y Y

For more information, see the Lenovo Press documents in the PDU category:
https://lenovopress.com/servers/options/pdu
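The Y/N columns of Table 11 are per-region availability flags, so each row reduces to a part number plus one boolean per region. A sketch of decoding a row programmatically (the region order below is an assumption taken from the table's column headings; verify the mapping against the Lenovo Press product guide before relying on it):

```python
# Sketch: decode a Y/N availability row from Table 11 into a set of regions.
# The region order is an assumption; confirm it against the source table.
REGIONS = ["ASEAN", "ANZ", "Brazil", "EET", "HTK", "INDIA", "JAPAN",
           "LA", "MEA", "NA", "PRC", "RUCIS", "WE"]

def available_regions(flags: str) -> set[str]:
    """Map a 'Y Y N ...' flag string to the set of regions marked Y."""
    values = flags.split()
    if len(values) != len(REGIONS):
        raise ValueError("expected one Y/N flag per region")
    return {region for region, flag in zip(REGIONS, values) if flag == "Y"}

# Example row: 39Y8941 / 6010, Enterprise C13 PDU, available in all regions.
print(sorted(available_regions("Y Y Y Y Y Y Y Y Y Y Y Y Y")))
```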

Uninterruptible power supply units


The following table lists the uninterruptible power supply (UPS) units that are offered by Lenovo.

Table 12. Uninterruptible power supply units


Part number Description
Rack-mounted or tower UPS units - 100-125VAC
7DD5A001WW RT1.5kVA 2U Rack or Tower UPS-G2 (100-125VAC)
7DD5A003WW RT3kVA 2U Rack or Tower UPS-G2 (100-125VAC)
Rack-mounted or tower UPS units - 200-240VAC
7DD5A002WW RT1.5kVA 2U Rack or Tower UPS-G2 (200-240VAC)
7DD5A005WW RT3kVA 2U Rack or Tower UPS-G2 (200-240VAC)
7DD5A007WW RT5kVA 3U Rack or Tower UPS-G2 (200-240VAC)
7DD5A008WW RT6kVA 3U Rack or Tower UPS-G2 (200-240VAC)
7DD5A00AWW RT11kVA 6U Rack or Tower UPS-G2 (200-240VAC)
† Only available in China and the Asia Pacific market.
For more information, see the list of Product Guides in the UPS category:
https://lenovopress.com/servers/options/ups

Seller training courses



The following sales training courses are offered for employees and partners (login required). Courses are
listed in date order.
1. Family Portfolio: Storage Networking
2024-10-14 | 15 minutes | Employees and Partners

This course will provide you with an overview of the Storage Networking family. After completing this
course, you should be able to identify the products in the Storage Networking portfolio and their
features, describe product family benefits, and recognize when a specific product should be used.
Published: 2024-10-14
Length: 15 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1113r8


2. Partner Technical Webinar - Fibre Channel Market Transitions
2024-10-08 | 60 minutes | Employees and Partners

In this 60-minute replay, there were two topics. Mike Easterly, Fibre Channel Technical Sales Executive
for Broadcom, reviewed the Fibre Channel offerings and described the need to focus on Gen 7 over
Gen 5 and Gen 6, the many security features of Fibre Channel, and why Fibre Channel is the
gold standard of storage networking. Herb Ducey, Lenovo Storage Product Manager, concluded with
updates on the storage product line, including the DM Series, DE Series, and D3284.
Published: 2024-10-08
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 100424


3. Lenovo Data Center Product Portfolio
2024-05-29 | 20 minutes | Employees and Partners

This course introduces the Lenovo data center portfolio, and covers servers, storage, storage
networking, and software-defined infrastructure products. After completing this course about Lenovo
data center products, you will be able to identify product types within each data center family, describe
Lenovo innovations that this product family or category uses, and recognize when a specific product
should be selected.
Published: 2024-05-29
Length: 20 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: SXXW1110r7



4. Simplify Selling Fibre Channel Storage Solutions
2024-04-23 | 45 minutes | Employees and Partners

In this session we look at the benefits of Fibre Channel and the benefits to you and your customers of
bundling FC networking with your storage arrays.
Plus, we will take a closer look at some of the changes Lenovo has made to the Data Center Solutions
Configurator to help you and your clients build bundled FC solutions.

Course Objectives:
1. Learn the benefits of Fibre Channel
2. Understand the benefits of bundling FC networking with your storage arrays
3. Discover the latest updates in DCSC (Data Center Solutions Configurator)
Published: 2024-04-23
Length: 45 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: DNFP101


5. Partner Technical Webinar - Fibre Channel and DG Updates
2024-04-23 | 60 minutes | Employees and Partners

In this 60-minute replay, Mike Easterly of Broadcom reviewed Lenovo solutions for Fibre Channel (FC),
including Emulex FC adapters and Brocade FC switches. Next, Mark Clayton, Lenovo Storage
Architect, reviewed the latest on the Data Management portfolio, with updates on DG, HS350x Ready
Nodes, and Data Protection.
Published: 2024-04-23
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: 041924


6. VTT Data Management How to sell storage - April 2024
2024-04-10 | 60 minutes | Employees Only

In this course, you will learn:
- Why do we sell storage?
- What are the basics you need to get an opportunity rolling?
- Why Lenovo for Storage?
- What is happening in the market today?
- How to determine traction?
Published: 2024-04-10
Length: 60 minutes

Start the training:


Employee link: Grow@Lenovo

Course code: DVDAT209



7. Selling the Gen7 ThinkSystem DB720S SAN Switch
2023-02-01 | 20 minutes | Employees and Partners

This course is designed to give Lenovo sales (general and technical) and partner representatives a
foundation for the Gen7 ThinkSystem DB720S SAN switch and the Autonomous SAN solution. As an
introduction to the product, this course enables the learner to identify the features and benefits of the
switch and how it is integrated into the Autonomous SAN solution. By the end of this training, you
should be able to: describe what is happening in the data center and the pressures surrounding the
storage network; explain why the network will need to evolve to keep pace with the next wave of
innovation in the data center; and describe how Gen 7 Fibre Channel enables an autonomous SAN that
will harness the full value of next-gen data centers.
Published: 2023-02-01
Length: 20 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: DDBS206


8. Why Gen 7 with 64G Fibre Channel
2022-03-17 | 20 minutes | Employees and Partners

In this course, we will discuss what is happening in the data center and the pressures surrounding
the network. We will look at why the network will need to evolve to keep pace with the next wave of
innovation, which creates an imbalance in performance. We will review how Gen 7 Fibre Channel
enables an autonomous SAN that will harness the full value of next-gen data centers. Last, we
will review the Gen 7 hardware. Let’s start with what businesses require from their infrastructures.
Course objectives:
1. Understand the benefits of clients investing in 64G Gen 7 switches today
2. Help customers gain a clear understanding of whether 64G Gen 7 is a better fit than a Gen 6 switch for their
requirements
3. Articulate to your clients that Gen 7 is more than just 64G speeds
4. Explain how Lenovo is well positioned for clients looking for the best investment protection of their
infrastructure with multiple Gen 7 Fibre Channel offerings
Published: 2022-03-17
Length: 20 minutes

Start the training:


Employee link: Grow@Lenovo
Partner link: Lenovo Partner Learning

Course code: DDBS210

Related publications and links


For more information, see the following resources:
Datasheet of the DB720S:
https://lenovopress.com/datasheet/ds0120-thinksystem-db720s-fc-san-switch
Interactive 3D Tour of the DB720S:
https://lenovopress.com/lp1409-3d-tour-thinksystem-db720s
Analyst report "64 Gb Fibre Channel Performance with Lenovo ThinkSystem Emulex LPe36002 Gen 7
FC HBA"
https://www.lenovo.com/us/en/resources/data-center-solutions/analyst-reports/64-gb-fibre-channel-
performance-with-lenovo-thinksystem-emulex-lpe36002-gen-7-test-report/
Lenovo ThinkSystem DB720S FC SAN Switch product publications - see the Brocade G720
documentation:
https://www.broadcom.com/products/fibre-channel-networking/switches/g720-switch

Tip: Some of the Fabric OS documents can be accessed via the support portal by validating your
serial number for software entitlement.

Hardware Installation Guide


Fabric OS Access Gateway Administration Guide
Fabric OS Administration Guide
Fabric OS Extension Configuration Guide
Fabric OS Troubleshooting and Diagnostics Guide
Fabric OS Command Reference
Fabric OS Message Reference
Fabric OS MIB Reference
Web Tools Administration Guide
Flow Vision Configuration Guide
Monitoring and Alerting Policy Suite Configuration Guide
Brocade 64G SWL SFP-DD Product Brief:
https://docs.broadcom.com/docs/SFP-DD-64G-SWL-PB
Lenovo Data Center Support for the ThinkSystem DB720S FC SAN Switch:
https://datacentersupport.lenovo.com/us/en/products/storage/fibre-channel-switches/db720s-fc-
switch/7d5j

Related product families


Product families related to this document are the following:
DB Series SAN Switches
Rack SAN Switches



Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local
Lenovo representative for information on the products and services currently available in your area. Any reference to a
Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service
may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual
property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any
other product, program, or service. Lenovo may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any license to these patents. You can
send license inquiries, in writing, to:

Lenovo (United States), Inc.


8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without
notice.

The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not affect
or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or implied
license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained in this
document was obtained in specific environments and is presented as an illustration. The result obtained in other
operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for
this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was
determined in a controlled environment. Therefore, the result obtained in other operating environments may vary
significantly. Some measurements may have been made on development-level systems and there is no guarantee that
these measurements will be the same on generally available systems. Furthermore, some measurements may have
been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.

© Copyright Lenovo 2025. All rights reserved.

This document, LP1358, was created or updated on September 9, 2024.


Send us your comments in one of the following ways:
Use the online Contact us review form found at:
https://lenovopress.lenovo.com/LP1358
Send your comments in an e-mail to:
comments@lenovopress.com
This document is available online at https://lenovopress.lenovo.com/LP1358.

Lenovo ThinkSystem DB720S Gen7 FC SAN Switch 28


Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other
countries, or both. A current list of Lenovo trademarks is available on the Web at
https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ThinkAgile®
ThinkSystem®
The following terms are trademarks of other companies:
Microsoft® and Excel® are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.



42U Dynamic Expansion Rack and
42U 1100 mm Enterprise V2 Dynamic Rack
Installation Guide

Type: 9363
Note

Before using this information and the product it supports, read the general information in Appendix
A “Getting help and technical assistance” on page 51, Appendix B “Notices” on page 55, the safety
information, warranties, and licenses information on the Lenovo Web site at:
https://support.lenovo.com/documents/LNVO-DOCS

Third Edition (September 2024)


© Copyright Lenovo 2015, 2016.

LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services
Administration “GSA” contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No.
GS-35F-05925
Contents

Safety ................................................................. iii
Safety statements ...................................................... iv

Chapter 1. Introduction ................................................ 1
Notices and statements in this document ................................ 1

Chapter 2. Installing a rack cabinet ................................... 3
Size and weight specifications ......................................... 5
Planning the floor layout .............................................. 6
Removing and installing the outriggers (side stabilizers) .............. 6
Installing the front stabilizer bracket, recirculation prevention
  plate (optional) and securing the rack to the floor surface .......... 8
Removing and installing the side covers ................................ 16
Removing and installing a front door ................................... 17
Removing and installing a rear door .................................... 18
Reversing a front door ................................................. 19
Attaching racks in a suite ............................................. 26

Chapter 3. Installing optional devices ................................. 29
Installation guidelines ................................................ 29
Installing devices on the rack-mounting flanges ........................ 30
  Installing threaded rails ............................................ 31
  Installing cage nuts ................................................. 31
  Installing clip nuts ................................................. 32
Installing devices vertically in the rack cabinet ...................... 32
  Installing a 1U PDU or console switch vertically in the rack side area .. 33
  Installing a 1U PDU or console switch vertically in a rack side pocket .. 33
  Installing a 0U PDU vertically in the rear of a rack cabinet ......... 34

Chapter 4. Managing cables ............................................. 37
Front-to-rear cable channels ........................................... 37
Using the cable-access bar in the bottom of the rack cabinet ........... 38
Using the cable-access openings in the top of the rack ................. 39
Mounting an overhead cable tray ........................................ 41

Chapter 5. Moving a rack cabinet ....................................... 43

Chapter 6. Parts listing ............................................... 47

Appendix A. Getting help and technical assistance ...................... 51
Before you call ........................................................ 51
Using the documentation ................................................ 52
Getting help and information from the World Wide Web ................... 52
How to send DSA data ................................................... 52
Creating a personalized support web page ............................... 52
Software service and support ........................................... 53
Hardware service and support ........................................... 53
Taiwan product service ................................................. 53

Appendix B. Notices .................................................... 55
Trademarks ............................................................. 56
Important notes ........................................................ 56
Recycling information .................................................. 56
Particulate contamination .............................................. 57
Telecommunication regulatory statement ................................. 57
Electronic emission notices ............................................ 58
  Federal Communications Commission (FCC) statement .................... 58
  Industry Canada Class A emission compliance statement ................ 58
  Avis de conformité à la réglementation d'Industrie Canada ............ 58
  Australia and New Zealand Class A statement .......................... 58
  European Union EMC Directive conformance statement ................... 58
  Germany Class A statement ............................................ 59
  Japanese electromagnetic compatibility statements .................... 60
  Korea Communications Commission (KCC) statement ...................... 60
  Russia Electromagnetic Interference (EMI) Class A statement .......... 61
  People's Republic of China Class A electronic emission statement ..... 61
  Taiwan Class A compliance statement .................................. 61
  Taiwan BSMI RoHS declaration ......................................... 62

Index .................................................................. 63



Safety
Before installing this product, read the Safety Information.

Antes de instalar este produto, leia as Informações de Segurança.

Læs sikkerhedsforskrifterne, før du installerer dette produkt.

Lees voordat u dit product installeert eerst de veiligheidsvoorschriften.

Ennen kuin asennat tämän tuotteen, lue turvaohjeet kohdasta Safety Information.

Avant d'installer ce produit, lisez les consignes de sécurité.

Vor der Installation dieses Produkts die Sicherheitshinweise lesen.

Prima di installare questo prodotto, leggere le Informazioni sulla Sicurezza.

Les sikkerhetsinformasjonen (Safety Information) før du installerer dette produktet.



Antes de instalar este produto, leia as Informações sobre Segurança.

Antes de instalar este producto, lea la información de seguridad.

Läs säkerhetsinformationen innan du installerar den här produkten.

Safety statements
These statements provide the caution and danger information that is used in this documentation.

Important: Each caution and danger statement in this documentation is labeled with a number. This number
is used to cross reference an English-language caution or danger statement with translated versions of the
caution or danger statement in the Safety Information document.

For example, if a caution statement is labeled Statement 1, translations for that caution statement are in the
Safety Information document under Statement 1.

Be sure to read all caution and danger statements in this documentation before you perform the procedures.
Read any additional safety information that comes with your system or optional device before you install
the device.

Statement 1

DANGER

Electrical current from power, telephone, and communication cables is hazardous.

To avoid a shock hazard:


• Do not connect or disconnect any cables or perform installation, maintenance, or reconfiguration
of this product during an electrical storm.
• Connect all power cords to a properly wired and grounded electrical outlet.
• Connect to properly wired outlets any equipment that will be attached to this product.
• When possible, use one hand only to connect or disconnect signal cables.
• Never turn on any equipment when there is evidence of fire, water, or structural damage.
• Disconnect the attached power cords, telecommunications systems, networks, and modems
before you open the device covers, unless instructed otherwise in the installation and
configuration procedures.
• Connect and disconnect cables as described in the following table when installing, moving, or
opening covers on this product or attached devices.

To Connect:
1. Turn everything OFF.
2. First, attach all cables to devices.
3. Attach signal cables to connectors.
4. Attach power cords to outlet.
5. Turn device ON.

To Disconnect:
1. Turn everything OFF.
2. First, remove power cords from outlet.
3. Remove signal cables from connectors.
4. Remove all cables from devices.

Statement 2

CAUTION:
When replacing the lithium battery, use only Part Number 33F8354 or an equivalent type battery
recommended by the manufacturer. If your system has a module containing a lithium battery, replace
it only with the same module type made by the same manufacturer. The battery contains lithium and
can explode if not properly used, handled, or disposed of. Do not:
• Throw or immerse into water
• Heat to more than 100°C (212°F)
• Repair or disassemble

Dispose of the battery as required by local ordinances or regulations.



Statement 3

CAUTION:
When laser products (such as CD-ROMs, DVD drives, fiber optic devices, or transmitters) are
installed, note the following:
• Do not remove the covers. Removing the covers of the laser product could result in exposure to
hazardous laser radiation. There are no serviceable parts inside the device.
• Use of controls or adjustments or performance of procedures other than those specified herein
might result in hazardous radiation exposure.

DANGER

Some laser products contain an embedded Class 3A or Class 3B laser diode. Note the following.

Laser radiation when open. Do not stare into the beam, do not view directly with optical
instruments, and avoid direct exposure to the beam.

Statement 4

CAUTION: Use safe practices when lifting.

≥ 18 kg (39.7 lb) ≥ 32 kg (70.5 lb) ≥ 55 kg (121.2 lb)

Statement 5

CAUTION:
The power control button on the device and the power switch on the power supply do not turn off
the electrical current supplied to the device. The device also might have more than one power
cord. To remove all electrical current from the device, ensure that all power cords are disconnected
from the power source.

Statement 8

CAUTION:
Never remove the cover on a power supply or any part that has the following label attached.

Hazardous voltage, current, and energy levels are present inside any component that has this label
attached. There are no serviceable parts inside these components. If you suspect a problem with
one of these parts, contact a service technician.

Statement 11

CAUTION:
The following label indicates sharp edges, corners, or joints nearby.



Statement 12

CAUTION:
The following label indicates a hot surface nearby.

Statement 13

DANGER

Overloading a branch circuit is potentially a fire hazard and a shock hazard under certain
conditions. To avoid these hazards, ensure that your system electrical requirements do not exceed
branch circuit protection requirements. Refer to the information that is provided with your device
for electrical specifications.

Statement 15

CAUTION:
Make sure that the rack is secured properly to avoid tipping when the server unit is extended.

Statement 17

CAUTION:
The following label indicates moving parts nearby.

Statement 26

CAUTION:
Do not place any object on top of rack-mounted devices.



Statement 31

DANGER

Electrical current from power, telephone, and communication cables is hazardous.

To avoid a shock hazard:


• Do not connect or disconnect any cables or perform installation, maintenance, or reconfiguration
of this product during an electrical storm.
• Connect all power cords to a properly wired and grounded power source.
• Connect to properly wired power sources any equipment that will be attached to this product.
• When possible, use one hand only to connect or disconnect signal cables.
• Never turn on any equipment when there is evidence of fire, water, or structural damage.
• Disconnect the attached ac power cords, dc power sources, network connections,
telecommunications systems, and serial cables before you open the device covers, unless you
are instructed otherwise in the installation and configuration procedures.
• Connect and disconnect cables as described in the following table when you install, move, or
open covers on this product or attached devices.

To Connect:
1. Turn OFF all power sources and equipment that is to be attached to this product.
2. Attach signal cables to the product.
3. Attach power cords to the product.
   • For ac systems, use appliance inlets.
   • For dc systems, ensure correct polarity of -48 V dc connections: RTN is + and -48 V dc is -.
     Earth ground should use a two-hole lug for safety.
4. Attach signal cables to other devices.
5. Connect power cords to their sources.
6. Turn ON all the power sources.

To Disconnect:
1. Turn OFF all power sources and equipment that is to be attached to this product.
   • For ac systems, remove all power cords from the chassis power receptacles or interrupt power
     at the ac power distribution unit.
   • For dc systems, disconnect dc power sources at the breaker panel or by turning off the power
     source. Then, remove the dc cables.
2. Remove the signal cables from the connectors.
3. Remove all cables from the devices.

Statement 34

CAUTION:
To reduce the risk of electric shock or energy hazards:
• This equipment must be installed by trained service personnel in a restricted-access location,
as defined by the NEC and IEC 60950-1, First Edition, The Standard for Safety of Information
Technology Equipment.
• Connect the equipment to a properly grounded safety extra low voltage (SELV) source. A SELV
source is a secondary circuit that is designed so that normal and single fault conditions do not
cause the voltages to exceed a safe level (60 V direct current).
• Incorporate a readily available approved and rated disconnect device in the field wiring.
• See the specifications in the product documentation for the required circuit-breaker rating for
branch circuit overcurrent protection.
• Use copper wire conductors only. See the specifications in the product documentation for the
required wire size.
• See the specifications in the product documentation for the required torque values for the
wiring-terminal screws.

Statement 35

>240VA

CAUTION:
Hazardous energy present. Voltages with hazardous energy
might cause heating when shorted with metal, which might
result in splattered metal, burns, or both.

Statement 36

CAUTION:
Always install the slide retention screw.



Statement 37

DANGER

When you populate a rack cabinet, adhere to the following guidelines:


• Always lower the leveling pads on the rack cabinet.
• Always install the stabilizer brackets on the rack cabinet.
• Always install the heaviest devices in the bottom of the rack cabinet.
• Do not extend multiple devices from the rack cabinet simultaneously, unless the rack-mounting
instructions direct you to do so. Multiple devices extended into the service position can cause
your rack cabinet to tip.
• If you are not using the IBM 9308 rack cabinet, securely anchor the rack cabinet to ensure
its stability.

Attention: This product is suitable for use on an IT power distribution system whose maximum
phase-to-phase voltage is 240 V under any distribution fault condition.

Chapter 1. Introduction
This document contains general installation instructions for installing the following rack cabinets. Always
read the documentation for your server or optional device for detailed installation instructions.
• 42U 1100 mm Enterprise V2 Dynamic Rack, Type 9363-4PX
• 42U 1100 mm Enterprise V2 Dynamic Expansion Rack, Type 9363-4EX

Installing the rack cabinet consists of the following tasks:


1. Unpack the rack according to the 42U 1100 mm Enterprise V2 Dynamic Rack and Expansion Rack
Unpacking Instructions.
2. Install the rack stabilizer brackets on all rack cabinets.
3. Prepare the rack for optional devices:
• Remove the side covers, if applicable.
• Remove the front and rear doors from all racks, if necessary.
• Attach expansion racks to a standard rack or to each other to form suites.
4. Install one or more optional devices.

Note: Install the heaviest devices in the bottom of the rack cabinet.
5. Complete the rack cabinet installation:
• Reinstall side covers on all racks or on the outermost racks in a suite.
• Reinstall front and rear doors on all racks.

If documentation updates are available, you can download them from the Lenovo® website. The rack
cabinet might have features that are not described in the documentation that comes with the rack, and the
documentation might be updated occasionally to include information about those features, or technical
updates might be available to provide additional information that is not included in the rack documentation.
To check for updates, go to http://www.lenovo.com/support.

Note: Changes are made periodically to the Lenovo pages on the World Wide Web. Procedures for locating
documentation might vary slightly from what is described in this document.

For more information about rack cabinets and options, see
http://publib.boulder.ibm.com/infocenter/systemx/documentation/index.jsp.

Notices and statements in this document


The caution and danger statements in this document are also in the multilingual Safety Information
document, which is available at https://support.lenovo.com/documents/LNVO-DOCS. Each statement is
numbered for reference to the corresponding statement in your language in the Safety Information document.

The following notices and statements are used in this document:

• Note: These notices provide important tips, guidance, or advice.


• Important: These notices provide information or advice that might help you avoid inconvenient or
problem situations.
• Attention: These notices indicate potential damage to programs, devices, or data. An attention notice is
placed just before the instruction or situation in which damage might occur.
• Caution: These statements indicate situations that can be potentially hazardous to you. A caution
statement is placed just before the description of a potentially hazardous procedure step or situation.



• Danger: These statements indicate situations that can be potentially lethal or extremely hazardous
to you. A danger statement is placed just before the description of a potentially lethal or extremely
hazardous procedure step or situation.

Chapter 2. Installing a rack cabinet
The standard and expansion rack cabinets are 42U-high¹ racks. The standard rack cabinet comes with side
covers installed. The expansion rack cabinet does not come with side covers but includes the required
hardware for building a suite of racks. You need one standard rack cabinet per suite.

Notes:
1. If required by local building codes, each stand-alone rack can be bolted to the floor with a fastener
in each corner.
2. The illustrations in this document might differ slightly from your hardware.

Statement 1

CAUTION:
To ensure safety, all applicable components of the rack cabinet must be certified by a nationally
recognized testing laboratory in order to verify compliance with country-specific safety regulations.
This process ensures that the end product remains safe for the operator and service personnel
under normal and foreseeable misuse conditions.

1. One U is equal to 4.45 cm (1.75 in.)
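The footnote's unit value makes it easy to estimate the usable mounting space. A minimal sketch (assuming the exact EIA value of 44.45 mm per U, which the footnote rounds to 4.45 cm, and the 2009 mm overall cabinet height given in Table 1 of this chapter):

```python
# Usable 42U mounting space vs. overall cabinet height.
# Assumes 1U = 44.45 mm exactly (the footnote rounds this to 4.45 cm);
# 2009 mm is the overall cabinet height from Table 1.
U_MM = 44.45

usable_mm = 42 * U_MM            # vertical space available for devices
overhead_mm = 2009 - usable_mm   # frame, casters, and top panel (approximate)

print(f"42U mounting space: {usable_mm:.1f} mm")   # 1866.9 mm
print(f"Cabinet overhead:   {overhead_mm:.1f} mm") # 142.1 mm
```

The roughly 140 mm difference between the mounting space and the overall height is structural overhead, not usable device space.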



Figure 1. 42U 1100 mm Enterprise V2 Dynamic Rack, Type 9363-4PX

Figure 2. 42U 1100 mm Enterprise V2 Dynamic Expansion Rack, Type 9363-4EX (comes without side covers)

Size and weight specifications


The 42U racks and 42U expansion racks conform to the Electronic Industries Association (EIA) standard
EIA-310-D Cabinets, Racks, Panels, and Associated Equipment (1992). For the rack cabinet dimensions
and weights, see the following tables.

Table 1. 42U rack physical dimensions

Configuration                                                                  Dimensions (height x width x depth)
9363-4PX 42U 1100 mm Enterprise V2 Dynamic Rack without outriggers             2009 mm x 604 mm¹ x 1100 mm
                                                                               (79.1 in. x 23.8 in. x 43.3 in.)
9363-4PX 42U 1100 mm Enterprise V2 Dynamic Rack with outriggers                2009 mm x 780 mm x 1100 mm
                                                                               (79.1 in. x 30.7 in. x 43.3 in.)
9363-4EX 42U 1100 mm Enterprise V2 Dynamic Expansion Rack without outriggers   2009 mm x 600 mm x 1100 mm
                                                                               (79.1 in. x 23.6 in. x 43.3 in.)
9363-4EX 42U 1100 mm Enterprise V2 Dynamic Expansion Rack with outriggers      2009 mm x 780 mm x 1100 mm
                                                                               (79.1 in. x 30.7 in. x 43.3 in.)
Note:
1. Includes side cover latches. When the side covers are removed, the rack width is 600 mm (23.6 in.).
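As a quick cross-check, the inch values in Table 1 follow directly from the millimetre values at 25.4 mm per inch, rounded to one decimal place. A small sketch of that conversion:

```python
# Verify Table 1's inch dimensions against its millimetre dimensions
# (1 in = 25.4 mm; the table rounds to one decimal place).
def mm_to_in(mm: float) -> float:
    return round(mm / 25.4, 1)

# mm value -> inch value as printed in Table 1
table1_mm_in = {2009: 79.1, 604: 23.8, 780: 30.7, 1100: 43.3, 600: 23.6}
for mm, inches in table1_mm_in.items():
    assert mm_to_in(mm) == inches, (mm, inches)
```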

Table 2. 42U 1100 mm Enterprise V2 Dynamic Rack and Expansion Rack weights

                           9363-4PX 42U dynamic rack   9363-4EX 42U dynamic expansion rack
Empty (with outriggers)    169 kg (372 lb)             132 kg (292 lb)
Total load                 953 kg (2100 lb)            953 kg (2100 lb)
Maximum configuration      1121 kg (2472 lb)           1085 kg (2392 lb)
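The "Maximum configuration" row is consistent with the empty rack weight plus the total device load: the pound values add up exactly, while the kilogram column is rounded independently and so can differ by about 1 kg. A sketch of that check:

```python
# Check Table 2: maximum configuration weight (lb) = empty rack + total load.
racks = {
    "9363-4PX": {"empty_lb": 372, "load_lb": 2100, "max_lb": 2472},
    "9363-4EX": {"empty_lb": 292, "load_lb": 2100, "max_lb": 2392},
}

for name, w in racks.items():
    # The pound values in Table 2 sum exactly.
    assert w["empty_lb"] + w["load_lb"] == w["max_lb"], name
    print(f"{name}: {w['empty_lb']} + {w['load_lb']} = {w['max_lb']} lb")
```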

Planning the floor layout


Step 1. For planning purposes, use the floor layout that is shown in the following illustration as a guide for
cutting holes in the floor tiles to run cables for the devices in the rack cabinet.

The circles in the illustration represent the area where the casters and leveling feet might touch the
ground. Make sure that there are no holes in the floor tiles that are too close to these circles.

[Figure dimension labels: 600 mm (rack width); 45.97 mm; 198.74 mm; 65.2 mm; 1095.48 mm; 65.2 mm; 458.37 mm; "Front of Rack"]

Figure 3. Floor layout for cutting holes in the floor tiles

Removing and installing the outriggers (side stabilizers)


The outriggers are the stabilizers with wheels that are installed on the sides of the rack cabinet. After the
rack is in its final location and will not be moved more than 2 m (6 ft), you can remove the outriggers.

DANGER

Always relocate the rack cabinet with the outriggers installed. Keep the outriggers and install them
if you have to move the rack to another location in the future.

Figure 4. Removing and installing the outriggers



To remove the outriggers, use the 6 mm hex wrench that comes in the hardware kit to remove the four
bolts that attach each outrigger to the rack cabinet. Keep the outriggers and bolts for future use if you
have to move the rack.

Notes:
1. Before you attach an expansion rack to a standard rack or another expansion rack, you must remove the
outriggers from the racks so that the racks fit together correctly.
2. You can install or remove the outriggers on a rack cabinet with or without side covers.

Install the outriggers before you move the rack cabinet to another location. Use the 6 mm hex wrench that
comes in the hardware kit to install the four bolts that attach each outrigger to the rack cabinet.

Installing the front stabilizer bracket, recirculation prevention plate
(optional) and securing the rack to the floor surface
See the 42U 1100 mm Enterprise V2 Dynamic Rack and Expansion Rack Unpacking Instructions for
information about how to unpack and locate the rack.

Statement 2

DANGER

• Always lower the leveling pads on the rack cabinet.


• Always install stabilizer brackets on the rack cabinet.
• Always install servers and optional devices starting from the bottom of the rack cabinet.
• Always install the heaviest devices in the bottom of the rack cabinet.

The procedure in this section describes the following tasks:


• Lowering the leveling pads
• Installing the recirculation prevention plate (optional)
• Installing the front stabilizer bracket
• Bolting the rack cabinet to the floor surface for added stability

Complete the following steps:


Step 1. Use the open-end wrench that comes with the hardware kit to lower each of the four leveling pads
just enough so that they touch the floor. The rack casters support the weight of the rack cabinet.
The pads prevent the rack from rolling.


Figure 5. Lowering the leveling pads

Step 2. Hand-tighten the thumbscrews on the front fixed casters.


Step 3. Remove both outriggers from the sides of the rack by removing the four bolts on each side with a 6
mm hex wrench. Save the outriggers for use in the future if you have to move the rack cabinet
to another location.



Figure 6. Removing the outriggers

Step 4. Remove the front door if you are installing the recirculation prevention plate or front stabilizer
bracket. For more information about removing the front door, see “Removing and installing
a front door” on page 17.
Step 5. The following sub-steps are optional. Complete the appropriate steps for your rack cabinet.

Note: The recirculation prevention plate prevents hot air recirculation from beneath the rack by
sealing the open space between the rack bottom and the floor surface. The plate also seals the
front cable egress if the rack front-side foam seal kit is not installed.
• If this is not a stand-alone rack and you are not installing the front stabilizer bracket, attach the
recirculation prevention plate with the four screws and hex wrench from the hardware kit.


Figure 7. Installing the recirculation prevention plate and no front stabilizer bracket

• If this is a stand-alone rack cabinet, complete the following steps:


1. Align the four holes in the recirculation prevention plate with the four holes in the rack
cabinet.




Figure 8. Installing the recirculation prevention plate and the front stabilizer bracket

2. Position the front stabilizer bracket in front of the recirculation prevention plate and align
the screw holes.
3. Use the four screws and the hex wrench that come in the hardware kit to secure the front
stabilizer bracket and recirculation prevention plate to the rack cabinet.
4. Tighten the screws until the stabilizer bracket is flush against the recirculation prevention
plate (if it is used) or flush against the rack (if the recirculation prevention plate is not used).
Step 6. If this is a stand-alone rack cabinet and you are not installing the recirculation prevention plate,
attach the front stabilizer to the front of the rack cabinet with the screws and hex wrench that
come with the hardware kit.


Figure 9. Installing the front stabilizer and no recirculation prevention plate

Note: If required by local building codes, each stand-alone rack can be bolted to the floor with
a fastener in each corner.
Step 7. Bolt the rack to the floor surface by using the following methods:
• If a front stabilizer bracket or stabilizer plate is installed, bolt the rack to the floor surface through
the holes in the front stabilizer by using two bolts and washers.




Figure 10. Bolting the front stabilizer to the floor surface

Bolt the rear of the rack to the floor surface through the holes in the lower frame by using two
bolts and washers.

Figure 11. Bolting the rear of the rack to the floor surface

• If a front stabilizer bracket or stabilizer plate is not installed, bolt the front of the rack to the floor
surface through the holes in the lower frame by using two bolts and washers.

Figure 12. Bolting the front of the rack to the floor surface

Bolt the rear of the rack to the floor surface through the holes in the lower frame by using two
bolts and washers. See Figure 11 “Bolting the rear of the rack to the floor surface” on page 14.
Step 8. Reinstall the front door if you removed it in Step 4 on page 10.

Figure 13. Rack cabinet with front stabilizer installed


Removing and installing the side covers
The standard rack comes with the side covers installed. Remove the side covers from the rack before
you install or remove optional devices.
To remove the side covers from a standard rack, complete the following steps:
a. Unlock the button lock on the top of a side cover.

Figure 14. Removing a side cover

b. Press down on both release handles, and tilt the top of the side cover slightly toward you; then, lift the
side cover up and away from the ridge on the bottom of the rack cabinet.
c. Repeat this procedure to remove the second side cover.
To install a side cover, align the side cover with the ridge in the bottom side of the rack cabinet and press
down. Press in on both release handles and then rotate the top of the side cover toward the rack. Lock the
side cover to secure it to the rack cabinet.

Figure 15. Installing a side cover

Removing and installing a front door


All racks come with the front door installed. Remove the front door if it obstructs access to part of the rack when you install or remove devices.
To remove the front door from the rack cabinet, complete the following steps:
a. Unlock and open the door.


Figure 16. Removing the front door

b. Holding the door firmly with one hand, lift both hinge pins until they lock in the open position. This
releases the door from the hinges.
c. Grasp the door firmly with both hands and pull it away from the hinges; then, set the door aside.
To install the front door on the rack cabinet, complete the following steps:
a. Grasp the door firmly with both hands, align the door with the hinges, and slide the door into place.
b. Holding the door with one hand, push the hinge pins down to the closed position.

Removing and installing a rear door


All racks come with the rear door installed. Remove the rear door if it obstructs access to part of the rack when you install or remove devices.
To remove a rear door from the rack cabinet, complete the following steps:
a. Unlock and open the door.

Figure 17. Removing a rear door

b. Holding the door firmly with one hand, lift both hinge pins until they lock in the open position. This
releases the door from the hinges.
c. Grasp the door firmly with both hands and pull it away from the hinges; then, set the door aside.
To install a rear door on the rack cabinet, complete the following steps:
a. Grasp the door firmly with both hands, align the door with the hinges, and slide the door into place.
b. Holding the door with one hand, push the hinge pins down to the closed position.

Reversing a front door


To reverse a front door on a rack cabinet so that the hinges are on the right side, complete the following steps:
Step 1. Remove the front door according to “Removing and installing a front door” on page 17.
Step 2. Remove the doorstop on the top-right side of the rack cabinet by removing the screw.


Figure 18. Removing the doorstop and door hinges

Step 3. Remove the door hinges from the rack:


a. Remove the hinge pin: place a small screwdriver under the end of the retainer spring, press the spring out, and pull the hinge pin out of the hinge.

Figure 19. Removing and installing a door hinge pin

b. Use a Phillips screwdriver to remove the hinge screw.


Repeat Step 3 on page 20 to remove the other hinge.
Step 4. Attach the doorstop to the top-left side of the rack. Use the empty screw hole from where you
removed the top hinge.


Figure 20. Installing the doorstop and door hinges

Step 5. Install the top and bottom hinges on the right side of the rack cabinet:
a. Orient the hinge to install it on the right side of the rack cabinet, as shown in the following
illustration.

Figure 21. Removing and installing a door hinge pin

b. Align the screw hole in the hinge with the screw hole on the right side of the rack cabinet. For
the top hinge, use the empty screw hole from where you removed the doorstop.
c. Attach the hinge to the rack flange with the screw.
d. Partially insert the hinge pins in the hinges.
Step 6. Remove the front door latch from the left side of the rack cabinet and attach it to the right side
of the rack cabinet.


Figure 22. Moving the front door latch

Step 7. Install the door on the right side:


a. Carefully rotate the door 180°.

Figure 23. Rotating and installing the door

b. Grasp the door firmly with both hands, align the door with both hinges, and slide the door
into place.
c. Holding the door with one hand, push the hinge pins down to the closed position.
Step 8. Remove the Lenovo logo from the bottom of the door; then, snap it into place near the top of
the door.


Figure 24. Moving the Lenovo logo

Attaching racks in a suite


Expansion racks come with all the hardware that is required to attach racks together and form a suite. A hex wrench and screws come with the expansion-rack hardware kit. A suite consists of one standard rack and one or more expansion racks. Remove the doors before you attach the racks together.

Note: Before you attach an expansion rack to a standard rack or another expansion rack, you must remove
the outriggers from the racks so that the racks fit together correctly.

To attach racks together in a suite, complete the following steps:


Step 1. Remove the front and rear doors. For more information, see “Removing and installing a front
door” on page 17.
Step 2. On the side of the standard rack cabinet where you are attaching the expansion rack, remove the
side cover. For more information, see “Removing and installing the side covers” on page 15.
Step 3. Where the two racks come together at the top front, align the screw holes of an attachment bracket
(which comes with the expansion rack cabinet) with the holes in the standard rack and expansion
rack (see the following illustration). Secure the bracket to the racks with four screws. Do not fully
tighten the screws. Repeat this step for the bottom front attachment bracket; then, tighten all of
the bracket screws.

Figure 25. Attaching standard and expansion racks to each other to form a suite

Step 4. Repeat Step 3 on page 26 to attach the rear top and bottom attachment brackets.
Step 5. Repeat this procedure to attach additional expansion racks to the suite.

Chapter 3. Installing optional devices
There are many servers and optional devices that you can install in the standard and expansion racks. Always
read the documentation that comes with your server or optional device for detailed installation instructions.

Installation guidelines

Statement 1

DANGER

Electrical current from power, telephone, and communication cables is hazardous.

To avoid a shock hazard:


• Do not connect or disconnect any cables or perform installation, maintenance, or reconfiguration
of this product during an electrical storm.
• Connect all power cords to a properly wired and grounded electrical outlet.
• Connect to properly wired outlets any equipment that will be attached to this product.
• When possible, use one hand only to connect or disconnect signal cables.
• Never turn on any equipment when there is evidence of fire, water, or structural damage.
• Disconnect the attached power cords, telecommunications systems, networks, and modems
before you open the device covers, unless instructed otherwise in the installation and
configuration procedures.
• Connect and disconnect cables as described in the following table when installing, moving, or
opening covers on this product or attached devices.

To connect:
1. Turn everything OFF.
2. First, attach all cables to devices.
3. Attach signal cables to connectors.
4. Attach power cords to outlet.
5. Turn device ON.

To disconnect:
1. Turn everything OFF.
2. First, remove power cords from outlet.
3. Remove signal cables from connectors.
4. Remove all cables from devices.

Rack Safety Information, Statement 2

© Copyright Lenovo 2015, 2016 29


DANGER

• Always lower the leveling pads on the rack cabinet.


• Always install stabilizer brackets on the rack cabinet.
• Always install servers and optional devices starting from the bottom of the rack cabinet.
• Always install the heaviest devices in the bottom of the rack cabinet.

Statement 4

Three graphic illustrations for safety practices when lifting.

CAUTION: Use safe practices when lifting.

18 kg (39.7 lb) 32 kg (70.5 lb) 55 kg (121.2 lb)

Statement 26

CAUTION:
Do not place any object on top of rack-mounted devices.

Installing devices on the rack-mounting flanges


For optional devices that require threaded holes for mounting, you must install either cage nuts or clip
nuts. Use cage nuts in the square mounting holes provided in the rack-mounting flanges in the main
horizontal 42U compartment. Use clip nuts in the round holes provided in the six 1U rear vertical-mounting
compartments. For detailed information about the mounting requirements for a device, see the instructions
that come with the device.

Note: The rack cabinet comes with a supply of cage nuts and clip nuts, and devices that require them
come with the applicable cage nuts or clip nuts.

Installing threaded rails


Step 1. If a device has threaded holes or device rails that have threaded holes, you must install the device
on the rail-mounting flanges on the inside of the rack-mounting flanges. For detailed information
about how to use threaded rails, see the device documentation.

Installing cage nuts


Step 1. Install cage nuts in the rack-mounting flanges with either the cage-nut-insertion tool or a flat-blade
screwdriver. The cage-nut-insertion tool comes with the rack and some optional devices.

Using the cage-nut-insertion tool


To install a cage nut with the cage-nut-insertion tool, complete the following steps.

Figure 26. Installing cage nuts with the cage-nut-insertion tool

Step 1. Determine the hole in which you want to install the cage nut.
Step 2. From the inside of the rack mounting flange, insert one edge of the cage nut into the hole.
Step 3. Push the tool through the hole and hook the other edge of the cage nut.
Step 4. Pull the tool and the cage nut back through the hole to complete the installation of the cage nut.

Using a flat-blade screwdriver


To install a cage nut with a flat-blade screwdriver, complete the following steps.


Figure 27. Installing cage nuts with a flat-blade screwdriver

Step 1. Determine the hole in which you want to install the cage nut.
Step 2. Hold the cage nut in one hand and compress the cage-nut clip with a flat-blade screwdriver.
Step 3. With the clip compressed, push the edge of the cage nut fully into the hole from the inside of
the rack-mounting flange.
Step 4. Release the screwdriver pressure on the clip to lock the cage nut into place.

Installing clip nuts


Step 1. Install clip nuts by sliding them over the mounting holes in the rear vertical 1U mounting
compartments as shown in the following illustration.

Figure 28. Installing clip nuts on the rack-mounting flanges

Installing devices vertically in the rack cabinet


You can use the space on the sides and in the rear of the rack cabinet to vertically mount power distribution
units (PDUs) and console switches.

For more information about installing a device vertically in the rack cabinet, see the documentation that
comes with your PDU or console switch.

Installing a 1U PDU or console switch vertically in the rack side area
The rack cabinet comes with space on the sides to vertically mount PDUs and console switches. Each rack
has six locations, three on each side of the rack cabinet. To install a device in the side area, you must use
flange nuts and the M6 button-head cap screws that come in the hardware kit.

To install a 1U PDU or console switch vertically in the rack side area, complete the following steps:
Step 1. Attach the two mounting brackets to the sides of the PDU or console switch. For more information,
see the documentation that comes with the device.
Step 2. Align the holes in the mounting bracket with the holes in the rack flange.

Figure 29. Installing a 1U PDU or console switch vertically in the rack side area

Step 3. Secure the PDU or console switch to the rack with four flange nuts on the rack flange and four M6
button-head cap screws on the mounting bracket side.

Installing a 1U PDU or console switch vertically in a rack side pocket


The rack cabinet comes with rear vertical side pockets that you can use to vertically mount PDUs and
console switches. Each rack has six locations, three on each side of the rack cabinet. The rear vertical side
pockets have round holes in the rack-mounting flanges. You must install clip nuts in the holes before you
install a device.


To install a 1U PDU or console switch vertically in a rack side pocket, complete the following steps:
Step 1. Attach the two mounting brackets to the sides of the PDU or console switch. For more information,
see the documentation that comes with the device.
Step 2. Install four clip nuts on the rack flanges as shown in the illustration.

Figure 30. Installing a 1U PDU or console switch vertically in a rack side pocket

Step 3. Carefully slide the PDU or console switch into the side pocket and secure the device with four
M6 screws.

Installing a 0U PDU vertically in the rear of a rack cabinet


Step 1. To install a 0U PDU vertically in the rear of a rack cabinet, orient the PDU vertically and insert the
two pegs on the PDU into the keyhole slots in the side of the rack cabinet (see the following
illustration). Push down to secure the PDU in position.

The following illustration shows one way to install a 0U PDU in the rear of the rack cabinet. You can
install up to four 0U PDUs vertically in the rack cabinet, depending on your rack configuration.

Figure 31. Installing a 0U PDU vertically in the rear of the rack cabinet

Chapter 4. Managing cables
Always read the instructions that come with your server or optional device for detailed cable-management
information. Use the following general guidelines when you cable servers or other devices that you install
in a rack cabinet.

Statement 8

DANGER

• Plug power cords from devices in the rack cabinet into electrical outlets that are located near
the rack cabinet and are easily accessible.
• Each rack cabinet might have more than one power cord. Be sure to disconnect all power cords
in the rack cabinet before servicing any device in the rack cabinet.
• Install an emergency-power-off switch if more than one power device (power distribution unit or
uninterruptible power supply) is installed in the same rack cabinet.
• Connect all devices installed in a rack cabinet to power devices installed in the same rack
cabinet. Do not plug a power cord from a device installed in one rack cabinet into a power
device installed in a different rack cabinet.

• Do not run cables in front of or behind other devices in a way that prevents service access to those devices.
• Do not bend cables beyond the specified limits.
• Label all cables so that they are clearly distinguishable from each other.
• When you install devices that are mounted on slide rails, such as servers, observe the following
precautions:
– Run the cables neatly along equipment cable-management arms and secure the cables to the arms,
using provided cable straps.
– Leave enough extra cable so that you can fully extend the device without straining the cables.
– Secure the cables so that you can retract the device without pinching or cutting the cables.
• When you install devices that are mounted on fixed rails, observe the following precautions:
– Run the cables neatly along the posts or side rails in the rack cabinet out of the way of other installed
devices.
– Secure the cables with the provided cable straps.
• Make sure that the cables cannot be pinched or cut by the rack cabinet rear door or other devices.
• Run internal cables that connect devices in adjoining racks through the large openings in the rear of
the rack cabinet.
• Run external cables through the bottom of the rack cabinet or through the cable-access opening in the
top of the rack.

Front-to-rear cable channels


You can route cables from the front to the rear of the rack cabinet by using the cable channels on the sides
of the rack. There are two cable channels on each side of the rack cabinet.


Before you use a cable channel, remove the cable channel cap. You can use a flat-blade screwdriver or a
similar tool to pry the cap off the end of the channel. If a cable channel is not being used, keep the cap in
place to prevent hot air recirculation from the rear of the rack to the front of the rack.

Figure 32. Removing the caps from the front-to-rear cable channels

Using the cable-access bar in the bottom of the rack cabinet


The cable-access bar on the bottom rear of the rack cabinet keeps the external cables in place.

To route external cables through the opening in the bottom rear of the rack, complete the following steps:
Step 1. Remove the four screws that attach the cable-access bar to the rack cabinet, as shown in the
following illustration.

Figure 33. Routing cables using the cable-access bar

Step 2. Route the cables through the opening.


Step 3. Reattach the cable-access bar to the rack cabinet with the four screws that you removed in Step 1 on page 38. Make sure that you do not pinch or cut any cables.

Using the cable-access openings in the top of the rack


Step 1. Use the front and rear rectangular cable-access openings on the top of the rack cabinet to route
external cables and to control the flow of air inside the rack.
Step 2. To adjust a cable-access cover, use a Phillips or flat-blade screwdriver to loosen the two screws on
the sides of the cover. Then, slide the cable-access cover to the position that you want, based on
the requirements for your rack configuration.


Figure 34. Location of the cable-access openings

Use the following guidelines to adjust the size of the cable-access openings:

Top front cable-access opening


Slide the cable-access cover as far forward as possible to close off the open area so no hot
exhaust air can recirculate back through the rack and exhaust out of the top of the rack.

Note: The front opening is very close to the front of the rack and the air inlet to the servers in
the rack.
Top rear cable-access opening
Slide the cover all the way open or closed, or in any intermediate position. Leaving the cover
open provides extra exhaust area for components near the top and bottom of the rack;
however, in some configurations, this shortens the hot air recirculation path from the rear to
the front.

For information about adjusting the air flow in the rack if a Rear Door Heat eXchanger is installed on
the rack cabinet, see the Installation and Maintenance Guide that comes with the heat exchanger.

Mounting an overhead cable tray
Step 1. The rack cabinet comes with pre-drilled holes in the top that you can use to attach an overhead
cable tray (not provided by Lenovo) to the top of the rack suite.

Figure 35. Pre-drilled holes in the top of the rack cabinet

Chapter 5. Moving a rack cabinet
Before you move the rack cabinet to another location, read the important guidelines in this chapter.

When you move a rack cabinet, observe the following safety guidelines.


Statement 8

DANGER

• Plug power cords from devices in the rack cabinet into electrical outlets that are located near
the rack cabinet and are easily accessible.
• Each rack cabinet might have more than one power cord. Be sure to disconnect all power cords
in the rack cabinet before servicing any device in the rack cabinet.
• Install an emergency-power-off switch if more than one power device (power distribution unit or
uninterruptible power supply) is installed in the same rack cabinet.
• Connect all devices installed in a rack cabinet to power devices installed in the same rack
cabinet. Do not plug a power cord from a device installed in one rack cabinet into a power
device installed in a different rack cabinet.

Statement 11

CAUTION:
Removing components from the upper positions in the rack cabinet improves rack stability during
relocation. Follow these general guidelines whenever you relocate a populated rack cabinet within a
room or building:
• Reduce the weight of the rack cabinet by removing equipment, starting at the top of the rack cabinet. When possible, restore the rack cabinet to the configuration in which you received it. If this configuration is not known, do the following:
– Remove all devices in the 22U position and above.
– Ensure that the heaviest devices are installed in the bottom of the rack cabinet.
– Ensure that there are no empty U-levels between devices installed in the rack cabinet below the
22U level.
• If the rack cabinet you are relocating is part of a suite of rack cabinets, detach the rack cabinet
from the suite.
• Inspect the route that you plan to take to eliminate potential hazards.
• Verify that the route that you choose can support the weight of the loaded rack cabinet. Refer to
the documentation that comes with your rack cabinet for the weight of a loaded rack cabinet.
• Verify that all door openings are at least 760 x 2083 mm (30 x 82 in.).

• Ensure that all devices, shelves, drawers, doors, and cables are secure.
• Ensure that the four leveling pads are raised to their highest position.
• Ensure that there is no stabilizer bracket installed on the rack cabinet.
• Do not use a ramp inclined at more than ten degrees.
• Once the rack cabinet is in the new location, do the following:
– Lower the four leveling pads.
– Install stabilizer brackets on the rack cabinet.
– If you removed any devices from the rack cabinet, repopulate the rack cabinet from the lowest
position to the highest position.
If a long-distance relocation is required, restore the rack cabinet to the configuration in which you received it. Pack the rack cabinet in the original packaging material, or equivalent. Also, lower the leveling pads to raise the casters off of the pallet and strap the rack cabinet to the
pallet.

Make sure that a load of 75 kg (165 lb) or more is placed at the bottom of a configured rack that is
not bolted to the floor.

To move the rack cabinet to another location, complete the following general steps:
Step 1. Follow the safety guidelines in this chapter.
Step 2. Know the weight of the rack cabinet. To help determine the weight of the rack, see “Size and
weight specifications” on page 5. A general guideline is to assume a weight of 23 kg (50 lb) per
rack U-space.
Step 3. Use the following weight limit guidelines:
• If the rack cabinet is empty, at least two people are required to move the rack.
• If the rack cabinet weight is between 142 and 227 kg (between 313 and 500 lb), three or four
people are required to move the rack.
• If the rack cabinet weight is greater than 227 kg (500 lb), professional movers are required to
move the rack.
Step 4. Install the outriggers on both sides of the rack cabinet.

DANGER

Always relocate the rack cabinet with the outriggers installed. Keep the outriggers and
install them if you have to move the rack to another location in the future.

Use the 6 mm hex wrench that comes in the hardware kit to install the four bolts that attach each
outrigger to the rack cabinet. Make sure that you tighten the bolts securely.


Figure 36. Installing the outriggers

Step 5. Carefully move the rack cabinet to the new location by using the safety guidelines in this chapter.

Chapter 6. Parts listing
The replaceable components that are available for the racks are described in this chapter.

For an updated parts listing, go to http://publib.boulder.ibm.com/infocenter/systemx/documentation/index.jsp.

Field replaceable units (FRUs) must be replaced only by a trained service technician, unless they are
classified as customer replaceable units (CRUs).

Tier 1 CRU: Replacement of Tier 1 CRUs is your responsibility. If Lenovo installs a Tier 1 CRU at your request without a service contract, you will be charged for the installation.
Tier 2 CRU: You may install a Tier 2 CRU yourself or request Lenovo to install it, at no additional charge, under the type of warranty service that is designated for your product.
FRU: FRUs must be installed only by trained service technicians.

For information about getting service and assistance, see Appendix A “Getting help and
technical assistance” on page 51. For information about the terms of the warranty, go to
https://support.lenovo.com/documents/LNVO-DOCS.


Figure 37. 42U rack and expansion rack parts

Table 3. Parts listing for 42U 1100 mm Enterprise V2 Dynamic Rack and 42U 1100 mm Enterprise V2 Dynamic Expansion Rack

Index  Description                                                                        Part number
1      Front / rear door                                                                  90Y3056
2      Side cover                                                                         90Y3065
3      Adjustable foot                                                                    90Y3063
4      Fixed caster, front                                                                90Y3061
5      Swivel caster, rear                                                                90Y3062
6      Hardware and tool kit (includes tools, screws, washers, cage nuts, and fasteners)  90Y3064
7      Outrigger (side stabilizer)                                                        90Y3066
8      Front stabilizer                                                                   90Y3059
9      Baying kit                                                                         90Y3060
10     Keys, door and side cover                                                          90Y3058
11     Latch, door                                                                        90Y3057

Appendix A. Getting help and technical assistance
If you need help, service, or technical assistance or just want more information about Lenovo products, you
will find a wide variety of sources available from Lenovo to assist you.

Use this information to obtain additional information about Lenovo and Lenovo products, and determine
what to do if you experience a problem with your Lenovo system or optional device.

Note: This section includes references to IBM web sites and information about obtaining service. IBM is
Lenovo's preferred service provider for the System x, Flex System, and NeXtScale System products.

Before you call


Before you call, make sure that you have taken these steps to try to solve the problem yourself.

If you believe that you require warranty service for your Lenovo product, the service technicians will be able
to assist you more efficiently if you prepare before you call.
• Check all cables to make sure that they are connected.
• Check the power switches to make sure that the system and any optional devices are turned on.
• Check for updated software, firmware, and operating-system device drivers for your Lenovo product. The
Lenovo Warranty terms and conditions state that you, the owner of the Lenovo product, are responsible
for maintaining and updating all software and firmware for the product (unless it is covered by an
additional maintenance contract). Your service technician will request that you upgrade your software and
firmware if the problem has a documented solution within a software upgrade.
• If you have installed new hardware or software in your environment, check http://www.lenovo.com/serverproven/ to make sure that the hardware and software are supported by your product.
• Go to http://www.lenovo.com/support to check for information to help you solve the problem.
• Gather the following information to provide to the service technician. This data will help the service
technician quickly provide a solution to your problem and ensure that you receive the level of service
for which you might have contracted.
– Hardware and Software Maintenance agreement contract numbers, if applicable
– Machine type number (Lenovo 4-digit machine identifier)
– Model number
– Serial number
– Current system UEFI and firmware levels
– Other pertinent information such as error messages and logs
• Go to http://www.ibm.com/support/entry/portal/Open_service_request to submit an Electronic Service Request. Submitting an Electronic Service Request starts the process of determining a solution to your problem by making the pertinent information available to the service technicians. The IBM service technicians can start working on your solution as soon as you have completed and submitted an Electronic Service Request.

You can solve many problems without outside assistance by following the troubleshooting procedures
that Lenovo provides in the online help or in the Lenovo product documentation. The Lenovo product
documentation also describes the diagnostic tests that you can perform. The documentation for most
systems, operating systems, and programs contains troubleshooting procedures and explanations of error
messages and error codes. If you suspect a software problem, see the documentation for the operating
system or program.

© Copyright Lenovo 2015, 2016 51


Using the documentation
Information about your Lenovo system and preinstalled software, if any, or optional device is available in the
product documentation. That documentation can include printed documents, online documents, readme
files, and help files.

See the troubleshooting information in your system documentation for instructions for using the diagnostic
programs. The troubleshooting information or the diagnostic programs might tell you that you need
additional or updated device drivers or other software. Lenovo maintains pages on the World Wide Web
where you can get the latest technical information and download device drivers and updates. To access
these pages, go to http://www.lenovo.com/support.

Getting help and information from the World Wide Web


Up-to-date information about Lenovo products and support is available on the World Wide Web.

On the World Wide Web, up-to-date information about Lenovo systems, optional devices, services,
and support is available at http://www.lenovo.com/support. The most current version of the product
documentation is available in the following product-specific Information Centers:
• Flex System products:
http://pic.dhe.ibm.com/infocenter/flexsys/information/index.jsp
• System x products:
http://publib.boulder.ibm.com/infocenter/systemx/documentation/index.jsp
• NeXtScale System products:
http://pic.dhe.ibm.com/infocenter/nxtscale/documentation/index.jsp

How to send DSA data


You can use the Enhanced Customer Data Repository to send diagnostic data to Lenovo.

Before you send diagnostic data to Lenovo, read the terms of use at http://www.ibm.com/de/support/ecurep/terms.html.

You can use any of the following methods to send diagnostic data:
• Standard upload: http://www.ibm.com/de/support/ecurep/send_http.html
• Standard upload with the system serial number: http://www.ecurep.ibm.com/app/upload_hw
• Secure upload: http://www.ibm.com/de/support/ecurep/send_http.html#secure
• Secure upload with the system serial number: https://www.ecurep.ibm.com/app/upload_hw

Creating a personalized support web page


You can create a personalized support web page by identifying Lenovo products that are of interest to you.

To create a personalized support web page, go to http://www.ibm.com/support/mynotifications. From this personalized page, you can subscribe to weekly email notifications about new technical documents, search for information and downloads, and access various administrative services.

52 42U Dynamic Expansion Rack and 42U 1100 mm Enterprise V2 Dynamic Rack Installation Guide
Software service and support
Through IBM Support Line, you can get telephone assistance, for a fee, with usage, configuration, and
software problems with your Lenovo products.

For more information about Support Line and other IBM services, see http://www.ibm.com/services
or see http://www.ibm.com/planetwide for support telephone numbers. In the U.S. and Canada, call
1-800-IBM-SERV (1-800-426-7378).

Hardware service and support


IBM is Lenovo's preferred service provider for the System x, Flex System and NeXtScale System products.

You can receive hardware service through your Lenovo reseller or from IBM. To locate a reseller authorized
by Lenovo to provide warranty service, go to http://www.ibm.com/partnerworld and click Business Partner
Locator. For IBM support telephone numbers, see http://www.ibm.com/planetwide. In the U.S. and
Canada, call 1-800-IBM-SERV (1-800-426-7378).

In the U.S. and Canada, hardware service and support is available 24 hours a day, 7 days a week. In the
U.K., these services are available Monday through Friday, from 9 a.m. to 6 p.m.

Taiwan product service


Use this information to contact product service for Taiwan.

Appendix A. Getting help and technical assistance 53


Appendix B. Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult
your local Lenovo representative for information on the products and services currently available in your area.

Any reference to a Lenovo product, program, or service is not intended to state or imply that only that
Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service
that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any other product, program, or service.

Lenovo may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not give you any license to these patents. You can send
license inquiries, in writing, to:
Lenovo (United States), Inc.
1009 Think Place - Building One
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow
disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply
to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
Lenovo may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo
or third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.

Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this Lenovo product, and use of those Web sites is at your own risk.

Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been
made on development-level systems and there is no guarantee that these measurements will be the same
on generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.



Trademarks
Lenovo, the Lenovo logo, Flex System, System x, NeXtScale System, and x Architecture are trademarks of
Lenovo in the United States, other countries, or both.

Intel and Intel Xeon are trademarks of Intel Corporation in the United States, other countries, or both.

Internet Explorer, Microsoft, and Windows are trademarks of the Microsoft group of companies.

Linux is a registered trademark of Linus Torvalds.

Other company, product, or service names may be trademarks or service marks of others.

Important notes
Processor speed indicates the internal clock speed of the microprocessor; other factors also affect
application performance.

CD or DVD drive speed is the variable read rate. Actual speeds vary and are often less than the possible
maximum.

When referring to processor storage, real and virtual storage, or channel volume, KB stands for 1 024 bytes,
MB stands for 1 048 576 bytes, and GB stands for 1 073 741 824 bytes.

When referring to hard disk drive capacity or communications volume, MB stands for 1 000 000 bytes,
and GB stands for 1 000 000 000 bytes. Total user-accessible capacity can vary depending on operating
environments.
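The two unit conventions above lead to a well-known gap between a drive's labeled and reported capacity. A minimal sketch (Python is used here purely for illustration; the names are not from this document):

```python
# Binary (memory, channel volume) vs. decimal (disk capacity,
# communications volume) unit conventions, as defined above.
BINARY = {"KB": 2**10, "MB": 2**20, "GB": 2**30}
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9}

def to_bytes(value, unit, convention):
    """Convert a value in KB/MB/GB to bytes under the given convention."""
    return value * convention[unit]

# A drive labeled "500 GB" (decimal) holds fewer binary gigabytes:
drive_bytes = to_bytes(500, "GB", DECIMAL)   # 500 000 000 000 bytes
binary_gb = drive_bytes / BINARY["GB"]       # ~465.66 binary GB
print(f"{binary_gb:.2f}")
```

This is why an operating system typically reports less capacity than the drive label states: both numbers are correct, but they use different conventions.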

Maximum internal hard disk drive capacities assume the replacement of any standard hard disk drives
and population of all hard-disk-drive bays with the largest currently supported drives that are available
from Lenovo.

Maximum memory might require replacement of the standard memory with an optional memory module.

Each solid-state memory cell has an intrinsic, finite number of write cycles that the cell can incur. Therefore,
a solid-state device has a maximum number of write cycles that it can be subjected to, expressed as total
bytes written (TBW). A device that has exceeded this limit might fail to respond to system-generated
commands or might be incapable of being written to. Lenovo is not responsible for replacement of a
device that has exceeded its maximum guaranteed number of program/erase cycles, as documented in
the Official Published Specifications for the device.
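To make the TBW limit concrete, a common first-order service-life estimate divides the device's rated TBW by the daily write volume. This sketch is illustrative only; the figures are hypothetical, and the guaranteed TBW for any given device comes from its Official Published Specifications, not from this formula:

```python
# Rough, first-order endurance estimate for a solid-state device.
# The numbers below are illustrative assumptions, not Lenovo
# specifications for any particular drive.
def days_until_tbw_limit(tbw_terabytes, gb_written_per_day):
    """Days of service before the total-bytes-written limit is reached."""
    return (tbw_terabytes * 1000) / gb_written_per_day

# Example: a hypothetical drive rated at 1300 TBW sustaining
# 400 GB of host writes per day:
print(round(days_until_tbw_limit(1300, 400)))  # 3250 days (~8.9 years)
```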

Lenovo makes no representations or warranties with respect to non-Lenovo products. Support (if any) for
the non-Lenovo products is provided by the third party, not Lenovo.

Some software might differ from its retail version (if available) and might not include user manuals or all
program functionality.

Recycling information
Lenovo encourages owners of information technology (IT) equipment to responsibly recycle their
equipment when it is no longer needed. Lenovo offers a variety of programs and services to assist
equipment owners in recycling their IT products. For information on recycling Lenovo products, go to:
http://www.lenovo.com/recycling.

Particulate contamination
Attention: Airborne particulates (including metal flakes or particles) and reactive gases acting alone or in
combination with other environmental factors such as humidity or temperature might pose a risk to the
device that is described in this document.

Risks that are posed by the presence of excessive particulate levels or concentrations of harmful gases
include damage that might cause the device to malfunction or cease functioning altogether. This
specification sets forth limits for particulates and gases that are intended to avoid such damage. The limits
must not be viewed or used as definitive limits, because numerous other factors, such as temperature
or moisture content of the air, can influence the impact of particulates or environmental corrosives and
gaseous contaminant transfer. In the absence of specific limits that are set forth in this document, you must
implement practices that maintain particulate and gas levels that are consistent with the protection of human
health and safety. If Lenovo determines that the levels of particulates or gases in your environment have
caused damage to the device, Lenovo may condition provision of repair or replacement of devices or
parts on implementation of appropriate remedial measures to mitigate such environmental contamination.
Implementation of such remedial measures is a customer responsibility.

Table 4. Limits for particulates and gases

Contaminant   Limits
Particulate   • The room air must be continuously filtered with 40% atmospheric dust spot efficiency (MERV 9) according to ASHRAE Standard 52.2¹.
              • Air that enters a data center must be filtered to 99.97% efficiency or greater, using high-efficiency particulate air (HEPA) filters that meet MIL-STD-282.
              • The deliquescent relative humidity of the particulate contamination must be more than 60%².
              • The room must be free of conductive contamination such as zinc whiskers.
Gaseous       • Copper: Class G1 as per ANSI/ISA 71.04-1985³
              • Silver: Corrosion rate of less than 300 Å in 30 days

¹ ASHRAE 52.2-2008 - Method of Testing General Ventilation Air-Cleaning Devices for Removal Efficiency by Particle Size. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
² The deliquescent relative humidity of particulate contamination is the relative humidity at which the dust absorbs enough water to become wet and promote ionic conduction.
³ ANSI/ISA-71.04-1985. Environmental conditions for process measurement and control systems: Airborne contaminants. Instrument Society of America, Research Triangle Park, North Carolina, U.S.A.

Telecommunication regulatory statement


This product may not be certified in your country for connection by any means whatsoever to interfaces of
public telecommunications networks. Further certification may be required by law prior to making any such
connection. Contact a Lenovo representative or reseller for any questions.

Electronic emission notices
When you attach a monitor to the equipment, you must use the designated monitor cable and any
interference suppression devices that are supplied with the monitor.

Federal Communications Commission (FCC) statement


Note: This equipment has been tested and found to comply with the limits for a Class A digital device,
pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against
harmful interference when the equipment is operated in a commercial environment. This equipment
generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance
with the instruction manual, may cause harmful interference to radio communications. Operation of this
equipment in a residential area is likely to cause harmful interference, in which case the user will be required
to correct the interference at his own expense.

Properly shielded and grounded cables and connectors must be used in order to meet FCC emission limits.
Lenovo is not responsible for any radio or television interference caused by using other than recommended
cables and connectors or by unauthorized changes or modifications to this equipment. Unauthorized
changes or modifications could void the user's authority to operate the equipment.

This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1)
this device may not cause harmful interference, and (2) this device must accept any interference received,
including interference that might cause undesired operation.

Industry Canada Class A emission compliance statement


This Class A digital apparatus complies with Canadian ICES-003.

Avis de conformité à la réglementation d'Industrie Canada


Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada.

Australia and New Zealand Class A statement


Attention: This is a Class A product. In a domestic environment this product may cause radio interference
in which case the user may be required to take adequate measures.

European Union EMC Directive conformance statement


This product is in conformity with the protection requirements of EU Council Directive 2014/30/EU on the
approximation of the laws of the Member States relating to electromagnetic compatibility. Lenovo cannot
accept responsibility for any failure to satisfy the protection requirements resulting from a non-recommended
modification of the product, including the installation of option cards from other manufacturers.

This product has been tested and found to comply with the limits for Class A equipment according to
European Standards harmonized in the Directives in compliance. The limits for Class A equipment were
derived for commercial and industrial environments to provide reasonable protection against interference
with licensed communication equipment.

Lenovo, Einsteinova 21, 851 01 Bratislava, Slovakia

Warning: This is a Class A product. In a domestic environment this product may cause radio interference
in which case the user may be required to take adequate measures.

Germany Class A statement

Deutschsprachiger EU Hinweis: Hinweis für Geräte der Klasse A EU-Richtlinie zur Elektromagnetischen Verträglichkeit

Dieses Produkt entspricht den Schutzanforderungen der EU-Richtlinie 2014/30/EU (früher 2004/108/EC) zur Angleichung der Rechtsvorschriften über die elektromagnetische Verträglichkeit in den EU-Mitgliedsstaaten und hält die Grenzwerte der Klasse A der Norm gemäß Richtlinie.

Um dieses sicherzustellen, sind die Geräte wie in den Handbüchern beschrieben zu installieren und zu
betreiben. Des Weiteren dürfen auch nur von der Lenovo empfohlene Kabel angeschlossen werden.
Lenovo übernimmt keine Verantwortung für die Einhaltung der Schutzanforderungen, wenn das Produkt
ohne Zustimmung der Lenovo verändert bzw. wenn Erweiterungskomponenten von Fremdherstellern ohne
Empfehlung der Lenovo gesteckt/eingebaut werden.

Deutschland:

Einhaltung des Gesetzes über die elektromagnetische Verträglichkeit von Betriebsmitteln Dieses
Produkt entspricht dem „Gesetz über die elektromagnetische Verträglichkeit von Betriebsmitteln“ EMVG
(früher „Gesetz über die elektromagnetische Verträglichkeit von Geräten“). Dies ist die Umsetzung der
EU-Richtlinie 2014/30/EU (früher 2004/108/EC) in der Bundesrepublik Deutschland.

Zulassungsbescheinigung laut dem Deutschen Gesetz über die elektromagnetische Verträglichkeit von Betriebsmitteln, EMVG vom 20. Juli 2007 (früher Gesetz über die elektromagnetische Verträglichkeit von Geräten), bzw. der EMV EU Richtlinie 2014/30/EU (früher 2004/108/EC), für Geräte der Klasse A.

Dieses Gerät ist berechtigt, in Übereinstimmung mit dem Deutschen EMVG das EG-Konformitätszeichen
- CE - zu führen. Verantwortlich für die Konformitätserklärung nach Paragraf 5 des EMVG ist die Lenovo
(Deutschland) GmbH, Meitnerstr. 9, D-70563 Stuttgart.

Informationen in Hinsicht EMVG Paragraf 4 Abs. (1) 4: Das Gerät erfüllt die Schutzanforderungen nach
EN 55024 und EN 55022 Klasse A.

Nach der EN 55022: „Dies ist eine Einrichtung der Klasse A. Diese Einrichtung kann im Wohnbereich
Funkstörungen verursachen; in diesem Fall kann vom Betreiber verlangt werden, angemessene Maßnahmen
durchzuführen und dafür aufzukommen.“

Nach dem EMVG: „Geräte dürfen an Orten, für die sie nicht ausreichend entstört sind, nur mit besonderer
Genehmigung des Bundesministers für Post und Telekommunikation oder des Bundesamtes für Post und
Telekommunikation betrieben werden. Die Genehmigung wird erteilt, wenn keine elektromagnetischen
Störungen zu erwarten sind.“ (Auszug aus dem EMVG, Paragraph 3, Abs. 4). Dieses Genehmigungsverfahren
ist nach Paragraph 9 EMVG in Verbindung mit der entsprechenden Kostenverordnung (Amtsblatt 14/93)
kostenpflichtig.

Anmerkung: Um die Einhaltung des EMVG sicherzustellen sind die Geräte, wie in den Handbüchern
angegeben, zu installieren und zu betreiben.

Japanese electromagnetic compatibility statements

Japan VCCI Class A statement

Japanese Electrical Appliance and Material Safety Law statement (for detachable AC power cord)

JEITA harmonics guideline - Japanese Statement for AC power consumption (W)

JEITA harmonics guideline - Japanese Statement of Compliance for Products Less than or Equal
to 20A per phase

JEITA harmonics guideline - Japanese Statement of Compliance for Products More than 20A

Korea Communications Commission (KCC) statement

This is Class A electromagnetic compatibility equipment for business use. Sellers and users should note that this equipment is intended for use in areas other than the home.

Russia Electromagnetic Interference (EMI) Class A statement

People's Republic of China Class A electronic emission statement

Taiwan Class A compliance statement

Taiwan BSMI RoHS declaration

Index

0U PDU, installing in rear of rack 34
1U PDU
  installing in side of rack 33
  installing in side pocket of rack 33

A
assistance, getting 51
attaching racks in a suite 26
attention notices 1
Australia Class A statement 58

B
bolting rack to floor 8

C
cable channels, front-to-rear 37
cable tray, mounting 41
cable-access bar 38
cable-access covers
  adjusting 39
  removing 39
cable-access opening 39
cables, managing 37
cage nuts, installing 31
cage-nut-insertion tool 31
Canada Class A electronic emission statement 58
caps, removing from cable channel 37
caution statements 1
China Class A electronic emission statement 61
Class A electronic emission notice 58
clip nuts, installing 32–33
console switch
  installing in side of rack 33
  installing in side pocket of rack 33
contamination, particulate and gaseous 57
creating a personalized support web page 52
CRU part numbers 47
custom support web page 52
cutting holes in floor tiles 6

D
danger statements 1
documentation
  using 52
documentation updates, obtaining 1
door
  reversing front 19
door latch, removing 19
door, front
  installing and removing 17
door, rear
  installing and removing 18
doorstop
  installing 19
  removing 19
DSA, sending data 52

E
electronic emission Class A notice 58
European Union EMC Directive conformance statement 58

F
FCC Class A notice 58
floor layout, planning 6
floor tiles, cutting holes 6
front door
  installing and removing 17
front door, reversing 19
front stabilizer, installing 8
front-to-rear cable channels 37
FRU part numbers 47

G
gaseous contamination 57
Germany Class A statement 59

H
hardware service and support telephone numbers 53
help
  from the World Wide Web 52
  sending diagnostic data 52
  sources of 51
hinge pins
  installing 19
  removing 19

I
important notices 1, 56
information center 52
installing
  cage nuts 31
  clip nuts 32–33
  devices vertically in rack 32
  devices with threaded rails 31
  doorstop 19
  front door 17
  front stabilizer 8
  hinge pins 19
  optional devices 29
  outriggers 6, 43
  rear door 18
  recirculation prevention plate 8
  side covers 15
installing rack, tasks overview 1

J
Japanese electromagnetic compatibility statements 60

K
Korea Class A electronic emission statement 60

L
leveling pads
  how to lower 8

M
managing cables 37
moving a rack 43

N
New Zealand Class A statement 58
notes 1
notes, important 56
notices 55
  electronic emission 58
  FCC, Class A 58
notices and statements 1

O
optional devices, installing 29
outriggers, removing and installing 6, 43
overhead cable tray, mounting 41

P
particulate contamination 57
parts listing 47
People's Republic of China Class A electronic emission statement 61
planning floor layout 6
product service, Taiwan 53

R
rack
  bolting to floor 8
  moving 43
  size and weight specifications 5
rack, installing tasks overview 1
rear door, installing and removing 18
recirculation prevention plate, installing 8
removing
  door latch 19
  doorstop 19
  front door 17
  hinge pins 19
  outriggers 6
  rear door 18
  side covers 15
reversing front door 19
Russia Class A electronic emission statement 61

S
safety iii
Safety Information 1
safety statements iii–iv
sending diagnostic data 52
service and support
  before you call 51
  hardware 53
  software 53
side covers, installing and removing 15
size and weight specifications of rack 5
software service and support telephone numbers 53
statements and notices 1
suite of racks, attaching 26
support web page, custom 52

T
Taiwan BSMI RoHS declaration 62
Taiwan Class A electronic emission statement 61
Taiwan product service 53
telecommunication regulatory statement 57
telephone numbers 53
threaded rails, installing devices with 31
trademarks 56

U
United States FCC Class A notice 58

W
weight and size specifications of rack 5
ThinkSystem 18.5-inch LCD Console
Product Guide

The ThinkSystem 18.5" LCD Console is a flat-panel console that offers a convenient way to manage space-constrained rack environments from a single console. This densely packed 1U solution lets you easily set up and control rack-mounted servers. It offers additional space savings by allowing the mounting of a KVM switch in the space behind the console. The console includes an 18.5-inch LCD display and US English keyboard, and is based on the Vertiv CLRA Local Rack Access Console, model CLRA19KMM.
The ThinkSystem 18.5" LCD Console is shown in the following figure.

Figure 1. ThinkSystem 18.5" LCD Console

Did you know?


The ThinkSystem 18.5" LCD Console, when used with a Lenovo KVM console switch, provides a convenient way to locally access all systems installed in a rack. Consuming only 1U of rack space, it uses the least amount of space possible and ensures that your rack aisles remain free of equipment and cables.



ThinkSystem 18.5-inch LCD Console 1
Part number information
The following table lists the ordering part number and feature code for the console.

Table 1. Ordering information


Part number Feature code Description
4XF7A84188 BTY0 ThinkSystem 18.5" LCD Console

The part number for the console includes the following items:
• One console unit with an 18.5-inch LCD display, cable management arm, and integrated USB keyboard
• Rail slides and mounting hardware
• 1.8m 10A/250V C13-C14 power cable
The following figure shows the rear of the console.

Figure 2. Rear of the ThinkSystem 18.5" LCD Console

Features
The features of the ThinkSystem 18.5" LCD Console include:
• Integrated 18.5-inch high-resolution display supports selectable display methods and resolutions
• The flat-panel display extends well above the keyboard and offers an easy viewing angle
• Installs in a rack and consumes only 1U of rack space
• Supports the mounting of a KVM switch behind the console to save space
• Tool-less installation for ease of use and installation in minutes
• Includes slides and integrated cable-management arm so the unit can close and slide back into the rack
• Provides easy local access and control of a server



• Compatible with Console Managers for control and access of multiple servers
• Tested to ensure easy integration into Lenovo rack solutions
• A complete, ready-to-install package
• Can be shipped installed in the rack
• Includes a cable-management arm (CMA)
• Includes a USB-attached 100-key US English keyboard with touchpad pointing device and number keypad
• Includes two passthrough USB 2.0 ports accessible from the front, for the use of optical media, secure access keys, or other USB devices

Technical specifications
The following table lists the specifications of the ThinkSystem 18.5" LCD Console.

Table 2. Specifications
Feature Value
Display
Display type 18.5" LCD display, 16:9 aspect ratio
Display resolution 1920 x 1080, 60Hz maximum resolution
1366 x 768, 60Hz standard resolution
800 x 600, 60Hz minimum resolution
Scaling choices Full Screen, Aspect, 1:1
Connector VGA
Display active area 410 mm x 230 mm
Diagonal viewable image 18.5 inches
Pixel pitch 300 x 300
Viewing angle 160° Horizontal (typical)
150° Vertical (typical)
Luminance 250 cd/m2
Keyboard and mouse
Keyboard Full size 100 keyboard with number keypad
Pointing device Integrated touchpad
Connector USB
USB ports
Ports 2x USB 2.0 ports, passthrough
Power source
Power 22W maximum, 17W nominal, <1W standby
Power supply input 110-240 Vac auto-ranging
Power connector IEC C14
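The display figures in Table 2 can be cross-checked against each other: pixel pitch follows directly from the diagonal size and the resolution, which is consistent with a roughly 0.300 mm pitch at the panel's 1366 x 768 standard resolution. A sketch (Python, purely for illustration):

```python
import math

def pixel_pitch_mm(diagonal_inches, h_pixels, v_pixels):
    """Pixel pitch (mm) of a panel from its diagonal and resolution."""
    diagonal_mm = diagonal_inches * 25.4
    diagonal_px = math.hypot(h_pixels, v_pixels)   # diagonal in pixels
    return diagonal_mm / diagonal_px

# At 1366 x 768 the pitch works out to roughly 0.30 mm;
# at the 1920 x 1080 maximum resolution it is about 0.21 mm.
print(round(pixel_pitch_mm(18.5, 1366, 768), 3))
print(round(pixel_pitch_mm(18.5, 1920, 1080), 3))
```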

Supported servers
The ThinkSystem 18.5" LCD Console can be used with a server or KVM console switch with VGA and USB
local console ports.

Warranty
The ThinkSystem 18.5" LCD Console has a 3-year warranty. When ordered as a feature code as part of a
supported Lenovo rack cabinet, the console assumes the rack cabinet’s base warranty and any warranty
upgrades.

Physical specifications
The ThinkSystem 18.5" LCD Console has the following physical specifications.
• Height (rack units): 1U
• Dimensions (console only): 481mm x 526mm x 43mm
• Weight (console only): 8.3kg
• Supported rack post distances (measured outside-to-outside):
  - Console only: 620mm - 920mm
  - Including space for KVM switch: 730mm - 920mm
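The supported rack post distances above amount to a simple range check, which can be sketched as follows (Python for illustration; the function name is not from this document):

```python
# Supported rack post distances (outside-to-outside), in millimetres,
# taken from the physical specifications above.
CONSOLE_ONLY = (620, 920)
WITH_KVM_SWITCH = (730, 920)

def mounting_supported(post_distance_mm, with_kvm=False):
    """True if the console can mount at this rack post distance."""
    low, high = WITH_KVM_SWITCH if with_kvm else CONSOLE_ONLY
    return low <= post_distance_mm <= high

print(mounting_supported(700))                  # True  (console only)
print(mounting_supported(700, with_kvm=True))   # False (needs >= 730 mm)
```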

Operating environment
The ThinkSystem 18.5" LCD Console is supported in the following environment:
• Temperature
  - 0°C - 50°C, operation
  - -20°C - 60°C, storage
• Humidity
  - 10% - 80%, operation
  - 5% - 95%, storage
• Altitude
  - 0m to 3000m, operation
  - -15m to 10,000m, storage

Agency approvals
The ThinkSystem 18.5" LCD Console conforms to the following regulations:
• UL
• CE
• CCC
• PSE
• RCM

Rack cabinets
The following table lists the supported rack cabinets.

Table 3. Rack cabinets


Model Description
7D6DA007WW ThinkSystem 42U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6DA008WW ThinkSystem 42U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA009WW ThinkSystem 48U Onyx Primary Heavy Duty Rack Cabinet (1200mm)
7D6EA00AWW ThinkSystem 48U Pearl Primary Heavy Duty Rack Cabinet (1200mm)
93604PX 42U 1200mm Deep Dynamic Rack
93614PX 42U 1200mm Deep Static Rack
93634PX 42U 1100mm Dynamic Rack
93634EX 42U 1100mm Dynamic Expansion Rack
93074RX 42U Standard Rack (1000mm)

For specifications about these racks, see the Lenovo Rack Cabinet Reference, available from:
https://lenovopress.com/lp1287-lenovo-rack-cabinet-reference
For more information, see the list of Product Guides in the Rack cabinets category:
https://lenovopress.com/servers/options/racks

Rack KVM console switches


Certain rack console switches can be mounted behind the ThinkSystem 18.5" LCD Console in the same 1U
space, as listed in the following table.

Table 4. KVM console switches


Part number Feature code Description Mounted in 1U space
1754D2X 1754HC2 6695 Global 4x2x32 Console Manager (GCM32) Supported
1754D1X 1754HC1 6694 Global 2x2x16 Console Manager (GCM16) Supported
1754A2X 1754HC4 0726 Local 2x16 Console Manager (LCM16) Supported
1754A1X 1754HC3 0725 Local 1x8 Console Manager (LCM8) Supported
1754A1T 1754HC5 B38H ThinkSystem Analog 1x8 KVM Switch No support
1754D1T 1754HC6 B38J ThinkSystem Digital 2x1x16 KVM Switch No support

For more information, see the Lenovo Press Product Guides in the KVM Switches & Consoles category:
https://lenovopress.com/servers/options/kvm

Related publications and links


For more information, see these resources:
• ThinkSystem 18.5" LCD Console User Manuals:
  https://www.vertiv.com/en-us/support/avocent-support-lenovo/
• ServerProven web site:
  http://www.lenovo.com/us/en/serverproven/

Related product families
Product families related to this document are the following:
• KVM Switches & Consoles



Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your
local Lenovo representative for information on the products and services currently available in your area. Any
reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product,
program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any
Lenovo intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify
the operation of any other product, program, or service. Lenovo may have patents or pending patent applications
covering subject matter described in this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to:

Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not
affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express
or implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information
contained in this document was obtained in specific environments and is presented as an illustration. The result
obtained in other operating environments may vary. Lenovo may use or distribute any of the information you supply
in any way it believes appropriate without incurring any obligation to you.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials
for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was
determined in a controlled environment. Therefore, the result obtained in other operating environments may vary
significantly. Some measurements may have been made on development-level systems and there is no guarantee
that these measurements will be the same on generally available systems. Furthermore, some measurements may
have been estimated through extrapolation. Actual results may vary. Users of this document should verify the
applicable data for their specific environment.

© Copyright Lenovo 2025. All rights reserved.

This document, LP1487, was created or updated on March 17, 2023.


Send us your comments in one of the following ways:
Use the online Contact us review form found at:
https://lenovopress.lenovo.com/LP1487
Send your comments in an e-mail to:
comments@lenovopress.com
This document is available online at https://lenovopress.lenovo.com/LP1487.



Trademarks
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other
countries, or both. A current list of Lenovo trademarks is available on the Web at
https://www.lenovo.com/us/en/legal/copytrade/.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ServerProven®
ThinkSystem®
Other company, product, or service names may be trademarks or service marks of others.



Dell Unity XT: Introduction to the Platform
A Detailed Review
October 2022

H17782.5

White Paper

Abstract
This white paper introduces the Dell Unity XT series platform which
includes Unity XT 380/F, 480/F, 680/F, and 880/F models. It also
describes the Dell Unity XT systems and the similarities and
differences between the All-Flash and Hybrid variants.

Dell Technologies
Copyright

The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2019-2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks
of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.
Published in the USA October 2022 H17782.5.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.

2 Dell Unity XT: Introduction to the Platform


Contents
Executive summary
Introduction
Dell Unity Family Overview
Hardware Overview
Dell UnityVSA
Dell Unity Cloud Edition
Conclusion
References


Executive summary

Overview

In this constantly changing world of increasing complexity and scale, the need for an
easy-to-use intelligent storage system has only grown greater. Customers using new
applications and solutions require dependable storage and are often tasked with the
challenge of “doing more with less”. The Dell Unity family addresses this challenge by
packaging a powerful storage system into a cost and space-efficient profile. Some of Dell
Unity’s highlight features include:
• Dual-Active Architecture – Dell Unity uses both Storage Processors (SPs) to serve
host I/O and run data operations in an active/active manner, making efficient use of
all available hardware resources and optimizing performance, cost, and density in
customer data centers.
• Truly Unified Offering – Dell Unity delivers a full block and file unified environment
in a single 2U enclosure. Use the same Pool to provision and host LUNs,
Consistency Groups, NAS Servers, File Systems, and Virtual Volumes alike. The
Unisphere management interface offers a consistent look and feel whether you are
managing block resources, file resources, or both.
• A Modern, Simple Interface – Unisphere, Dell Unity’s management interface has
been built with the modern-day data center administrator in mind. Using browser-
native HTML5, Unisphere can be used across various Operating Systems and web
browsers without the need of additional plug-ins. The interface has been designed
to mimic the practical flow of an administrator’s daily life, organizing provisioning
and management functions into easy-to-find categories and sections.
• Flexible Deployment Options – With Dell Unity, a deployment offering exists for a
range of different use cases and budgets, from the virtual offering of Dell UnityVSA
to the purpose-built Dell Unity platform. The purpose-built Dell Unity system can be
configured as an All Flash system with only solid-state drives, or as a Hybrid
system with a mix of solid-state and spinning media to deliver the best of both
performance and economics.
• Inline Data Reduction – Data reduction technologies play a critical role in
environments in which storage administrators are attempting to do more with less.
Dell Unity Data Reduction aids in this effort by attempting to reduce the amount of
physical storage needed to save a dataset, which helps reduce the Total Cost of
Ownership of a Dell Unity storage system. Dell Unity Data Reduction provides
space savings by using data deduplication and compression. Data reduction is
easy to manage, and once enabled, is intelligently controlled by the storage
system.
• Optional I/O Modules – A diverse variety of connectivity is supported on the
purpose-built Dell Unity platform. I/O Modules that support both iSCSI and NAS
may be used for both protocols simultaneously.
• Expanded File System – At its heart, the Dell Unity File System is a 64-bit based
file system architecture that provides increased maximums to keep pace with the
modern data center. Provision file systems and VMware NFS Datastores in sizes as
large as 256TB, and enjoy creating multiple millions of files per directory and
subdirectories per directory.


• Native Data Protection – Security and availability of data are critical concerns for
many customers, and Dell Unity offers multiple solutions to address this need.
Unified Snapshots provide point-in-time copies of block and file data that can be
used for backup and restoration purposes. Asynchronous Replication offers an IP-
based replication strategy within a system or between two systems. Synchronous
Block Replication benefits FC environments that are close together and require a
zero-data loss schema. Data at Rest Encryption ensures user data on the system is
protected from physical theft and can stand in the place of drive disposal
processes, such as shredding.
• VMware Integration – Discovery of a VMware environment has never been easier,
with Dell Unity’s VMware Aware Integration (VAI). Use VAI to retrieve the ESXi host
and vCenter environment details into Unisphere for efficient management of your
virtualization environment. Support for VMware vStorage APIs for Storage
Awareness (VASA) enables the provisioning and use of VMware Virtual
Volumes (vVols), a virtualization storage technology delivered by VMware’s ESXi.
Dell Unity supports vVols for both block and file configurations.
• Multiple Management Paths – Configure and manage your Dell Unity system in
the way you are most comfortable. The Unisphere GUI is browser-based and
provides a graphical view of your system and its resources. Use Unisphere CLI
(UEMCLI) over SSH or over a Windows host to run CLI commands against the
system. Dell Unity also has a full REST API library available. Any function possible
in Unisphere is also possible using Dell Unity REST API. Developing scripts or
integrating management of your Dell Unity system into existing frameworks has
never been easier.
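
The REST API described above can be exercised from any HTTP client. The sketch below builds the request for the basicSystemInfo query (an unauthenticated Unity REST collection resource) and parses the collection response shape; the management IP is a placeholder, and actually sending the request is left to the usage note.

```python
import json

# Placeholder management IP (RFC 5737 documentation address); substitute your array's.
UNITY_MGMT = "192.0.2.10"

def system_info_request(mgmt_ip):
    """Build the URL and headers for the basicSystemInfo collection query."""
    url = (f"https://{mgmt_ip}/api/types/basicSystemInfo/instances"
           "?fields=name,model,softwareVersion")
    # Unity REST requires the X-EMC-REST-CLIENT header on every request.
    headers = {"X-EMC-REST-CLIENT": "true", "Accept": "application/json"}
    return url, headers

def parse_instances(body):
    """Unity REST collection responses wrap each instance in an 'entries'
    list under a 'content' key; flatten that into a list of dicts."""
    return [entry["content"] for entry in json.loads(body).get("entries", [])]

# Abbreviated example of the response shape returned by the array:
sample = ('{"entries": [{"content": {"name": "APM00123456789", '
          '"model": "Unity 480F", "softwareVersion": "5.2.0"}}]}')
print(parse_instances(sample)[0]["model"])  # prints Unity 480F
```

With the third-party requests library the call would be roughly `requests.get(url, headers=headers, verify=False).json()`; authenticated resources additionally need HTTP basic credentials and, for POST requests, the EMC-CSRF-TOKEN header returned by the array.
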
For hardware details about the X00/F and X50/F Dell Unity models, see the Dell Unity:
Introduction to the Platform white paper available on the Dell Technologies Info Hub.

For a software overview on all Dell Unity Family systems, see the Dell Unity: Operating
Environment (OE) Overview white paper available on the Dell Technologies Info Hub.

Audience

This white paper is intended for IT administrators, storage architects, partners, Dell
employees and any other individuals involved in the evaluation, acquisition, management,
operation, or design of a Dell networked storage environment using the Dell Unity XT
Series family of storage systems.

Revisions

Date            Description
June 2019       Initial release – OE 5.0
June 2021       OE 5.1 update
April 2022      OE 5.2 update and rebranding
June 2022       DC NEBS Support for Unity XT 380/F, 480/F
October 2022    Embedded network port depopulation

We value your feedback

Dell Technologies and the authors of this document welcome your feedback on this
document. Contact the Dell Technologies team by email.


Author: Ryan Meyer

Note: For links to other documentation for this topic, see the Dell Unity XT Info Hub.


Introduction
This white paper provides an overview of the Dell Unity XT Series platform relating
specifically to hardware and includes information about the available virtual deployments
of Dell Unity. For information about using software features on the Dell Unity platform, the
Dell Unity: Operating Environment (OE) Overview white paper on Dell Technologies Info
Hub provides an overview of available software and explains other product integration
into the platform. Also, step-by-step instructions for using software features within Dell
Unity can be found in Unisphere Online Help.

Terminology

• Dynamic Host Configuration Protocol (DHCP) – A protocol used to handle the
allocation and administration of IP address space from a centralized server to
devices on a network.
• Fibre Channel Protocol (FCP) – A transport protocol used to carry Small
Computer Systems Interface (SCSI) commands over a Fibre Channel
network.
• File System – A storage resource that can be accessed through file sharing
protocols such as SMB or NFS.
• Fully Automated Storage Tiering for Virtual Pools (FAST VP) – A feature that
relocates data to the most appropriate disk type depending on activity level to
improve performance while reducing cost.
• FAST Cache – A feature that allows Flash drives to be configured as a large
capacity secondary cache for the Pools on the system.
• Internet Small Computer System Interface (iSCSI) – Provides a mechanism for
accessing block-level data storage over network connections.
• Logical Unit Number (LUN) – A block-level storage device that can be shared out
using a protocol such as iSCSI.
• Network Attached Storage (NAS) Server – A file-level storage server used to host
file systems. A NAS Server is required in order to create file systems that use SMB
or NFS shares, and VMware NFS Datastores and VMware Virtual Volumes (File).
• Network File System (NFS) – An access protocol that allows data access from
Linux/UNIX hosts on a network.
• Pool – A repository of drives from which storage resources such as LUNs and file
systems can be created.
• REpresentational State Transfer (REST) API – A lightweight communications
architecture style that enables the execution of discrete actions against web
services.
• Server Message Block (SMB) – A network file sharing protocol, sometimes
referred to as CIFS, used by Microsoft Windows environments. SMB is used to
provide access to files and folders from Windows hosts on a network.
• Snapshot – A point-in-time view of data stored on a storage resource. A user can
recover files from a snapshot, restore a storage resource from a snapshot, or
provide access to a host.


• Software Defined Storage – A storage architecture where the software storage
stack is decoupled from the physical storage hardware.
• Storage Policy Based Management (SPBM) – Using storage policies to dictate
where a VM will be stored, as opposed to choosing a datastore manually.
• Storage Processor (SP) – A storage node that provides the processing resources
for performing storage operations and servicing I/O between storage and hosts.
• Unisphere – An HTML5 graphical user interface that is used to manage Dell Unity
systems.
• Unisphere Command Line Interface (UEMCLI) – An interface that allows a user
to perform tasks on the storage system by typing commands instead of using the
graphical user interface.
• Virtual Storage Appliance (VSA) – A storage node that runs as a virtual machine
instead of on purpose-built hardware.
• vSphere API for Array Integration (VAAI) – A VMware API that allows storage-
related tasks to be offloaded to the storage system.
• vSphere API for Storage Awareness (VASA) – A VMware API that provides
additional insight about the storage capabilities in vSphere.
• Virtual Volumes (vVols) – A VMware storage framework which allows VM data to
be stored on individual Virtual Volumes. This allows for data services to be applied
at a VM-granularity level while utilizing Storage Policy Based Management (SPBM).

Dell Unity Family Overview

Figure 1. Unity XT

Unity XT Hybrid and All Flash storage systems implement an integrated architecture for
block, file, and VMware vVols with concurrent support for native NAS, iSCSI, and Fibre
Channel protocols based on the powerful family of Intel processors. Each system
leverages dual storage processors, full 12-Gb SAS back-end connectivity and patented
multi-core architected operating environment to deliver unparalleled performance and
efficiency. Additional storage capacity is added using Disk Array Enclosures (DAEs). Dell
Unity successfully meets many storage requirements of today's IT professionals:

Dell Unity is Simple

Dell Unity solutions set new standards for storage systems with compelling simplicity,
modern design, affordable prices, and flexible deployments - to meet the needs of
resource-constrained IT professionals in large or small companies.


Dell Unity is Modern

Dell Unity has a modern 2U architecture designed for all-flash, built to support
high-density SSDs including 3D NAND TLC (triple-level cell) drives. Dell Unity includes
automated data lifecycle management to lower costs, integrated copy data management
to control local point-in-time snapshots, built-in encryption and remote replication, and
deep ecosystem integration with VMware and Microsoft.

Dell Unity is Affordable

Our dual-active controller system was designed to optimize the performance, density, and
cost of your storage to deliver all-flash or hybrid configurations for much less than you
thought possible.

Dell Unity is Flexible

Dell Unity is available as a virtual storage appliance, purpose-built all flash or hybrid
configurations, or as converged systems - with one Dell Unity operating environment that
connects them all together.

For a full workflow on installing a brand-new Dell Unity system in a data center, see the
Dell Unity Quick Start Installation video on Dell Technologies Info Hub.

Hardware Overview

Dell Unity Family – Available Models

The purpose-built Dell Unity system is offered in multiple physical hardware
models in both Hybrid and All-Flash configurations. For Hybrid systems, the
platform starts with the Dell Unity 300 and scales up to the Unity XT 880, while for
All-Flash systems, the platform starts with the Dell Unity 300F and scales up to the Unity
XT 880F. The models share several similarities in form factor and connectivity, but scale
in processing and memory capabilities (see Table 1, Table 2, and Table 3).

For software-defined offerings, Dell Unity Family offers a virtual deployment of Dell Unity
called Dell UnityVSA which can be installed on applicable VMware ESXi hosts. There is
also the option of a dual-SP deployment of Dell UnityVSA called Dell UnityVSA HA which
provides greater resiliency against disaster. Lastly, there is a cloud-specific deployment of
Dell Unity called Dell Unity Cloud Edition that customers can leverage for file
synchronization and disaster recovery operations in the cloud. More information about
these available virtual deployments can be found in the sections Dell UnityVSA and Dell
Unity Cloud Edition.

Additionally, the system limits will change depending on the Dell Unity model. More
information about system limits can be found in the Dell Unity Simple Support Matrix on E-
Lab Navigator.

Note that this white paper focuses specifically on the Unity XT Series systems
which include the Unity XT 380/F, 480/F, 680/F, and 880/F models. For more information
about other Dell Unity models, see the white paper Dell Unity: Introduction to the Platform
on the Dell Technologies Info Hub.


Table 1. Dell Unity X00/F Model Comparison

Model                 Dell Unity 300/300F  Dell Unity 400/400F  Dell Unity 500/500F  Dell Unity 600/600F
Processor (per SP)    Intel E5-2603 v3     Intel E5-2630 v3     Intel E5-2660 v3     Intel E5-2680 v3
                      6c/1.6GHz            8c/2.4GHz            10c/2.6GHz           12c/2.5GHz
Memory                24 GB / SP           48 GB / SP           64 GB / SP           128 GB / SP
Max drives            150                  250                  500                  1000
Max capacity (raw)    2.34 PB              3.9 PB               7.8 PB               9.7 PB

Table 2. Dell Unity X50F Model Comparison

Model                 Dell Unity 350F      Dell Unity 450F      Dell Unity 550F      Dell Unity 650F
Processor (per SP)    Intel E5-2603 v4     Intel E5-2630 v4     Intel E5-2660 v4     Intel E5-2680 v4
                      6c/1.7GHz            10c/2.2GHz           14c/2.0GHz           14c/2.4GHz
Memory                48 GB / SP           64 GB / SP           128 GB / SP          256 GB / SP
Max drives            150                  250                  500                  1000
Max capacity (raw)    2.4 PB               4.0 PB               8.0 PB               16.0 PB

Table 3. Unity XT X80/F Model Comparison

Model                 Unity XT 380/380F    Unity XT 480/480F    Unity XT 680/680F    Unity XT 880/880F
Processor (per SP)    1x Intel E5-2603 v4  2x Intel Xeon        2x Intel Xeon        2x Intel Xeon
                      6c/1.7GHz            Silver 4108          Silver 4116          Gold 6130
                                           8c/1.8GHz            12c/2.1GHz           16c/2.1GHz
Memory                64 GB / SP           96 GB / SP           192 GB / SP          384 GB / SP
Max drives            500                  750                  1000                 1500
Max capacity (raw)    2.4 PB               4.0 PB               8.0 PB               16.0 PB

Drive Model Comparison


Multiple drive types are supported on the Dell Unity system. All Flash models support
Flash drives, while Hybrid Dell Unity models support Flash, SAS, and NL-SAS drives. All
drives operate at 12Gb/s speeds. SAS and NL-SAS drives utilize a 4KB drive formatting
size, while Flash drives utilize a 520-byte block size. A list of all supported drives can be
found on Dell Online Support.

Data-in-Place Conversions
Dell Unity OE version 5.2 introduced the ability to perform both offline and online
data-in-place (DIP) conversions, which allow users to convert physical Unity XT 480/F and 680/F


systems to any higher model of the same type without losing any data or system
configurations. Unity XT 380/F systems are exempt from DIP conversions because the
Unity XT 380/F systems use a different physical chassis than the 480/F, 680/F, and 880/F
models. The DIP process involves swapping the storage processors in a given system
with new storage processors of a higher model while reusing the same I/O modules,
SFPs, and power supplies from the replaced storage processors. For Unity XT systems
that use low-line power (100v-120v) and are being upgraded to an 880/F model, a step-up
transformer is required since Unity XT 880/F systems only support high-line power (200v-
240v). If installing a step-up transformer within a rack, the step-up transformer will require
additional rack space.

This conversion process supports both offline and online procedures and is fully customer
installable. The estimated time for a full data-in-place conversion is 150 minutes. For an
online conversion, each storage processor is upgraded one at a time and data remains
accessible during the procedure. For an offline conversion, data will be inaccessible
during the procedure as the system is completely powered down and both storage
processors are upgraded simultaneously. Typically, the offline conversion will complete
faster as both storage processors upgrade simultaneously. Customers can choose online
or offline conversion based on their preference. The target model must be the same type
as the source model. For example, you can convert from a Unity XT 480 to Unity XT 880,
but not from a Unity XT 480 to a Unity XT 880F system.
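
The eligibility rules above (380/F exempt, upward-only conversions within the 480/680/880 ladder, and no crossing between hybrid and all-flash types) can be captured in a short helper. This is an illustrative paraphrase of the text, not an official support matrix.

```python
# Upgrade order within each family; hybrid and all-flash share the same ladder.
DIP_LADDER = ["480", "680", "880"]

def dip_conversion_allowed(source: str, target: str) -> bool:
    """Return True if a data-in-place conversion from source to target is
    permitted under the rules described in the text: 380/F systems are exempt
    (different chassis), the target must be a strictly higher model, and the
    target must be the same type (hybrid vs. all-flash 'F') as the source."""
    if source.endswith("F") != target.endswith("F"):
        return False                      # cannot cross hybrid <-> all-flash
    src_base = source.rstrip("F")
    tgt_base = target.rstrip("F")
    if src_base not in DIP_LADDER[:-1]:   # 380 exempt; 880 is already the top
        return False
    if tgt_base not in DIP_LADDER:
        return False
    return DIP_LADDER.index(tgt_base) > DIP_LADDER.index(src_base)
```

For example, `dip_conversion_allowed("480", "880")` is True, while `dip_conversion_allowed("480", "880F")` is False because it crosses from a hybrid to an all-flash model.
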

For more information about the Dell Unity and Unity XT model data-in-place conversions,
see the technical guide titled Dell Unity Family Data-in-Place Conversion Guide on Dell
Online Support.

I/O Module Conversions


Dell Unity OE version 5.2 introduced the ability to perform an online conversion of the
16Gb Fibre Channel I/O module to the 32Gb Fibre Channel I/O module. The 32Gb I/O
module was introduced in Dell Unity OE version 5.1. The I/O module conversion feature
allows customers to upgrade their existing 16Gb Fibre Channel I/O modules and benefit
from a 32Gb Fibre Channel environment while data remains online and accessible. The
process involves replacing the existing I/O module one storage processor at a time with
the new I/O module. The procedure is Command-Line Interface (CLI) driven using the
svc_change_hw_config service script and it is recommended to use Dell Deployment
Services to perform the upgrade on behalf of the customer. The upgrade procedure is
supported for Unity XT systems, including the 380/F, 480/F, 680/F, and 880/F models.

Disk Processor Enclosure (DPE) – 380/F

Dell Unity’s Disk Processor Enclosure (DPE) for Unity XT Series models utilizes a 25-drive
2U DPE with 2.5” drives. Note, though, that the Unity XT 380/F uses a different physical
chassis than the 480/F, 680/F, and 880/F models. The following figures and related
information are specific to the 380/F model. For information about the DPE for the 480/F,
680/F, and 880/F models, see section 3.3 titled Disk Processor Enclosure (DPE) – 480/F,
680/F, 880/F.


Figure 2. 25-Drive 2U DPE (380/F)

For 380/F systems, on the front of the DPEs (see Figure 2) are LEDs for both the
enclosure and drives to indicate status and faults. The first four drives of the DPE are
known as system drives and contain copies of data used by the operating environment.
While they can be used in Pools to hold user data, the entire formatted capacity of the
system drives will not be available as some space is reserved for the system. These
drives should not be moved within the DPE or relocated to another enclosure and should
be replaced immediately in the event of a fault. A system drive cannot be used as a
traditional pool hot spare for a non-system drive. For this reason, the minimum number of
drives in a system is five with system drives configured in a RAID 1/0 (1+1 or 2+2)
configuration including a non-system drive traditional pool hot spare.

The rear of the DPE reveals the Storage Processors (SP) and their on-board connectivity.
Each Storage Processor has 2x 12Gb SAS ports, used for connecting additional storage
and each SAS port has a 4-lane configuration. For front-end connectivity, the SPs have 2x
10GbE BaseT ports which can auto-negotiate between 10Gb/1Gb/100Mb, and 2x
Converged Network Adapter (CNA) ports. These CNA ports can be configured to serve
16Gb/8Gb/4Gb Fibre Channel using either multi-mode or single mode FC SFPs, 10GbE
Optical using SFP+ connectors or TwinAx cables in active or passive mode, or 1GbE
BaseT using RJ45 SFPs. For optical connections, the CNAs feature full iSCSI offload
which relieves the Storage Processor from handling TCP/IP network stack operations. For
management and service, each SP has a dedicated 1GbE BaseT management port and a
dedicated 1GbE BaseT service port; both ports operate at 1Gb/100Mb/10Mb speeds. In
Dell Unity OE version 5.1, management port settings can be customized to match the
environment by manually changing MTU, port speed and/or duplex settings. The range of
these settings include MTU of 1280-9000, port speeds of 1Gbps, 100Mbps, or 10Mbps,
and advertised duplex of full, half, or auto. These settings can be changed using the
svc_network service command.
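
The supported ranges quoted above (MTU 1280-9000, port speeds of 1Gbps/100Mbps/10Mbps, duplex of full/half/auto) can be sanity-checked before making the change on the array. The helper below is illustrative only; the actual change is still made with the svc_network service command.

```python
VALID_SPEEDS = {"1Gbps", "100Mbps", "10Mbps"}
VALID_DUPLEX = {"full", "half", "auto"}

def validate_mgmt_port_settings(mtu: int, speed: str, duplex: str) -> list:
    """Return a list of problems with the requested management-port
    settings, based on the ranges stated in the text (empty list if OK)."""
    problems = []
    if not 1280 <= mtu <= 9000:
        problems.append(f"MTU {mtu} outside supported range 1280-9000")
    if speed not in VALID_SPEEDS:
        problems.append(f"unsupported port speed {speed!r}")
    if duplex not in VALID_DUPLEX:
        problems.append(f"unsupported duplex setting {duplex!r}")
    return problems
```

For example, `validate_mgmt_port_settings(9000, "1Gbps", "auto")` returns an empty list, while an out-of-range MTU such as 1000 produces a problem entry.
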

The DPE on 380/F systems is internally connected to Bus 0 which is the same bus that
the first SAS expansion port is connected to. Therefore, the DPE is recognized by the
system as “Bus 0 Enclosure 0” while the first DAE connected to the first SAS expansion
port would be “Bus 0 Enclosure 1”. Furthermore, this means that the twenty-five drives in
front of the DPE are internally recognized as “Bus 0 Enclosure 0 Drive 0” – “Bus 0
Enclosure 0 Drive 24”.

For a detailed description of the hardware on Unity XT 380/F systems, see the Unity XT
Hardware Information Guide on Dell Online Support.


Figure 3. Rear of DPE (380/F)

Unity XT 380 and Unity XT 380F Storage Processors manufactured in the second half of
2022 and later have been redesigned and no longer include 10GbE BaseT embedded
ports. For the 380 models, this removes 2x 10GbE BaseT ports per SP or 4x total per
system. 10GbE BaseT front-end connectivity is still supported by using the 4-port 10GbE
I/O module described in the section I/O Module Options – 380/F, 480/F, 680/F, 880/F of
this document. Unisphere can be used to confirm if the 10GbE BaseT embedded ports
are present on a system. In Unisphere, navigate to SYSTEM > System View > Enclosures
> Rear and review the ports on the DPE.

Storage Processor – 380/F


The Unity XT 380/F system is powered by an Intel® Xeon® Processor utilizing Intel’s
Broadwell architecture, with six cores per Storage Processor. Each purpose-built system
contains two Storage Processors (SPs), which are used for high availability and load
balancing purposes.

M.2 SSD – 380/F


An M.2 SSD device is located inside each Storage Processor and serves as a backup
device in the event of an SP failure (Figure 4). In the event of an SP failure, the memory
contents of the SP’s cache are written to the M.2 SSD device so it can be recovered once
the SP is restored. If the M.2 SSD device itself encounters a failure, cache data can be
recovered from the peer Storage Processor. The M.2 SSD device also holds the boot
image that is used to run the operating environment.

Figure 4. M.2 SSD Device (380/F)

Cooling Modules – 380/F


Cooling modules or fan packs (Figure 5) are used to provide cool airflow to the Storage
Processor’s interior. There are five counter-rotating cooling modules in a Storage
Processor for 380/F systems. A Storage Processor can tolerate a single cooling module
fault; in which case the surviving fans will increase their speed to compensate for the
faulted module. If a second cooling module faults, the Storage Processor will gracefully
save cache content and shut down to prevent overheating.


Figure 5. Cooling Module (380/F)

Battery Backup Unit (BBU) – 380/F


The Battery Backup Unit (BBU) provides power to the Storage Processor if cabinet power
is lost. The BBU (Figure 6) is designed to power the SP long enough for the system to
store SP write cache content to the M.2 SSD device before powering down. The BBU
includes sensors which communicate its charge and health status to the SP. In the event
the BBU is discharged, the SP will disable write cache until the BBU has recharged. In the
event the BBU has faulted or cannot sustain enough charge, an alert will be generated.

Figure 6. Battery Backup Unit (380/F)

Baffle – 380/F
The baffle (Figure 7) directs airflow within the Storage Processor. Cool air drawn in from
the cooling modules is directed to the processor and DIMMs for effective thermal
management.


Figure 7. Baffle (380/F)

Dual-Inline Memory Module (DIMM) – 380/F


There are four Dual-Inline Memory Module (DIMM) slots on a Storage Processor for a
380/F system, filled with four 16GB DIMMs. An example DIMM is
represented in Figure 8. DIMMs use error-correcting code (ECC) to protect against data
corruption. If a DIMM is faulted, the system will boot into Service Mode so the faulted
DIMM can be replaced.

Figure 8. Dual-Inline Memory Module (DIMM) (380/F)

Power Supply – 380/F


There are two power supply modules in a Disk Processor Enclosure (DPE), one per
Storage Processor. A single power supply is capable of powering the entire DPE. Power
supplies can be replaced without having to remove the Storage Processor or shutting
down the system. Power supplies are offered for AC power. In Dell Unity OE version 5.2,
a NEBS-compliant DC power supply variant was introduced for the Dell Unity XT 380/F
and 480/F models; DC power supplies are not available for the 680/F and 880/F. For
Unity XT 380 and 480 DC-powered systems, 600GB and 1.8TB 10K SAS NEBS-certified
drives as well as 800GB 3WPD SSD NEBS-certified drives are available. For Unity XT
380F and 480F DC-powered systems, 1.92TB and 3.84TB 1WPD SSD NEBS-certified
drives are available.

For more information about Dell Unity DC-powered systems, see the technical paper
called Dell Unity DC-Powered Enclosures Installation & Operation Guide.


Figure 9. Power Supply (380/F)

Disk Processor Enclosure (DPE) – 480/F, 680/F, 880/F

Dell Unity’s Disk Processor Enclosure (DPE) for Unity XT Series models is a 25-drive 2U
enclosure using 2.5” drives. Note, though, that the Unity XT 480/F, 680/F, and 880/F
models use a different physical chassis than the 380/F. The following figures and related
information are specific to the 480/F, 680/F, and 880/F models. For information about the
DPE for the 380/F model, see the section Disk Processor Enclosure (DPE) – 380/F.

Figure 10. 25-Drive 2U DPE (480/F, 680/F, 880/F)

For 480/F, 680/F, and 880/F systems, on the front of the DPEs (see Figure 10) are LEDs
for both the enclosure and drives to indicate status and faults. The first four drives of the
DPE are known as system drives, and contain data used by the operating environment.
While they can be used in Pools to hold user data, the entire formatted capacity of the
system drives will not be available as some space is reserved for the system. These
drives should not be moved within the DPE or relocated to another enclosure and should
be replaced immediately in the event of a fault. A system drive cannot be used as a
traditional pool hot spare for a non-system drive. For this reason, the minimum number of
drives in a system is 5: the system drives configured in a RAID 1/0 (1+1 or 2+2)
configuration plus a non-system drive serving as a traditional pool hot spare.
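The minimum-configuration rule above reduces to simple arithmetic; the constant names in this sketch are illustrative, not taken from any Dell software.

```python
# Minimum supported configuration, per the rule described above:
# the four DPE system drives (RAID 1/0) plus at least one
# non-system drive to serve as a traditional pool hot spare.
SYSTEM_DRIVES = 4      # first four slots of the DPE
POOL_HOT_SPARES = 1    # must be a non-system drive
MIN_DRIVES = SYSTEM_DRIVES + POOL_HOT_SPARES

print(MIN_DRIVES)  # 5
```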

The rear of the DPE reveals the Storage Processors (SP) and their connectivity options
(see Figure 11). Each SP has 1x 1GbE management port, 1x 1GbE service port, 1x 4-port
mezzanine card (optional), 2x I/O module slots (optional), and 2x 12Gb SAS ports used
for connecting additional storage; each SAS port has a 4-lane configuration. For
management and service, each SP has a dedicated 1GbE BaseT management port and a
dedicated 1GbE BaseT service port; both ports can operate at 1Gb/100Mb/10Mb speeds.
In Dell Unity OE version 5.1, management port settings can be customized to match the
environment by manually changing the MTU, port speed, and/or duplex settings. The
range of these settings includes an MTU of 1280 through 9000, port speeds of 1Gbps,
100Mbps, or 10Mbps, and an advertised duplex of full, half, or auto. These settings can
be changed using the svc_network service command. For front-end connectivity, each
SP can be configured with a 4-port mezzanine card, which can be a 4-port 25GbE Optical, 4-port


10GbE BaseT, or a blank filler, depending on how the system is ordered. For the 4-port 25GbE
Optical option, the port speed is based on the SFP installed in each of the ports. You can
mix the types of SFPs on the same card as needed. For the 4-port 10GbE option, the
ports can auto-negotiate between 10Gb/1Gb/100Mb speeds as needed. The 4-port card
slots can be populated at a later point in time if the system is ordered with blank fillers for
those slots.

The DPE on 480/F, 680/F, and 880/F systems is internally connected to Bus 99, which is
a separate bus from the one the first SAS expansion port is connected to, Bus 0.
Therefore, the DPE is recognized by the system as “Bus 99 Enclosure 0”, while the first
DAE connected to the first SAS expansion port is “Bus 0 Enclosure 0”. This differs from
X00/F, X50F, and 380/F systems. It also means that the twenty-five drives in the front of
the DPE on 480/F, 680/F, and 880/F systems are internally recognized as “Bus 99
Enclosure 0 Drive 0” through “Bus 99 Enclosure 0 Drive 24”, although in Unisphere the
drives are shown as “DPE Drive 0” through “DPE Drive 24”.
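As a sketch of the addressing scheme just described, a hypothetical helper (not part of any Dell tool) could map a DPE drive slot to both its internal name and its Unisphere display name:

```python
def internal_drive_id(slot: int) -> str:
    """Internal name: the DPE sits on its own Bus 99 as Enclosure 0."""
    if not 0 <= slot <= 24:
        raise ValueError("the 25-drive DPE has slots 0-24")
    return f"Bus 99 Enclosure 0 Drive {slot}"

def unisphere_drive_name(slot: int) -> str:
    """The same slot as displayed in Unisphere."""
    return f"DPE Drive {slot}"

print(internal_drive_id(0))      # Bus 99 Enclosure 0 Drive 0
print(unisphere_drive_name(24))  # DPE Drive 24
```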

For a detailed description of hardware for 480/F, 680/F, and 880/F systems, see the Unity
XT Hardware Information Guide on Dell Online Support.

Figure 11. Rear of DPE (480/F, 680/F, 880/F)

Storage Processor – 480/F, 680/F, 880/F


The purpose-built Unity XT 480/F, 680/F, and 880/F systems are powered by Intel®
Xeon® processors based on Intel’s Skylake architecture. Depending on the system
model, the core count varies between 8 and 18 cores per CPU, with two CPUs per
Storage Processor. Each purpose-built system contains two Storage Processors (SPs),
which are used for high availability and load balancing purposes.

M.2 SSD – 480/F, 680/F, 880/F


There are two M.2 SSD devices, one connected using SATA protocol and one connected
using NVMe protocol, located inside each Storage Processor for 480/F, 680/F, and 880/F
systems. The devices serve two separate purposes: one acts as a backup device in the
event of an SP failure (Figure 12), and the other serves as the boot device for the system
operating environment (Figure 13). In the event of an SP failure, the memory contents of the SP’s
cache are written to the M.2 NVMe SSD device so the data can be recovered once the SP
is restored. If the M.2 NVMe SSD device itself encounters a failure, cache data can be
recovered from the peer Storage Processor. The M.2 SATA SSD device holds the boot
image that is used to boot the operating environment.


Figure 12. M.2 NVMe SSD Device (480/F, 680/F, 880/F)

Figure 13. M.2 SATA SSD Device (480/F, 680/F, 880/F)

Cooling Modules – 480/F, 680/F, 880/F


Cooling modules or fan packs are used to provide cool airflow to the Storage Processor’s
interior. There are six counter-rotating cooling modules in a Storage Processor for 480/F,
680/F, and 880/F systems. A Storage Processor can tolerate a single cooling module
fault; the surviving fans will increase their speed to compensate for the faulted module. If
a second cooling module faults, the Storage Processor will gracefully save write cache
content and shut down.

Figure 14. Cooling Module (480/F, 680/F, 880/F)


Battery Backup Unit (BBU) – 480/F, 680/F, 880/F


The Battery Backup Unit (BBU) provides power to the Storage Processor if cabinet power
is lost. The BBU is designed to power the SP long enough for the system to store SP
cache content to the M.2 SSD devices before powering down. The BBU includes sensors
which communicate its charge and health status to the SP. In the event the BBU is
discharged, the SP will disable cache until the BBU has recharged. In the event the BBU
has faulted or cannot sustain enough charge, an alert will be generated.

Figure 15. Battery Backup Unit (480/F, 680/F, 880/F)

Baffle – 480/F, 680/F, 880/F


The baffle directs airflow within the Storage Processor. Cool air drawn in from the cooling
modules is directed to the processor and DIMMs for effective thermal management.

Figure 16. Baffle (480/F, 680/F, 880/F)


Dual-Inline Memory Module (DIMM) – 480/F, 680/F, 880/F


There are twenty-four Dual-Inline Memory Module (DIMM) slots on a Storage Processor.
Up to 12 of these slots are populated, depending on the model. An example DIMM is
represented in Figure 17. DIMMs range from 16GB to 32GB in size and use error-
correcting code (ECC) to protect against data corruption. If a DIMM is faulted, the system
will boot into Service Mode so the faulted DIMM can be replaced.

Figure 17. Dual-Inline Memory Module (DIMM) (480/F, 680/F, 880/F)

Power Supply – 480/F, 680/F, 880/F


There are two power supply modules in a Disk Processor Enclosure (DPE). A single
power supply is capable of powering the entire DPE. Power supplies can be replaced
without having to remove the Storage Processor. Power supplies are offered for AC
power only.

Dell Unity OE version 5.2 introduced DC variant power supplies for Dell Unity XT 380/F
and 480/F models which are NEBS compliant. DC power supplies are not available for the
680/F and 880/F. For Unity XT 380 and 480 DC powered systems, 600GB and 1.8TB 10k
SAS NEBS drives as well as 800GB 3WPD SSD NEBS certified drives are available. For
Unity XT 380F and 480F DC powered systems, 1.92TB and 3.84TB 1WPD SSD NEBS
drives are available.

For more information about Dell Unity DC-powered systems, see the technical paper
called Dell Unity DC-Powered Enclosures Installation & Operation Guide.

Figure 18. Power Supply (480/F, 680/F, 880/F)

I/O Module Options – 380/F, 480/F, 680/F, 880/F

Each Storage Processor on Unity XT systems can support up to two I/O modules, which
provide additional connectivity. For the two Storage Processors in a DPE, the I/O
modules configured must match between SPs. Note that Fibre Channel over Ethernet
(FCoE) and Fibre Channel over IP (FCIP) are not supported on the Dell Unity platform.


The Unity XT Series systems support the following I/O modules:


• 12Gb SAS (Unity XT 480/F, 680/F, 880/F only)
• 25GbE Optical (4-Port)
• 16Gb Fibre Channel (4-Port)
• 10GbE BaseT (4-Port)
• 32Gb Fibre Channel (4-port)
The 12Gb SAS (4-Port) I/O module is used to provide additional backend connectivity to
Disk Array Enclosures. Each SAS port supports up to 10 DAEs and up to a maximum of
250 drives. This module is required when utilizing high-bandwidth x8 SAS lane
connections for the 80-drive DAE.
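The expansion limits quoted above are mutually consistent: ten fully populated 25-drive DAEs exactly reach the 250-drive per-port ceiling.

```python
# Back-end expansion limits per 12Gb SAS port, as stated above.
DAES_PER_SAS_PORT = 10        # up to 10 DAEs per SAS port
DRIVES_PER_25_DRIVE_DAE = 25  # fully populated 2U DAE
MAX_DRIVES_PER_SAS_PORT = 250

print(DAES_PER_SAS_PORT * DRIVES_PER_25_DRIVE_DAE)  # 250
```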

Figure 19. 12Gb SAS I/O Module

The 16Gb Fibre Channel (4-Port) I/O module offers frontend connectivity at 16Gbps
speeds and can auto-negotiate to 8Gbps and 4Gbps speeds depending on the SFPs
installed. There are ordering options for single-mode and multi-mode SFP
configurations, depending on the use case in a datacenter environment. Single-mode
SFPs only operate at 16Gb speeds and are not compatible with multi-mode connections.
Single-mode connections are usually used for long distance synchronous replication use
cases to remote sites while multi-mode is typically used for transmitting data over shorter
distances in local-area SAN networks and connections within buildings. For upgrading a
16Gb Fibre Channel I/O module to a 32Gb Fibre Channel I/O module, see the section I/O
Module Conversions for more information.

Figure 20. 16Gb Fibre Channel I/O Module

The 10GbE BaseT (4-Port) I/O module operates at speeds up to 10Gb/s, is used for
frontend host access, and supports both iSCSI and NAS protocols. The I/O module can


also auto-negotiate to 1Gbps and 100Mbps speeds as needed. The ports on an individual
Ethernet I/O module, and the on-board Ethernet ports or Mezz card Ethernet ports
support link aggregation, fail safe networking (FSN), and VLAN tagging. Link aggregation
can be configured across all available Ethernet ports as needed.

Figure 21. 10GbE BaseT I/O Module

The 25GbE Optical I/O module runs at a fixed speed of 25Gbps when 25Gb SFPs are
installed. The I/O module also supports 10Gb SFPs, which run at 10Gbps speeds. The
Optical I/O module ports support SFP+ and TwinAx (active or passive mode)
connections. Note that different SFPs and/or TwinAx cables can be mixed on the same
I/O module and are hot-swappable.

Figure 22. 25GbE Optical I/O Module

The 32Gb Fibre Channel (4-port) I/O module provides frontend host connectivity at
speeds up to 32Gbps with a variety of SFPs. The 32Gb multi-mode SFP is capable
of auto-negotiating to 32Gbps, 16Gbps, and 8Gbps speeds. Meanwhile, the 16Gb multi-mode
SFP is capable of auto-negotiating to 16Gbps, 8Gbps, and 4Gbps speeds. A single-mode
SFP is also supported which operates only at 16Gbps and is generally used for long
distance synchronous replication use cases. The 32Gb I/O module can have different
SFP types per port. For example, port 0 could have a 32Gbps SFP while ports 1-3 could
have a 16Gbps SFP so long as the SAN supports both speeds. When using multiple SFP
types, it is recommended to ensure the peer storage processor has the same SFPs
inserted into each port.


Figure 23. 32Gb Fibre Channel I/O Module

Disk Array Enclosure (DAE) Options – 380/F, 480/F, 680/F, 880/F

The purpose-built Unity XT Series systems have three different DAE configuration
options:
• 25-Drive 2U DAE using 2.5” drives
• 15-Drive 3U DAE using 3.5” drives
• 80-Drive 3U DAE using 2.5” drives
25-Drive, 2.5” 2U DAE
The 25-drive, 2.5” 2U DAE holds up to twenty-five 2.5” drives (Figure 24). The back of the
DAE includes LEDs to indicate power and fault status. There are also LEDs to indicate
bus and enclosure IDs.

Figure 24. 25-Drive 2.5” 2U DAE (Front)

The 25-drive 2.5” 2U DAE can be powered by AC and is attached to the DPE by mini-SAS
HD connectors (Figure 25).

Figure 25. 25-Drive 2.5” 2U DAE (Rear)

15-Drive, 3.5” 3U DAE


The 15-drive 3.5” 3U DAE is available for Unity XT Hybrid systems. It is powered by AC
power and attached to the DPE by mini-SAS HD connectors (Figure 26).


Figure 26. 15-Drive 3.5” 3U DAE (Front)

The back of the DAE includes LEDs to indicate power and fault status (Figure 27). There
are also LEDs to indicate bus and enclosure IDs.

Figure 27. 15-Drive 3.5” 3U DAE (Rear)

80-Drive, 2.5” 3U DAE


The 80-drive 2.5” 3U DAE is available for Unity XT Hybrid and All Flash systems. It is
powered by AC power and attached to the DPE by mini-SAS HD connectors (Figure 28).
A high-bandwidth x8-lane SAS connectivity option to the DPE is also available for models
that support the 4-port 12Gb SAS I/O module, which include the Unity XT 480/F, 680/F,
and 880/F. For supported drive types and sizes on the 80-drive DAE, see the Dell Unity
Drive Support Matrix on Dell Online Support.

In terms of operating power, the 80-drive DAE operates from 200 to 240V AC at 47 to 63
Hz with a max power consumption of 1,611 VA (1,564 W). For a full listing of power
requirements and related hardware information, see the Unity XT Hardware Information
Guide on Dell Online Support.
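The VA and W figures above imply the enclosure's worst-case power factor; a quick worked example of the relationship real power = apparent power x power factor:

```python
# 80-drive DAE maximum power draw, from the figures above.
max_apparent_power_va = 1611  # volt-amperes
max_real_power_w = 1564       # watts

power_factor = max_real_power_w / max_apparent_power_va
print(round(power_factor, 2))  # 0.97
```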


Figure 28. 80-Drive 2.5" 3U DAE

Dell UnityVSA
Dell Unity is offered in a Virtual Storage Appliance version known as Dell UnityVSA. Dell
UnityVSA is a Software Defined Storage (SDS) solution that runs atop the VMware ESXi
Server platform. Dell UnityVSA provides a flexible storage option for environments that do
not require purpose-built storage hardware such as test/development or remote
office/branch office (ROBO) environments. Users can quickly provision a Dell UnityVSA
on general purpose server hardware, which can result in reduced infrastructure costs and
a quicker rate of deployment.

In Dell Unity OE version 4.5, a High Availability (HA) version of the Dell UnityVSA was
introduced, also known as Dell UnityVSA Dual-SP. Dell UnityVSA Dual-SP is an
enhanced version of the single-SP Dell UnityVSA solution that adds HA functionality:
Dell UnityVSA Dual-SP can recover from an SP or host failure, which significantly
increases the system’s applicable use case scenarios and enables non-disruptive
upgrades (NDU). Dell UnityVSA Dual-SP is only available with
Professional Edition (PE) licenses. In OE version 5.1, Professional Edition licenses come
in capacity choices of 10TB, 25TB, 50TB, or 350TB options. Additionally, the Dell
UnityVSA Dual-SP can be deployed as a 2-core CPU / 12GB memory per-SP or a 12-
core CPU / 96GB memory per-SP system.

Overview

Dell UnityVSA retains the ease of use and ease of management found in the purpose-built
Dell Unity product. Its feature set and data services are designed to be on par with
the rest of the Dell Unity family. There are, however, some differences in functionality
support, which stem from the virtual nature of the Dell UnityVSA deployment.


Dell UnityVSA Hardware Requirements

Dell UnityVSA can run on any server that supports VMware ESXi and meets minimum
hardware requirements. If local storage is used, a hardware RAID controller on the ESXi
server is recommended for configuring redundant storage for Dell UnityVSA. If storage is
provided from a redundant storage system or server SAN, a RAID controller on the ESXi
server is not required. A full description of the minimum server requirements for a single
Dell UnityVSA instance is detailed in Table 4.

Table 4. Dell UnityVSA Single-SP Server and VM Requirements

ESXi requirements
• ESXi host configuration – Minimum: ESXi 6.5+; Recommended: ESXi 6.5+
• Hardware processor – Minimum: Xeon E5 Series dual-core CPU, 64-bit x86 Intel 2GHz+ (SSE4.2 or greater); Recommended: Xeon Silver 4110 or higher
• Hardware memory – Minimum: 20GB for ESXi 6.5; Recommended: 36GB for ESXi 6.5+
• Hardware network – Minimum: 1x 1GbE; Recommended: 1x 10GbE (in both cases, management and IO traffic go through the same physical port)
• Hardware RAID – Minimum and Recommended: RAID controller with 512MB NV cache, battery backed recommended
• Disk – Minimum: any disk type as system disks; Recommended: SSD as system disks

Datastore requirements
• VMware datastore (NFS and VMFS supported) – Minimum: no particular requirement; Recommended: full-SSD datastore

Dell UnityVSA SP requirements
• Virtual processor cores – Minimum and Recommended: 2 (2GHz+)
• Virtual system memory – Minimum and Recommended: 12GB
• Virtual network adapters – Minimum and Recommended: 6 (4 adapters for I/O, 1 for Unisphere, and 1 for system use)
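A pre-deployment check could encode the Table 4 minimums as data; the dictionary keys and function below are illustrative assumptions for such a script, not part of any Dell tooling.

```python
# Hypothetical pre-deployment check encoding the Table 4 minimums.
UNITYVSA_MINIMUMS = {
    "esxi_version": 6.5,     # ESXi 6.5+
    "host_memory_gb": 20,    # 20GB minimum for ESXi 6.5
    "virtual_cores": 2,      # 2 virtual cores at 2GHz+
    "vm_memory_gb": 12,      # 12GB per UnityVSA SP
    "network_adapters": 6,   # 4 I/O + 1 Unisphere + 1 system use
}

def meets_minimums(host: dict) -> bool:
    """True if every measured value meets its Table 4 minimum."""
    return all(host.get(key, 0) >= floor
               for key, floor in UNITYVSA_MINIMUMS.items())

lab_host = {"esxi_version": 6.7, "host_memory_gb": 36,
            "virtual_cores": 2, "vm_memory_gb": 12,
            "network_adapters": 6}
print(meets_minimums(lab_host))  # True
```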

Dell UnityVSA HA has similar physical requirements to Dell UnityVSA Single-SP on a per-
SP basis. In terms of VMware requirements, a vCenter is mandatory, in addition to the
configuration of internal networks. To comply with best practices, Dell UnityVSA HA
requires a separate ESXi host for each SP that is deployed. The white paper titled Dell
UnityVSA provides further detail on the best practices and the exact VMware
requirements. A full description of recommended server requirements for both the 2-core
and 12-core CPU deployments of Dell UnityVSA HA is outlined below.


Table 5. Dell UnityVSA HA Hardware Requirements

ESXi requirements
• ESXi host configuration – 2-core and 12-core: ESXi 6.5+ with both SPs on separate ESXi hosts
• Hardware processor – 2-core and 12-core: Xeon Silver 4110 or higher
• Hardware memory – 2-core: 36GB per host for ESXi 6.5 or later; 12-core: 120GB per host for ESXi 6.5 or later
• Hardware network – 2-core and 12-core: 3x 10GbE (1 physical port for SP management and IO ports, 2 for the inter-SP network)
• Hardware RAID – 2-core and 12-core: RAID card with 512MB NV cache, battery backed recommended
• Disk – 2-core: no particular disk type; 12-core: SSD as system disks

Switch requirements
• Hardware switch – 2-core and 12-core: 10GbE port support

Datastore requirements
• VMware datastores (NFS and VMFS supported) – 2-core and 12-core: one full-SSD shared datastore and a separate full-SSD local swap datastore

Dell UnityVSA individual SP requirements
• Virtual processor cores – 2-core: 2 (2GHz+) for each SP; 12-core: 12 (2GHz+) for each SP
• Virtual system memory – 2-core: 12GB for each SP; 12-core: 96GB for each SP
• Virtual network adapters – 2-core and 12-core: 9 for each SP (4 ports for I/O, 1 for Unisphere, 1 for system use, and 3 for internal communication)
• vCenter – Required for both deployments
• VLANs – 3 for both deployments (1 for Common Messaging Interface (CMI) SP-to-SP communication, 1 for Heartbeat 0, and 1 for Heartbeat 1); VLANs must be unique and not used elsewhere on the network

For more information about the Dell UnityVSA and Dell UnityVSA HA, see the white paper
titled Dell UnityVSA available on the Dell Technologies Info Hub.

Dell Unity Cloud Edition


As customers select a cloud-operating model to support their applications, elasticity and
scalability of public clouds and enterprise file capabilities such as tiering, quotas, and
snapshots are top requirements. Customers are looking to leverage the cloud for file
synchronization and disaster recovery operations.


Dell Unity Cloud Edition addresses these requirements with support for VMC (VMware
Cloud) on AWS (Amazon Web Services). Dell Unity Cloud Edition can be easily deployed
in a VMware Cloud SDDC (Software-Defined Data Center) to provide native file services
such as NFS and SMB. Dell Unity Cloud Edition also enables disaster recovery between
on-premises Dell Unity systems and VMware Cloud-based appliances.

Dell Unity Cloud Edition is a virtualized storage appliance that has a rich feature set,
comparable to the rest of the Dell Unity family. Its ease of use and quick deployment
time make Dell Unity Cloud Edition an ideal candidate for test/dev environments or
production deployments in VMC on AWS.

Dell Unity Cloud Edition supports the same deployment options as Dell UnityVSA. In OE
version 5.1 this includes the increased capacity limit of up to 350TB and the 2-core /
12GB memory and 12-core / 96GB memory Dual-SP deployment options.

For more information about Dell Unity Cloud Edition and its benefits, see the paper titled
Dell Unity Cloud Edition with VMware Cloud on AWS on the Dell Technologies Info Hub.

Conclusion
The Dell Unity product family sets a new standard for storage by delivering compelling
simplicity, a modern design, and enterprise features at an affordable price and compact
footprint. Dell Unity meets the needs of resource-constrained IT professionals in both
large and small companies. The purpose-built Dell Unity system is offered in All Flash and
Hybrid models, providing flexibility for differing use cases and budgets. The converged
offering through the Converged Infrastructure Portfolio delivers industry-leading
converged infrastructure powered by Dell Unity. The Dell UnityVSA and Dell Unity Cloud
Edition offer a dynamic deployment model that allows you to start for free and grow as
business needs evolve.

The Dell Unity system was designed with ease-of-use at the forefront. The modern design
of the management interfaces is built with best practices in mind, making it easy to
provision storage intelligently without having to micromanage every detail. A software
feature set built with the same mindset allows for automation and “set it and forget it” style
upkeep. Truly, an IT generalist can set up, configure, and manage a Dell Unity system
without needing to become a storage expert. A strong support ecosystem offers various
media for learning and troubleshooting, backed by the quality support model of the Dell
brand. Lastly, users looking to refresh their existing Dell infrastructure can use the
easy-to-use native migration capabilities of the Dell Unity platform.

With simplified ordering, all-inclusive software, new differentiated features, internet-
enabled management, and a modern design, Dell Unity is where powerful meets
simplicity.


References

Dell Technologies documentation

The following documentation on the Dell Technologies Info Hub provides other
information related to this document. Access to these documents depends on your login
credentials. If you do not have access to a document, contact your Dell Technologies
representative.
• Dell Unity: Best Practices Guide
• Dell Unity: Cloud Tiering Appliance (CTA)
• Dell Unity: Compression
• Dell Unity: Compression for File
• Dell Unity: Data at Rest Encryption
• Dell Unity: Data Integrity
• Dell Unity: Data Reduction
• Dell Unity: DR Access and Testing
• Dell Unity: Dynamic Pools
• Dell Unity: FAST Technology Overview
• Dell Unity: File-Level Retention (FLR)
• Dell Unity: High Availability
• Dell Unity: Introduction to the Platform
• Dell Unity: NAS Capabilities
• Dell Unity: MetroSync
• Dell Unity: MetroSync and Home Directories
• Dell Unity: MetroSync and VMware vSphere NFS Datastores
• Dell Unity: Migration Technologies
• Dell Unity: OpenStack Best Practices for Ocata Release
• Dell Unity: Performance Metrics
• Dell Unity: Replication Technologies
• Dell Unity: Snapshots and Thin Clones
• Dell Unity: Operating Environment (OE) Overview
• Dell Unity: Unisphere Overview
• Dell Unity: Virtualization Integration
• Dell UnityVSA
• Dell Unity Cloud Edition with VMware Cloud on AWS
• Dell Unity Data Reduction Analysis
• Dell Unity: Migrating to Dell Unity with SAN Copy


• Dell Unity Storage with Microsoft Hyper-V


• Dell Unity Storage with Microsoft SQL Server
• Dell Unity Storage with Microsoft Exchange Server
• Dell Unity Storage with VMware vSphere
• Dell Unity Storage with Oracle Databases
• Dell Unity 350F Storage with VMware Horizon View VDI
• Dell Unity: 3,000 VMware Horizon Linked Clone VDI Users
• Dell Storage with VMware Cloud Foundation

SPECIFICATION SHEET

DELL UNITY XT HFA AND AFA STORAGE
(DC POWER – NEBS* COMPLIANT)
Simplify the path to IT transformation and unlock the full potential of your data capital with Dell Unity XT storage arrays
that are designed for performance, optimized for efficiency, and built to simplify your multi-cloud journey. Unity XT arrays
feature up to 2X more IOPS for both HFAs and AFAs, more memory, and up to 50% more drives than previous Dell Unity
models. These cost-efficient storage systems are equipped with dual-active controllers and include a rich set of all-
inclusive enterprise-class software. Unity XT AFAs are available with a Future Proof guaranteed 3:1 data reduction rate
while the Unity XT HFAs are ideal for workloads that don’t require the speed and low latency of NVMe architectures.

Architecture
Unity XT storage systems implement an integrated unified architecture for block, file, and VMware vVols with concurrent
support for native NAS, iSCSI, and Fibre Channel protocols. Each system leverages dual-active storage processors, full
12Gb SAS back-end connectivity and Dell’s patented multicore architected operating environment to deliver unparalleled
performance & efficiency with multicloud interoperability. Additional storage capacity is added via Disk Array Enclosures
(DAEs).
*DC products comply with NEBS Level 3 and ETSI requirements and are tested to the following standards: GR-63-CORE, GR-1089-CORE & ETSI EN 300 386, EN 300 132-
2, EN 300 753, EN 300 019

Physical Specifications

• Min/Max Drive Count – 380/380F: Min. 6 SSDs or 10 HDDs / Max. 500; 480/480F: Min. 6 SSDs or 10 HDDs / Max. 750
• Array Enclosure – A 2U Disk Processor Enclosure (DPE) with twenty-five 2.5” drives
• Drive Enclosure (DAE – Disk Array Enclosure) – All models support 2.5” drives in 2U twenty-five-drive and 3U eighty-drive trays, and 3.5” drives in 3U fifteen-drive trays
• Standby Power System – Dell Unity systems are powered by 2 power supplies (PS) per DPE/DAE. Each power supply can provide power to the entire module if the peer PS has been removed or is faulted. DPE power during a power failure is provided by a Battery Backup Unit (BBU) module; the BBU is located within the SP enclosure and provides power to a single module (power zone)
• RAID Options – 1/0, 5, 6
• CPU per Array – 380/380F: 2 x Intel CPUs, 12 cores per array, 1.7GHz; 480/480F: 2 x dual-socket Intel CPUs, 32 cores per array, 1.8GHz
• System Memory/Cache per Array – 380/380F: 128 GB; 480/480F: 192 GB
• Max FAST Cache per Array (A) – 380/380F: up to 800 GB; 480/480F: up to 1.2 TB
• Total Cache – 380/380F: up to 928 GB; 480/480F: up to 1.39 TB
• Max Mezzanine cards per Array (B) – 380/380F: N/A; 480/480F: 2
• Max IO Modules per Array (C) – 4 on both models
• Embedded SAS IO Ports per Array – 4 x 4-lane 12Gb/s SAS ports for BE (back-end) connection on both models

DELL UNITY XT

© 2021 Dell Inc. or its subsidiaries.


• Optional SAS IO ports per Array – 380/380F: N/A; 480/480F: 8 x 4-lane or 4 x 8-lane 12Gb/s SAS ports (for BE connection)
• Base 12 Gb/s SAS BE Buses per Array – 2 x 4-lane on both models
• Max 12 Gb/s SAS BE Buses per Array – 380/380F: 2 x 4-lane; 480/480F: 6 x 4-lane, or 2 x 4-lane and 2 x 8-lane
• Max FE (front-end) Total Ports per Array (all types) – 380/380F: 20; 480/480F: 24
• Max Initiators per Array – 380/380F: 1,024; 480/480F: 2,048
• Max FC Ports per Array – 380/380F: 20; 480/480F: 16
• Embedded CNA ports per Array – 380/380F: 4 ports: 8/16Gb FC (D), 10GbE IP/iSCSI, or 1Gb RJ45; 480/480F: N/A
• 1Gbase-T/iSCSI Max Total Ports per Array – 380/380F: 20; 480/480F: 24
• 10/25 GbE/iSCSI Max Total Ports per Array – 380/380F: 20 (10GbE) or 16 (25GbE); 480/480F: 24
• Max Raw Capacity (E) – 380/380F: 2.4 PB; 480/480F: 4.0 PB
• Max SAN Hosts – 380/380F: 512; 480/480F: 1,024
• Max Number of Pools – 380/380F: 20; 480/480F: 30
• Max Number of LUNs per Array – 380/380F: 1,000; 480/480F: 1,500
• Max LUN Size – 256 TB on both models
• Max File Systems per Array – 380/380F: 1,000; 480/480F: 1,500
• Max File System Size – 256 TB on both models
• Max attached snapshots per Array (Block) – 380/380F: 1,000; 480/480F: 1,500
• OS Support – See the Dell Simple Support Matrix on dell.com

(A) Specific to Hybrid arrays.
(B) One mezzanine card per Storage Processor (SP), mirrored.
(C) Two IO Modules per Storage Processor (SP), mirrored.
(D) 16Gb available in both single mode and multimode.
(E) Maximum raw capacity will vary based on drive sizes available at time of purchase.
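The Total Cache figures above are the sum of system memory and the maximum FAST Cache; a quick consistency check of the two models:

```python
# Total cache = system memory + max FAST Cache (hybrid models).
arrays = {
    "380/380F": {"system_memory_gb": 128, "max_fast_cache_gb": 800},
    "480/480F": {"system_memory_gb": 192, "max_fast_cache_gb": 1200},
}

for model, a in arrays.items():
    total_gb = a["system_memory_gb"] + a["max_fast_cache_gb"]
    # 928 GB for the 380/380F, 1392 GB (~1.39 TB) for the 480/480F
    print(model, total_gb)
```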



Connectivity

Connectivity options are provided via mezzanine cards and IO modules, both for file
(NFS/SMB) connectivity and for block (FC and iSCSI) host connectivity (see the table
above for the number of modules supported per SP).

Connectivity Options

• Converged Network Adapter (CNA) Ports – Two embedded CNA ports (File & Block). On 380/380F systems only, there are 2 CNA ports per SP, which can be used for 8/16Gb FC, 10GbE IP/iSCSI, or 1Gb
• Four-Port 10Gbase-T Module (File & Block) – Mezzanine card* or IO Module. Four-port 10Gbase-T Ethernet IP/iSCSI module with four 10Gbase-T Ethernet ports with copper connection to an Ethernet switch
• Four-Port 10 Gb/s Optical Module (File & Block) – Mezzanine card* or IO Module. Four-port 10GbE IP/iSCSI module with a choice of SFP+ optical connection or active/passive twinax copper connection to an Ethernet switch
• Four-Port 25 Gb/s Optical Module (File & Block) – Mezzanine card* or IO Module. Four-port 25GbE IP/iSCSI module with a choice of SFP+ optical connection or passive twinax copper connection to an Ethernet switch
• Four-Port 32 Gb/s Fibre Channel Module (Block only) – Mezzanine card* or IO Module. Four-port FC module with four ports auto-negotiating to 4/8/16 or 8/16/32 Gbps; uses single-mode or multimode optical SFPs and OM2/OM3/OM4 cabling to connect directly to a host HBA or FC switch
• Four-Port 12 Gb/s SAS V3.0 Module* – IO Module. Four-port SAS module used for back-end storage (DAE) connectivity to Storage Processors. Each SAS port has 4 lanes per port at 12Gbps, delivering 48Gbps nominal throughput. Also available for an installed 80-drive DAE is 8-lane connectivity utilizing a pair of SAS ports to deliver high bandwidth for added performance

* For 480/480F models
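The 48Gbps figure quoted for the SAS module follows directly from the lane math:

```python
# Nominal per-port SAS throughput = lanes per port x per-lane rate.
lanes_per_sas_port = 4
gbps_per_lane = 12

print(lanes_per_sas_port * gbps_per_lane)  # 48 Gb/s nominal per port
```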

Maximum Cable Lengths


Shortwave optical OM4: 125 meters (16 Gb), 190 meters (8 Gb), 400 meters (4 Gb), and 500 meters (2 Gb)
Back-end (Drive) Connectivity
Each storage processor connects to one side of each of two redundant pairs of four-lane x 12 Gb/s Serial Attached SCSI (SAS) buses, providing continuous drive access to hosts in the event of a storage processor or bus fault. All models require four "system" drives and support a platform-specific maximum number of disks (see the Physical Specifications table above). 107 GB per system drive is consumed by the operating environment software and data structures on the Dell Unity XT 380 models, and 150 GB on the Dell Unity XT 480, 680, and 880 models.
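The system-drive overhead above reduces usable raw capacity by a fixed amount per model. A small sketch with a hypothetical helper; the per-drive figures come from the paragraph above:

```python
# Operating-environment overhead consumed on the four required system drives.
# Per-drive GB figures are per model family; helper name is illustrative.
OE_OVERHEAD_GB = {"380": 107, "380F": 107, "480": 150, "680": 150, "880": 150}
SYSTEM_DRIVES = 4

def total_oe_overhead_gb(model: str) -> int:
    """Total capacity (GB) lost to the OE across all four system drives."""
    return SYSTEM_DRIVES * OE_OVERHEAD_GB[model]

print(total_oe_overhead_gb("380"))  # 428 GB
print(total_oe_overhead_gb("480"))  # 600 GB
```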

Disk Array Enclosure (DAE)

25 X 2.5” Drive DAE
Drive Types Supported: FLASH & SAS
Controller Interface: 12 Gb SAS

Hybrid Systems: Supported Media

System Category      Type           Usage/Purpose  Nominal Capacity  Formatted Capacity*  Interface  DPE 25 Drive  25 X 2.5” Drive DAE
All-Flash or Hybrid  SSD (SAS)      Mixed Pool     800 GB            733.5 GB             12 Gb SAS  ✓             ✓
Hybrid               10K HDD (SAS)  Mixed Pool     600 GB            536.7 GB             12 Gb SAS  ✓             ✓
Hybrid               10K HDD (SAS)  Mixed Pool     1.8 TB            1650.8 GB            12 Gb SAS  ✓             ✓

*GB = Base2 GiB (GiB = 1024x1024x1024)
All drives are 520 bytes/sector.
All drives are non-SED. Data at Rest Encryption is done via the storage controller.
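The base-2 footnote can be made concrete. A rough sketch of the conversion, assuming the nominal capacity is quoted in base-10 bytes; the remaining gap to the listed 733.5 GiB formatted figure presumably reflects sector formatting and metadata overhead (an inference, not stated on the sheet):

```python
# Nominal drive capacities are base-10 bytes; the "GB" capacity columns on
# this sheet are base-2 GiB (1 GiB = 1024**3 bytes).
def base10_to_gib(nominal_bytes: float) -> float:
    return nominal_bytes / 1024**3

raw_gib = base10_to_gib(800e9)   # 800 GB nominal drive
print(round(raw_gib, 1))         # ~745.1 GiB before formatting
# The sheet lists 733.5 GiB formatted, i.e. roughly 1.6% overhead:
print(round(100 * (1 - 733.5 / raw_gib), 1))
```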

All-Flash Systems: Supported Media

System Category  Type       Usage/Purpose  Nominal Capacity  Formatted Capacity*  Interface  DPE 25 Drive  25 X 2.5” Drive DAE
All-Flash        SSD (SAS)  All-Flash      1.92 TB           1751.9 GB            12 Gb SAS  ✓             ✓
All-Flash        SSD (SAS)  All-Flash      3.84 TB           3503.9 GB            12 Gb SAS  ✓             ✓

*GB = Base2 GiB (GiB = 1024x1024x1024)
All drives are 520 bytes/sector.
All drives are non-SED. Data at Rest Encryption is done via the storage controller.

Dell Unity OE Protocols and Software Facilities

Support is provided for a wide variety of protocols and advanced features available via various software suites, plug-ins, drivers and packs.

Protocols and Facilities Supported
• Access-based Enumeration (ABE) for SMB protocol
• Address Resolution Protocol (ARP)
• Block Protocols: iSCSI, Fibre Channel (FCP SCSI-3)
• Container Storage Interface (CSI) Driver
• Controller based Data at Rest Encryption (D@RE), with self-managed keys
• DFS Distributed File System (Microsoft) as Leaf node or Standalone Root Server
• Direct Host Attach for Fibre Channel and iSCSI
• Dynamic Access Control (DAC) with claims support
• Fail-Safe Networking (FSN)
• Internet Control Message Protocol (ICMP)
• Kerberos Authentication
• Key Management Interoperability Protocol (KMIP) compliant external key manager for D@RE
• LDAP (Lightweight Directory Access Protocol)
• LDAP SSL
• Link Aggregation for File (IEEE 802.3ad)
• Network Lock Manager (NLM) v1, v2, v3, and v4
• Management & Data Ports, IPv4 and/or IPv6
• NAS Servers: Multi-protocol for UNIX and SMB clients (Microsoft, Apple, Samba)
• Network Data Management Protocol (NDMP) v1-v4, 2-way & 3-way
• Network Information Service (NIS) Client
• Network Status Monitor (NSM) v1
• Network Time Protocol (NTP) client
• NFS v3/v4 Secure Support
• NT LAN Manager (NTLM)
• Portmapper v2
• REST API: Open API that uses HTTP requests to provide management
• Restriction of Hazardous Substances (RoHS) compliance
• RSVD v1 for Microsoft Hyper-V protocol
• Simple Home Directory access for SMB
• SMI-S v1.6.1 compatible Dell Unity Block & File client
• Simple Mail Transfer Protocol (SMTP)
• Simple Network Management Protocol v2c & v3 (SNMP)
• Virtual LAN (IEEE 802.1q)
• VMware® Virtual Volumes (vVols) 2.0
• VMware® vRealize™ Orchestrator (vRO) Plug-in
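The REST API entry above refers to the array's HTTP management interface. A minimal request sketch, assuming the standard Unity REST conventions (the X-EMC-REST-CLIENT header and /api/types/.../instances collection paths); the host and credentials below are placeholders:

```python
# Minimal sketch of a Unity REST API collection query. Dell Unity expects
# the X-EMC-REST-CLIENT header on REST requests. Host/credentials are
# placeholder values, not real ones.
import base64
import urllib.request

def build_request(host: str, resource: str, user: str, password: str) -> urllib.request.Request:
    """Prepare a GET for a collection query, e.g. resource='system'."""
    url = f"https://{host}/api/types/{resource}/instances"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "X-EMC-REST-CLIENT": "true",      # required by the Unity REST API
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    })

req = build_request("unity.example.com", "system", "admin", "secret")
print(req.full_url)  # https://unity.example.com/api/types/system/instances
# urllib.request.urlopen(req) would return a JSON document describing the array.
```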

Security & Compliance (applies to all Dell Unity XT systems, except Dell UnityVSA)

Department of Defense Information Network Approved Products List (DODIN APL) – Dell Unity O.E. v5.2 Listed.
Common Criteria
Controller based Data at Rest Encryption (D@RE) with self-managed keys
KMIP compliant external key manager for D@RE
FIPS 140-2 Level 1 validation
IPv6 and dual stack (IPv4) modes of operation
Native SHA2 certificate
Security Technical Implementation Guide /Security Requirements Guide (STIG/SRG)
TLS 1.2 support and TLS 1.0/1.1 disablement
File-Level Retention: Enterprise FLR-E and Compliance FLR-C with requirements for SEC rule 17a-4(f)

Software

All Inclusive Base Software

Management Software:
• Unisphere: Element Manager
• Unisphere Central: Consolidated dashboard and alerting
• CloudIQ: Cloud-based storage analytics
• Thin Provisioning
• Dynamic Pools supported on all Unity XT platforms
• Inline Data Reduction: Zero Detect/Deduplication/Compression supported on all Unity
XT platforms
• Host Groups
• Proactive Assist: Configure remote support, online chat, open a service request, etc.
• Quality of Service (Block and VVols)
• Dell Storage Analytics Adapter for VMware® vRealize™
• File & Block Tiering / Archiving to Public/Private Cloud (Cloud Tiering Appliance)
• File-Level Retention (FLR-E & FLR-C)
Unified Protocols:
• File
• Block
• VVols
Local Protection:
• Controller Based Encryption (optional), with self-managed or external key management
• Local Point-In-Time Copies (Snapshots and Thin Clones)
• AppSync Basic
• Dell Common Event Enabler; AntiVirus Agent, Event Publishing Agent
Remote Protection:
• Native Asynchronous Block & File Replication
• Native Synchronous Block & File Replication
• MetroSync Manager (optional software to automate synchronous file replication and
failover sessions)
• Snapshot Shipping
• Dell RecoverPoint Basic
Migration:
• Native Block & File migration from legacy Dell VNX
• SAN Copy Pull: Integrated Block migration from 3rd party arrays
Performance Optimization for Hybrid Arrays:
• FAST Cache
• FAST VP
Interface Protocols: NFSv3, NFSv4, NFSv4.1; CIFS (SMB 1), SMB 2, SMB 3.0, SMB 3.02, and SMB 3.1.1; FTP and SFTP; FC, iSCSI and VMware Virtual Volumes (VVols) 2.0
Optional Solutions:
• AppSync Advanced
• Connectrix SAN
• Dell Data Protection Hardware & Software platforms
• Dell RecoverPoint Advanced
• Dell RP4VM
• PowerPath Migration Enabler
• PowerPath Multipathing
• Unity XT metro node
• VPLEX
Note: For more details on software licensing, please contact your sales representative

Virtualization Solutions
Dell Unity offers support for a wide variety of protocols and advanced features available via various software suites and packs, including but not limited to:
• OpenStack Cinder Driver: For provisioning and managing block volumes within an OpenStack environment
• OpenStack Manila Driver: For managing shared file systems within an OpenStack environment
• Dell Virtual Storage Integrator (VSI) for VMware vSphere™: For provisioning, management, and cloning
• VMware Site Recovery Manager (SRM) Integration: Managing failover and failback making disaster recovery rapid and reliable
• Virtualization API Integration: VMware: VAAI and VASA. Hyper-V: Offloaded Data Transfer (ODX) and Offload Copy for File
• Ansible Module for Unity
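The OpenStack Cinder driver mentioned above is typically wired up through a backend stanza in cinder.conf. An illustrative fragment; the values are placeholders, and option names should be verified against the Unity driver documentation for your OpenStack release:

```ini
# cinder.conf backend stanza for a Unity XT array (illustrative values only)
[unity-backend]
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
volume_backend_name = unity_backend
san_ip = 192.0.2.10
san_login = admin
san_password = secret
storage_protocol = iSCSI
```

The Manila driver for file shares follows the same pattern in manila.conf, with its own driver class and share-protocol options.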

Electrical Specifications
All power figures shown represent a worst-case product configuration with maximum normal values, operating in an ambient temperature environment of 20°C to 25°C.
The chassis power numbers provided may increase when operating in a higher ambient temperature environment.

Disk Processor Enclosure (DPE)

380/380F DPE (25 2.5” SFF drives and four IO modules)
POWER
  DC Line Voltage: -39 to -72 V DC (nominal -48 V or -60 V power systems)
  DC Line Current (operating maximum): 25.7 A max at -39 V DC; 20.5 A max at -48 V DC; 13.9 A max at -72 V DC
  Power Consumption (operating maximum): 1001.4 W max at -39 V DC; 982.2 W max at -48 V DC; 999.6 W max at -72 V DC
  Heat Dissipation (operating maximum): 3.61 × 10⁶ J/hr (3,150 Btu/hr) max at -39 V DC; 3.54 × 10⁶ J/hr (3,088 Btu/hr) max at -48 V DC; 3.60 × 10⁶ J/hr (3,142 Btu/hr) max at -72 V DC

480/480F DPE (25 2.5” SFF drives and four IO modules)
POWER
  DC Line Voltage: -39 to -72 V DC (nominal -48 V or -60 V power systems)
  DC Line Current (operating maximum): 27.6 A max at -39 V DC; 22.1 A max at -48 V DC; 14.9 A max at -72 V DC
  Power Consumption (operating maximum): 1078 W max at -39 V DC; 1059 W max at -48 V DC; 1075 W max at -72 V DC
  Heat Dissipation (operating maximum): 3.88 × 10⁶ J/hr (3,678 Btu/hr) max at -39 V DC; 3.81 × 10⁶ J/hr (3,613 Btu/hr) max at -48 V DC; 3.87 × 10⁶ J/hr (3,668 Btu/hr) max at -72 V DC

Both models:
  In-rush Current: 40 A peak, per requirement in EN300 132-2 Sect. 4.7 limit curve
  DC Protection: 50 A fuse in each power supply
  DC Inlet Type: Positronics PLBH3W3M4B0A1/AA
  Mating DC Connector: Positronics PLBH3W3F0000/AA; Positronics Inc., www.connectpositronics.com
  Ride-through Time: 1 ms min at -50 V input
  Current Sharing: ±5 percent of full load, between power supplies

DIMENSIONS
  Weight (empty), kg/lbs: 380/380F: 24.60/54.11; 480/480F: 25.90/57.10
  Vertical size: 2 NEMA units (both models)
  Height, cm/inches: 380/380F: 8.88/3.5; 480/480F: 8.72/3.43
  Width, cm/inches: 380/380F: 44.76/17.62; 480/480F: 44.72/17.61
  Depth, cm/inches: 380/380F: 61.39/24.17; 480/480F: 79.55/31.32

Note: Power consumption values for DPEs and DAEs are based on fully populated enclosures (power supplies, drives and I/O modules).

Disk Array Enclosure (DAE)
25 X 2.5” Drive DAE

POWER
  DC Line Voltage: -39 to -72 V DC (nominal -48 V or -60 V power systems)
  DC Line Current (operating maximum): 11.0 A max at -39 V DC; 9.10 A max at -48 V DC; 6.2 A max at -72 V DC
  Power Consumption (operating maximum): 428 W max at -39 V DC; 437 W max at -48 V DC; 448 W max at -72 V DC
  Heat Dissipation (operating maximum): 1.54 × 10⁶ J/hr (1,460 Btu/hr) max at -39 V DC; 1.57 × 10⁶ J/hr (1,491 Btu/hr) max at -48 V DC; 1.61 × 10⁶ J/hr (1,529 Btu/hr) max at -72 V DC
  In-rush Current: 40 A peak, per requirement in EN300 132-2 Sect. 4.7 limit curve
  DC Protection: 50 A fuse in each power supply
  DC Inlet Type: Positronics PLBH3W3M4B0A1/AA
  Mating DC Connector: Positronics PLBH3W3F0000/AA; Positronics Inc., www.connectpositronics.com
  Ride-through Time: 1 ms min at -50 V input
  Current Sharing: ±5 percent of full load, between power supplies

WEIGHT AND DIMENSIONS
  Weight, kg/lbs: Empty: 10.0/22.1; Full: 20.23/44.61
  Vertical size: 2 NEMA units
  Height, cm/inches: 8.46/3.40
  Width, cm/inches: 44.45/17.5
  Depth, cm/inches: 33.02/13
Note: Power consumption values for DPEs and DAEs are based on fully populated enclosures (power supplies, drives and I/O modules).
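The heat-dissipation rows are unit conversions of the power-consumption rows. A quick sketch using the -39 V DC DAE figures, where the conversions line up exactly:

```python
# Heat dissipation is the power draw expressed in other units:
# 1 W = 3600 J/hr, and 1 W ≈ 3.412 Btu/hr.
def joules_per_hour(watts: float) -> float:
    return watts * 3600

def btu_per_hour(watts: float) -> float:
    return watts * 3.412

# DAE operating maximum at -39 V DC: 428 W
print(joules_per_hour(428))      # 1,540,800 J/hr ≈ 1.54 x 10^6 J/hr
print(round(btu_per_hour(428)))  # ≈ 1460 Btu/hr, matching the table
```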

Operating environment
The Dell Unity XT 480/480F models meet ASHRAE Equipment Class A3 and the 380/380F models meet ASHRAE
Equipment Class A4.

Recommended Range Operation
  Description: The limits under which equipment will operate the most reliably while still achieving reasonably energy-efficient data center operation.
  Specification: 18°C to 27°C (64.4°F to 80.6°F) at 5.5°C (59°F) dew point.

Continuous Allowable Range Operation
  Description: Data center economization techniques (e.g. free cooling) may be employed to improve overall data center efficiency. These techniques may cause equipment inlet conditions to fall outside the recommended range but still within the continuously allowable range. Equipment may be operated without any hourly limitations in this range.
  Specification: 5°C to 35°C (50°F to 95°F) at 20% to 80% relative humidity with 21°C (69.8°F) maximum dew point (maximum wet bulb temperature). De-rate maximum allowable dry bulb temperature at 1°C per 300 m above 950 m (1°F per 547 ft above 3117 ft).

Improbable Operation (Excursion Limited)
  Description: During certain times of the day or year, equipment inlet conditions may fall outside the continuously allowable range but still within the expanded improbable range. Equipment operation is limited to ≤ 10% of annual operating hours in this range.
  Specification: 35°C to 40°C (with no direct sunlight on the equipment) at -12°C dew point and 8% to 85% relative humidity with 24°C dew point (maximum wet bulb temperature). Outside the continuously allowable range (10°C to 35°C), the system can operate down to 5°C or up to 40°C for a maximum of 10% of its annual operating hours. For temperatures between 35°C and 40°C (95°F to 104°F), de-rate maximum allowable dry bulb temperature by 1°C per 175 m above 950 m (1°F per 319 ft above 3117 ft).

Exceptional Operation (Excursion Limited), ASHRAE A4 only
  Description: During certain times of the day or year, equipment inlet conditions may fall outside the continuously allowable range but still within the expanded exceptional range. Equipment operation is limited to ≤ 1% of annual operating hours in this range.
  Specification: 40°C to 45°C (with no direct sunlight on the equipment) at -12°C dew point and 8% to 90% relative humidity with 24°C dew point (maximum wet bulb temperature). Outside the continuously allowable range (10°C to 35°C), the system can operate down to 5°C or up to 45°C for a maximum of 1% of its annual operating hours. For temperatures between 35°C and 45°C (95°F to 113°F), de-rate maximum allowable dry bulb temperature by 1°C per 125 m above 950 m (1°F per 228 ft above 3117 ft).

Temperature Gradient: 20°C/hour (36°F/hour)

Altitude, Maximum Operating: 3050 m (10,000 ft)
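The altitude de-rating rule for the continuously allowable range can be written as a small function. A sketch assuming the linear de-rating described above; the function and parameter names are my own:

```python
# Altitude de-rating for the continuously allowable range: above 950 m,
# reduce the 35°C maximum dry-bulb limit by 1°C per 300 m of elevation.
def max_dry_bulb_c(altitude_m: float, base_limit_c: float = 35.0,
                   meters_per_degree: float = 300.0,
                   threshold_m: float = 950.0) -> float:
    """Maximum allowable dry-bulb temperature (°C) at a given altitude."""
    excess = max(0.0, altitude_m - threshold_m)
    return base_limit_c - excess / meters_per_degree

print(max_dry_bulb_c(950))   # 35.0°C: no de-rating at or below 950 m
print(max_dry_bulb_c(3050))  # 28.0°C at the 3050 m operating ceiling
```

The excursion ranges use the same shape with steeper slopes (1°C per 175 m and 1°C per 125 m respectively).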

Statement of Compliance
Dell Information Technology Equipment is compliant with all currently applicable regulatory requirements for
Electromagnetic Compatibility, Product Safety, and Environmental Regulations where placed on market.
Detailed regulatory information and verification of compliance is available at the Dell Regulatory Compliance
website. http://dell.com/regulatory_compliance


Learn more about Dell Unity XT solutions | Contact a Dell Expert

Copyright © 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. V2
ZINARA-UNITYXT480 (Unity XT 480)

Index

System Cabinet Layout: page 1
General Information: page 1
System Tier Information: page 1
Pool Configuration: page 2
Power Information: page 3
I/O Configuration: page 3
Software Suites: page 3
Additional Information: page 3
Cabinet 1

General Information

Model Number: Unity XT 480
Usable Capacity: 247.84 TB
Raw Capacity: 307.05 TB
Total Drive Count: 85
Unit Count: 9U
Workload: 80% Read / 20% Write
IOPS: 75487
Effective Capacity: 743.52 TB
Block Size: 8K
Configuration Type: New Solution

System Tier Information

Flash: 400GB FLASH 2 (count 4), 3.2TB FLASH 3 (count 33); total 37; usable capacity 80.48 TB
SAS: 1.8TB 10K (count 28); total 28; usable capacity 39.87 TB
NL-SAS: 12TB 7.2K (count 15); total 15; usable capacity 127.49 TB

Page 1
Fast Cache

Drive Type: 400GB FLASH
Count: 4
Hot Spare Count: 1
Form Factor: 2.5
Total: 5

Pool Configuration

Pool1 (Dynamic Pool)

Tier    Form Factor  Drive Type     RAID Type      Spare Policy  Drive Count  Usable Capacity  Effective Capacity  IOPS
FLASH   2.5          3.2TB FLASH 3  RAID 5 (12+1)  1/32          33           80.48 TB         241.44 TB           72187
SAS     2.5          1.8TB 10K      RAID 5 (12+1)  1/32          28           39.87 TB         119.61 TB           2625
NL-SAS  3.5          12TB 7.2K      RAID 6 (12+2)  1/32          15           127.49 TB        382.47 TB           675
Total                                                            76           247.84 TB        743.52 TB           75487

Advanced Deduplication Ratio: 3:1
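The sizer's effective capacity follows from the usable capacity and the assumed 3:1 data-reduction ratio. A sketch reproducing the totals in the pool table above (helper name is illustrative):

```python
# Effective capacity = usable capacity x assumed data-reduction ratio.
def effective_tb(usable_tb: float, reduction_ratio: float = 3.0) -> float:
    return usable_tb * reduction_ratio

# Per-tier usable capacities from the pool table above:
tiers = {"FLASH": 80.48, "SAS": 39.87, "NL-SAS": 127.49}
total_usable = sum(tiers.values())
print(round(total_usable, 2))                # 247.84 TB
print(round(effective_tb(total_usable), 2))  # 743.52 TB
```

Note the ratio is a sizing assumption; achieved data reduction depends on the actual dataset.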

Page 2
Power Information

Power Summary
  Power Consumption: 1.538 kVA
  Region: United States
  Input Voltage: V~208
  Phases: Single Phase
  Site Circuit Breaker: 30
  Peak Inrush Current: 115 A

Environmental
  Operating Temp: Below 26°C
  Heat Dissipation: 4495 Btu/hr
  Sound Pressure: 58 dB
  Sound Power: 7.4 bels
  Weight: 616 lbs
  Dimensions: 24 in Width x 41.33 in Depth x 75 in Height
  Clearance: 42 in Front x 36 in Rear x 18 in Top

Energy Cost
  Annual Energy Cost:
  GHG Emissions: 12.964 tonnes/yr
  Currency:
  PUE: 2
  Emission Factor: 1238.39 lb/MWh
  Local Utility Rate:
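The Energy Cost block lists the inputs (PUE, emission factor) of a standard annual-emissions estimate. A first-order sketch of the relationship; this is not the sizer's exact model, which also folds in power-factor and utilization assumptions:

```python
# First-order annual GHG estimate from facility load, PUE, and grid
# emission factor. Illustrative helper, not the Midrange Sizer's formula.
HOURS_PER_YEAR = 8760
LB_PER_TONNE = 2204.62

def annual_emissions_tonnes(load_kw: float, pue: float,
                            factor_lb_per_mwh: float) -> float:
    """Estimated annual emissions (tonnes CO2e) for a given IT load."""
    mwh = load_kw * pue * HOURS_PER_YEAR / 1000
    return mwh * factor_lb_per_mwh / LB_PER_TONNE

# e.g. a 1 kW IT load at PUE 2 and 1238.39 lb/MWh:
print(round(annual_emissions_tonnes(1.0, 2.0, 1238.39), 2))  # ≈ 9.84 tonnes/yr
```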

I/O Configuration

Mezz Card: None
Additional I/O - Slot 1: 4 x 32Gb FC
Additional I/O - Slot 2: 4 x 32Gb FC

Software Suites

Unity Hybrid Base Software:
• Unisphere Suite
• File, Block, VVols
• Snapshots
• Encryption
• Anti-virus
• Native Replication
• FAST Cache & FAST VP

Additional Information

System Drive Tier: Extreme Performance
System Drives Used For Data: No
Raw Percentage of Flash: 32%
Operating System Version: Unity 5.2
Date Created: 18 Dec 2024
Midrange Sizer Version: 8.7.3
Reserved NVMe Slots: No

Page 3
Legal Disclaimer
Dell EMC provides Midrange Sizer as a tool to help estimate, size and analyze storage capacity and IOPS requirements for Unity, SC Series and PowerVault arrays. Midrange Sizer is provided solely for estimating storage sizing. Results are based on ordering requirements and on best practices for product capabilities and limitations. Midrange Sizer uses a set of sizing assumptions along with the provided inputs, which may cause results to vary greatly due to many specific factors such as the environment setup/infrastructure, use case and workload.

Midrange Sizer’s output is not a guarantee that a storage array will meet all actual data storage sizing requirements or any
expected savings.

Please carefully read all of the terms of this disclaimer. By viewing and utilizing Midrange Sizer, you acknowledge that you have
read, understand and agree to this disclaimer.

* The TB/GB labels represent capacity in base-2, i.e. 1 TB = 1024*1024*1024*1024 bytes.


* True effective capacity and usable capacity are reported, calculated after all spare space is accounted for.

Page 4
