Cisco UCS Manager GUI Infrastructure Management Guide, Release 4.2
First Published: 2021-06-24
Last Modified: 2023-01-09
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2021–2023 Cisco Systems, Inc. All rights reserved.
CONTENTS
PREFACE: Preface
Audience
Conventions
Related Cisco UCS Documentation
Documentation Feedback
CHAPTER 2: Overview
Renumbering a Chassis
Turning on the Locator LED for a Chassis
Turning off the Locator LED for a Chassis
Creating a Zoning Policy from Inventory
Viewing the POST Results for a Chassis
Acknowledging an IO Module
Resetting an I/O Module
Resetting an I/O Module from a Peer I/O Module
Viewing Health Events for an I/O Module
Viewing the POST Results for an I/O Module
Determining the Boot Order of a Cisco UCS S3260 Server Node
Shutting Down a Cisco UCS S3260 Server Node
Shutting Down a Cisco UCS S3260 Server Node from the Service Profile
Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Server administration
• Storage administration
• Network administration
• Network security
Conventions
Text Type: Indication
GUI elements: GUI elements such as tab titles, area names, and field labels appear in this font. Main titles
such as window, dialog box, and wizard titles appear in this font.
TUI elements: In a Text-based User Interface, text the system displays appears in this font.
System output: Terminal sessions and information that the system displays appear in this font.
string: A nonquoted set of characters. Do not use quotation marks around the string or the string will include
the quotation marks.
!, #: An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.
Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the
document.
Tip Means the following information will help you solve a problem. The tips information might not be
troubleshooting or even an action, but could be useful information, similar to a Timesaver.
Timesaver Means the described action saves time. You can save time by performing the action described in the
paragraph.
Caution Means reader be careful. In this situation, you might perform an action that could result in equipment
damage or loss of data.
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to ucs-docfeedback@external.cisco.com. We appreciate your feedback.
Table 1: New Features and Changed Behavior in Cisco UCS Manager, Release 4.2(3b)

Table 2: New Features and Changed Behavior in Cisco UCS Manager, Release 4.2(2a)

Feature: Support for BIOS password reset
Description: Cisco UCS Manager now supports resetting the BIOS password in Recover Server actions.
Where Documented: Resetting the BIOS Password for a Blade Server, on page 102; Resetting the BIOS
Password for a Rack-Mount Server, on page 123; Resetting the BIOS Password for a S3X60 Server, on page 140

Table 3: New Features and Changed Behavior in Cisco UCS Manager, Release 4.2(1l)

Feature: Cisco UCS M6 Servers
Description: Cisco UCS Manager now supports Cisco UCS C225 M6 Server.
Where Documented: Rack Server Power Management, on page 88; Power Capping in Cisco UCS, on page 62

Table 4: New Features and Changed Behavior in Cisco UCS Manager, Release 4.2(1i)

Feature: Cisco UCS M6 Servers
Description: Cisco UCS Manager now supports Cisco UCS C245 M6 Server.
Where Documented: Rack Server Power Management, on page 88; Power Capping in Cisco UCS, on page 62

Table 5: New Features and Changed Behavior in Cisco UCS Manager, Release 4.2(1d)

Feature: Cisco UCS M6 Servers
Description: Cisco UCS Manager now supports Cisco UCS C220 M6 Server and Cisco UCS C240 M6 Server.
Where Documented: Rack Server Power Management, on page 88; Power Capping in Cisco UCS, on page 62

Feature: Rack Server Discovery policy
Description: The Rack Server Discovery Policy applies to a new rack-mount server and an already added or
discovered rack-mount server that is decommissioned or recommissioned.
Where Documented: Rack Server Discovery Policy, on page 39
Guide: Description
Cisco UCS Manager Getting Started Guide: Discusses Cisco UCS architecture and Day 0 operations, including
Cisco UCS Manager initial configuration and configuration best practices.
Cisco UCS Manager Infrastructure Management Guide: Discusses physical and virtual infrastructure
components used and managed by Cisco UCS Manager.
Cisco UCS Manager Firmware Management Guide: Discusses downloading and managing firmware, upgrading
through Auto Install, upgrading through service profiles, directly upgrading at endpoints using firmware auto
sync, managing the capability catalog, deployment scenarios, and troubleshooting.
Cisco UCS Manager Server Management Guide: Discusses the new licenses, registering Cisco UCS domain
with Cisco UCS Central, power capping, server boot, server profiles, and server-related policies.
Cisco UCS Manager Storage Management Guide: Discusses all aspects of storage management, such as SAN
and VSAN in Cisco UCS Manager.
Cisco UCS Manager Network Management Guide: Discusses all aspects of network management, such as
LAN and VLAN connectivity in Cisco UCS Manager.
Cisco UCS Manager System Monitoring Guide: Discusses all aspects of system and health monitoring,
including system statistics in Cisco UCS Manager.
Cisco UCS S3260 Server Integration with Cisco UCS Manager: Discusses all aspects of management of UCS
S-Series servers that are managed through Cisco UCS Manager.
Topic: Description
I/O Module Management: Overview of I/O Modules and procedures to manage them.
Power Management in Cisco UCS: Overview of UCS Power Management policies, Global Power policies,
and Power Capping.
Blade Server Management: Overview of Blade Servers and procedures to manage them.
S3X60 Server Node Management: Overview of S3X60 Server Nodes and procedures to manage them.
Architectural Simplification
The simplified architecture of Cisco UCS reduces the number of required devices and centralizes switching
resources. By eliminating switching inside a chassis, network access-layer fragmentation is significantly
reduced. Cisco UCS implements Cisco unified fabric within racks and groups of racks, supporting Ethernet
and Fibre Channel protocols over 10 Gigabit Cisco Data Center Ethernet and Fibre Channel over Ethernet
(FCoE) links. This radical simplification reduces the number of switches, cables, adapters, and management
points by up to two-thirds. All devices in a Cisco UCS domain operate under a single management domain,
which remains highly available through the use of redundant components.
High Availability
The management and data planes of Cisco UCS are designed for high availability, with redundant access-layer
fabric interconnects. In addition, Cisco UCS supports existing high availability and disaster recovery solutions
for the data center, such as data replication and application-level clustering technologies.
Scalability
A single Cisco UCS domain supports multiple chassis and their servers, all of which are administered through
one Cisco UCS Manager. For more detailed information about scalability, speak to your Cisco representative.
Flexibility
A Cisco UCS domain allows you to quickly align computing resources in the data center with rapidly changing
business requirements. This built-in flexibility is determined by whether you choose to fully implement the
stateless computing feature. Pools of servers and other system resources can be applied as necessary to respond
to workload fluctuations, support new applications, scale existing software and business services, and
accommodate both scheduled and unscheduled downtime. Server identity can be abstracted into a mobile
service profile that can be moved from server to server with minimal downtime and no need for additional
network configuration.
With this level of flexibility, you can quickly and easily scale server capacity without having to change the
server identity or reconfigure the server, LAN, or SAN. During a maintenance window, you can quickly do
the following:
• Deploy new servers to meet unexpected workload demand and rebalance resources and traffic.
• Shut down an application, such as a database management system, on one server and then boot it up
again on another server with increased I/O capacity and memory resources.
As shown in the figure above, the primary components included within Cisco UCS are as follows:
• Cisco UCS Manager—Cisco UCS Manager is the centralized management interface for Cisco UCS.
For more information on Cisco UCS Manager, see Introduction to Cisco UCS Manager in the Cisco UCS
Manager Getting Started Guide.
• Cisco UCS Fabric Interconnects—The Cisco UCS Fabric Interconnect is the core component of Cisco
UCS deployments, providing both network connectivity and management capabilities for the Cisco UCS
system. The Cisco UCS Fabric Interconnects run the Cisco UCS Manager control software and consist
of the following components:
• Cisco UCS 6536 Fabric Interconnect, Cisco UCS 6400 Series Fabric Interconnects, Cisco UCS
6332 Series Fabric Interconnects, Cisco UCS 6200 Series Fabric Interconnects, and Cisco UCS
Mini
• Transceivers for network and storage connectivity
• Expansion modules for the various Fabric Interconnects
For more information on Cisco UCS Fabric Interconnects, see Cisco UCS Fabric Infrastructure Portfolio,
on page 8.
• Cisco UCS I/O Modules and Cisco UCS Fabric Extender—IO modules are also known as Cisco FEX
or simply FEX modules. These modules serve as line cards to the FIs in the same way that Cisco Nexus
Series switches can have remote line cards. IO modules also provide interface connections to blade
servers. They multiplex data from blade servers and provide this data to FIs and do the same in the reverse
direction. In production environments, IO modules are always used in pairs to provide redundancy and
failover.
Important The 40G backplane setting is not applicable for 22xx IOMs.
• Cisco UCS Blade Server Chassis—The Cisco UCS 5100 Series Blade Server Chassis is a crucial
building block of Cisco UCS, delivering a scalable and flexible architecture for current and future data
center needs, while helping reduce total cost of ownership.
• Cisco UCS Blade and Rack Servers—Cisco UCS Blade servers are at the heart of the UCS solution.
They come in various system resource configurations in terms of CPU, memory, and hard disk capacity.
The Cisco UCS rack-mount servers are standalone servers that can be installed and controlled individually.
Cisco provides Fabric Extenders (FEXs) for the rack-mount servers. FEXs can be used to connect and
manage rack-mount servers from FIs. Rack-mount servers can also be directly attached to the fabric
interconnect.
Small and Medium Businesses (SMBs) can choose from different blade configurations as per business
needs.
• Cisco UCS I/O Adapters—Cisco UCS B-Series Blade Servers are designed to support up to two network
adapters. This design can reduce the number of adapters, cables, and access-layer switches by as much
as half because it eliminates the need for multiple parallel infrastructure for both LAN and SAN at the
server, chassis, and rack levels.
Note Cisco UCS Manager Release 4.2(3b) introduces Cisco UCS 6536 Fabric
Interconnect to the Cisco UCS 6500 Series Fabric Interconnects.
Note Cisco UCS Manager Release 4.1 introduces the Cisco UCS 64108 Fabric
Interconnect to the Cisco UCS 6400 Series Fabric Interconnects.
Note The Cisco UCS 6100 Series Fabric Interconnects and Cisco UCS 2104 I/O Modules have reached end
of life.
Expansion Modules
The Cisco UCS 6200 Series supports expansion modules that can be used to increase the number of 10G,
FCoE, and Fibre Channel ports.
• The Cisco UCS 6248 UP has 32 ports on the base system. It can be upgraded with one expansion module
providing an additional 16 ports.
• The Cisco UCS 6296 UP has 48 ports on the base system. It can be upgraded with three expansion
modules providing an additional 48 ports.
The Cisco UCS 6536 Fabric Interconnect provides 32 40/100-Gbps Ethernet ports and four unified ports that
can support 40/100-Gbps Ethernet ports or 16 Fibre Channel (FC) ports after breakout at 8/16/32-Gbps FC
speeds. The 16 FC ports after breakout can operate as an FC uplink or FC storage port. The switch also supports
two ports (port 9 and port 10) at 1-Gbps speed using QSA, and all 36 ports can break out for 10 or 25 Gbps
Ethernet connectivity. All Ethernet ports can support FCoE.
Port breakout is supported for Ethernet ports (1-32) and Unified ports (33-36). These 40/100G ports are
numbered in a 2-tuple naming convention. The process of changing the configuration from 40G to 10G, or
from 100G to 25G is called breakout, and the process of changing the configuration from [4X]10G to 40G or
from [4X]25G to 100G is called unconfigure.
When you break out a 40G port into 10G ports or a 100G port into 25G ports, the resulting ports are numbered
using a 3-tuple naming convention. For example, the breakout ports of the 31st 40-Gigabit Ethernet port are
numbered as 1/31/1, 1/31/2, 1/31/3, and 1/31/4.
FC breakout is supported on ports 36 through 33 when each port is configured with a four-port breakout cable.
For example: Four FC breakout ports on the physical port 33 are numbered as 1/33/1, 1/33/2, 1/33/3, and
1/33/4.
Note Fibre Channel support is only available through the configuration of the Unified Ports (36-33) as Fibre
Channel breakout port.
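The 2-tuple and 3-tuple numbering described above is easy to model in code. The following Python sketch is illustrative only and is not part of Cisco UCS Manager; the port ranges follow the Cisco UCS 6536 description in this section:

```python
def breakout_port_names(slot: int, port: int, lanes: int = 4) -> list[str]:
    """Expand a 2-tuple port name (slot/port) into its 3-tuple breakout names.

    Breaking out a 40G port into 10G lanes (or a 100G port into 25G lanes)
    yields slot/port/lane names, for example 1/31 -> 1/31/1 .. 1/31/4.
    """
    return [f"{slot}/{port}/{lane}" for lane in range(1, lanes + 1)]

# Ethernet breakout applies to ports 1-32; FC breakout applies to the
# unified ports 33-36 (per the Cisco UCS 6536 description above).
print(breakout_port_names(1, 31))  # ['1/31/1', '1/31/2', '1/31/3', '1/31/4']
print(breakout_port_names(1, 33))  # FC breakout names on physical port 33
```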
The following image shows the rear view of the Cisco UCS 6536 fabric interconnect:
Figure 3: Cisco UCS 6536 Fabric Interconnect Rear View
The following image shows the rear view of the Cisco UCS 6536 fabric interconnect, including the ports and
LEDs:
Figure 4: Cisco UCS 6536 Fabric Interconnect Rear View
• Fibre Channel breakout ports are supported; Fibre Channel direct ports are not supported.
• FC breakout port can be configured from 1/36 through 1/33. FC breakout ports (36-33) cannot be
configured unless the previous ports are FC breakout ports. Configuration of a single (individual) FC
breakout port is also supported.
• If the breakout mode for any of the supported Fabric Interconnect ports (1-36) is an Ethernet breakout,
configuring it does not cause the Fabric Interconnect to reboot.
• If the breakout mode for any of the supported Fabric Interconnect ports (36-33) is a Fibre Channel uplink
breakout, configuring it causes the Fabric Interconnect to reboot.
The Cisco UCS 64108 Fabric Interconnect also has one network management port, one RS-232 serial console
port for setting the initial configuration, and one USB port for saving or loading configurations. The FI also
includes L1/L2 ports for connecting two fabric interconnects in a high-availability configuration.
The Cisco UCS 64108 Fabric Interconnect also contains a CPU board.
The Cisco UCS 64108 Fabric Interconnect has two power supplies (redundant as 1+1) and three fans (redundant
as 2+1).
Note The Cisco UCS 6454 Fabric Interconnect supported 8 unified ports (ports 1 - 8) with Cisco UCS Manager
4.0(1) and 4.0(2), but with release 4.0(4) and later it supports 16 unified ports (ports 1 - 16).
The Cisco UCS 6454 Fabric Interconnect also has one network management port, one console port for setting
the initial configuration, and one USB port for saving or loading configurations. The FI also includes L1/L2
ports for connecting two fabric interconnects for high availability.
The Cisco UCS 6454 Fabric Interconnect also contains a CPU board that consists of:
• Intel Xeon D-1528 v4 Processor, 1.6 GHz
• 64 GB of RAM
• 8 MB of NVRAM (4 x NVRAM chips)
• 128 GB SSD (bootflash)
1: Ports 1-16 (Unified Ports: 10/25 Gbps Ethernet or FCoE, or 8/16/32 Gbps Fibre Channel)
Note: When using Cisco UCS Manager releases earlier than 4.0(4), only ports 1-8 are Unified Ports.
2: Ports 17-44 (10/25 Gbps Ethernet or FCoE)
Note: When using Cisco UCS Manager releases earlier than 4.0(4), ports 9-44 are 10/25 Gbps Ethernet or FCoE.
3: Ports 45-48 (1/10/25 Gbps Ethernet or FCoE)
4: Uplink Ports 49-54 (40/100 Gbps Ethernet or FCoE)
Each of the uplink ports can be 4 x 10/25 Gbps Ethernet or FCoE uplink ports when using an appropriate
breakout cable.
The Cisco UCS 6454 Fabric Interconnect chassis has two power supplies and four fans. Two of the fans
provide front to rear airflow.
Figure 8: Cisco UCS 6454 Fabric Interconnect Front View
1: Power supply and power cord connector
2: Fans 1 through 4, numbered left to right, when facing the front of the chassis
Note • The Cisco UCS 6454 Fabric Interconnect supported 8 unified ports (ports 1 - 8) with Cisco UCS
Manager 4.0(1) and 4.0(2), but with Release 4.0(4) and later releases, it supports 16 unified ports
(ports 1 - 16).
When you configure a port on a Fabric Interconnect, the administrative state is automatically set
to enabled. If the port is connected to another device, this may cause traffic disruption. The port
can be disabled and enabled after it has been configured.
The following table summarizes the port support for second, third, fourth, and fifth generation of Cisco UCS
Fabric Interconnects.
Cisco UCS 6248 UP: form factor 1 RU; 32 Unified Ports (ports 1-32) at 1/10 Gbps Ethernet or 1/2/4/8-Gbps
FC; IOM compatibility UCS 2204, UCS 2208; 1 expansion slot (16 port); 2 fan modules.
Cisco UCS 6296 UP: form factor 2 RU; 48 Unified Ports (ports 1-48) at 1/10 Gbps Ethernet or 1/2/4/8-Gbps
FC; IOM compatibility UCS 2204, UCS 2208; 3 expansion slots (16 port); 4 fan modules.
Cisco UCS 6332: form factor 1 RU; no Unified Ports; IOM compatibility UCS 2204, UCS 2208, UCS 2304,
UCS 2304V2; no expansion slots; 4 fan modules.
Cisco UCS 6332-16UP: form factor 1 RU; 16 Unified Ports (ports 1-16) at 1/10 Gbps Ethernet or 4/8/16-Gbps
FC; IOM compatibility UCS 2204, UCS 2208, UCS 2304, UCS 2304V2; no expansion slots; 4 fan modules.
Cisco UCS 6454: form factor 1 RU; 16 Unified Ports (ports 1-16) at 10/25 Gbps Ethernet or 8/16/32-Gbps
FC; IOM compatibility UCS 2204, UCS 2208, UCS 2408; no expansion slots; 4 fan modules. This FI supported
8 unified ports (ports 1-8) with Cisco UCS Manager 4.0(1) and 4.0(2), but with Release 4.0(4) and later it
supports 16 unified ports (ports 1-16).
Cisco UCS 64108: form factor 2 RU; 16 Unified Ports (ports 1-16) at 10/25 Gbps Ethernet or 8/16/32-Gbps
FC; IOM compatibility UCS 2204, UCS 2208, UCS 2408; no expansion slots; 3 fan modules.
Cisco UCS 6536: form factor 1 RU; 4 Unified Ports (ports 33-36), which provide 16 breakout ports (4x4) at
8/16/32-Gbps FC after breakout; IOM compatibility UCS 2408, UCS 2304, UCS 2304V2; no expansion slots;
6 fan modules.
Note Cisco UCS Manager does not support connection of FEX, chassis, blade, IOM, or adapters (other than
VIC adapters) to the uplink ports of Fabric Interconnect.
Starting with Cisco UCS Manager Release 4.2(3b), configuring Ethernet breakout ports does not lead to a
Fabric Interconnect reboot.
The following image shows the rear view of the Cisco UCS 64108 fabric interconnect, and includes the ports
that support breakout port functionality:
Figure 9: Cisco UCS 64108 Fabric Interconnect Rear View
1: Ports 1-16. Unified Ports can operate as 10/25 Gbps Ethernet or FCoE, or as 8/16/32 Gbps Fibre Channel.
FC ports are converted in groups of four.
2: Ports 17-96. Each port can operate as either a 10 Gbps or 25 Gbps Ethernet or FCoE SFP28 port.
7: Beacon LED
Starting with Cisco UCS Manager Release 4.2(3b), configuring Ethernet breakout ports does not lead to a
Fabric Interconnect reboot.
Starting with Cisco UCS Manager Release 4.1(3a), you can connect Cisco UCS Rack servers with VIC 1455
and 1457 adapters, to the uplink ports 49 to 54 (40/100 Gbps Ethernet or FCoE) in Cisco UCS 6454 Fabric
Interconnects.
Note Cisco UCS Manager does not support connection of FEX, chassis, blade, IOM, or adapters (other than
VIC 1455 and 1457 adapters) to the uplink ports of Fabric Interconnect.
The following image shows the rear view of the Cisco UCS 6454 fabric interconnect, and includes the ports
that support breakout port functionality:
Figure 10: Cisco UCS 6454 Fabric Interconnect Rear View
1: Ports 1-16 (Unified Ports: 10/25 Gbps Ethernet or FCoE, or 8/16/32 Gbps Fibre Channel)
2: Ports 17-44 (10/25 Gbps Ethernet or FCoE)
3: Ports 45-48 (1/10/25 Gbps Ethernet or FCoE)
4: Uplink Ports 49-54 (40/100 Gbps Ethernet or FCoE)
Cisco UCS 6400 Series Fabric Interconnects do not support the following software features:
• Chassis Discovery Policy in Non-Port Channel Mode—Cisco UCS 6400 Series Fabric Interconnects
support only Port Channel mode.
• Chassis Connectivity Policy in Non-Port Channel Mode—Cisco UCS 6400 Series Fabric Interconnects
support only Port Channel mode.
• Multicast Hardware Hash—Cisco UCS 6400 Series Fabric Interconnects do not support multicast hardware
hash.
• Service Profiles with Dynamic vNICS—Cisco UCS 6400 Series Fabric Interconnects do not support
Dynamic vNIC Connection Policies.
• Multicast Optimize—Cisco UCS 6400 Series Fabric Interconnects do not support Multicast Optimize
for QoS.
• NetFlow—Cisco UCS 6400 Series Fabric Interconnects do not support NetFlow related configuration.
• Port profiles and DVS Related Configurations—Cisco UCS 6400 Series Fabric Interconnects do not
support configurations related to port profiles and distributed virtual switches (DVS).
Configuration of the following software features has changed for Cisco UCS 6400 Series Fabric Interconnects:
• Unified Ports—Cisco UCS 6400 Series Fabric Interconnects support up to 16 unified ports, which can
be configured as FC. These ports appear at the beginning of the module.
• VLAN Optimization—On Cisco UCS 6400 Series Fabric Interconnects, you can configure VLAN port
count optimization through port VLAN (VP) grouping when the PV count exceeds 16000. The following
table illustrates the PV Count with VLAN port count optimization enabled and disabled on Cisco UCS
6400 Series Fabric Interconnect, Cisco UCS 6300 Series Fabric Interconnects, and Cisco UCS 6200
Series Fabric Interconnects.
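As a rough illustration of how the PV (port VLAN) count behind VLAN port count optimization can be reasoned about, the sketch below counts one PV per port-VLAN membership and compares the total against the 16000 threshold mentioned above. This is a conceptual model with hypothetical interface names; the exact accounting that Cisco UCS Manager performs may differ:

```python
def pv_count(vlans_per_port: dict[str, int]) -> int:
    """Count port VLANs (PVs): each VLAN carried on a port is one PV."""
    return sum(vlans_per_port.values())

# Hypothetical interface names and per-port VLAN membership counts.
ports = {"eth1/1": 6000, "eth1/2": 6000, "po1": 5000}
total = pv_count(ports)
print(f"PV count = {total}; consider VP grouping: {total > 16000}")
```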
When a Cisco UCS 6400 Series Fabric Interconnect is in Ethernet switching mode:
• The Fabric Interconnect does not support VLAN Port Count Optimization Enabled
• The Fabric Interconnect supports 16000 PVs, similar to EHM mode, when set to VLAN Port Count
Optimization Disabled
• Limited Restriction on VLAN—Cisco UCS 6400 Series Fabric Interconnects reserve 128 additional
VLANs for system purposes.
1: Port lane switch button, port lane LEDs, and L1 and L2 ports
2: Ports 1-12 and ports 15-26 can operate as 40-Gbps QSFP+ ports, or as 4 x 10-Gbps SFP+ breakout ports.
Ports 1-4 support Quad to SFP or SFP+ (QSA) adapters to provide 1-Gbps/10-Gbps operation. Ports 13 and
14 can operate as 40-Gbps QSFP+ ports; they cannot operate as 4 x 10-Gbps SFP+ breakout ports.
1: Power supply and power cord connector
2: Fans 1 through 4, numbered left to right, when facing the front of the chassis
The Cisco UCS 6332-16UP Fabric Interconnect also has one network management port, one console port for
setting the initial configuration, and two USB ports for saving or loading configurations. The switch also
includes an L1 port and an L2 port for connecting two fabric interconnects to provide high availability. The
switch mounts in a standard 19-inch rack, such as the Cisco R Series rack.
Cooling fans pull air front-to-rear. That is, air intake is on the fan side and air exhaust is on the port side.
Figure 13: Cisco UCS 6332-16UP Fabric Interconnect Rear View
1: Port lane switch button, port lane LEDs, and L1 and L2 ports
2: Ports 1-16 are Unified Ports (UP) that operate either as 1- or 10-Gbps SFP+ fixed Ethernet ports, or as
4-, 8-, or 16-Gbps Fibre Channel ports
3: Ports 17-34 operate either as 40-Gbps QSFP+ ports, in breakout mode as 4 x 10-Gbps SFP+ breakout
ports, or with QSA for 10G
4: Ports 35-40 operate as 40-Gbps QSFP+ ports
1: Power supply and power cord connector
2: Fans 1 through 4, numbered left to right, when facing the front of the chassis
Note When you configure a port on a fabric interconnect, the administrative state is automatically set to
enabled. If the port is connected to another device, this may cause traffic disruption. You can disable
the port after it has been configured.
The following table summarizes the second and third generation ports for the Cisco UCS fabric interconnects.
Cisco UCS 6324: Fabric Interconnect with 4 unified ports and 1 scalability port; form factor 1 RU; 4 fan
modules.
Cisco UCS 6248 UP: 48-port Fabric Interconnect; form factor 1 RU; 2 fan modules.
Cisco UCS 6296 UP: 96-port Fabric Interconnect; form factor 2 RU; 5 fan modules.
Cisco UCS 6332: 32-port Fabric Interconnect; form factor 1 RU; 4 fan modules.
Cisco UCS 6332-16UP: 40-port Fabric Interconnect; form factor 1 RU; 4 fan modules.
Note Cisco UCS 6300 Series Fabric Interconnects support breakout capability for ports. For more information
on how the 40G ports can be converted into four 10G ports, see Port Breakout Functionality on Cisco
UCS 6300 Series Fabric Interconnects, on page 26.
Port Modes
The port mode determines whether a unified port on the fabric interconnect is configured to carry Ethernet
or Fibre Channel traffic. You configure the port mode in Cisco UCS Manager. However, the fabric interconnect
does not automatically discover the port mode.
Changing the port mode deletes the existing port configuration and replaces it with a new logical port. Any
objects associated with that port configuration, such as VLANs and VSANS, are also removed. There is no
restriction on the number of times you can change the port mode for a unified port.
Port Types
The port type defines the type of traffic carried over a unified port connection.
By default, unified ports changed to Ethernet port mode are set to the Ethernet uplink port type. Unified ports
changed to Fibre Channel port mode are set to the Fibre Channel uplink port type. You cannot unconfigure
Fibre Channel ports.
Changing the port type does not require a reboot.
Ethernet Port Mode
When you set the port mode to Ethernet, you can configure the following port types:
• Server ports
• Ethernet uplink ports
• Ethernet port channel members
• FCoE ports
• Appliance ports
• Appliance port channel members
• SPAN destination ports
• SPAN source ports
Note For SPAN source ports, configure one of the port types and then configure
the port as SPAN source.
The 40G ports on the Cisco UCS 6300 Series Fabric Interconnects are numbered in a 2-tuple naming
convention. For example, the second 40G port is numbered as 1/2. The process of changing the configuration
from 40G to 10G is called breakout and the process of changing the configuration from [4X]10G to 40G is
called unconfigure.
When you break out a 40G port into 10G ports, the resulting ports are numbered using a 3-tuple naming
convention. For example, the breakout ports of the second 40-Gigabit Ethernet port are numbered as 1/2/1,
1/2/2, 1/2/3, and 1/2/4.
The following image shows the front view for the Cisco UCS 6332 series fabric interconnects, and includes
the ports that may support breakout port functionality:
Figure 15: Cisco UCS 6332 Series Fabric Interconnects Front View
The following image shows the front view for the Cisco UCS 6332-16UP series fabric interconnects, and
includes the ports that may support breakout port functionality:
Figure 16: Cisco UCS 6332-16UP Series Fabric Interconnects Front View
The following image shows the rear view of the Cisco UCS 6300 series fabric interconnects.
Figure 17: Cisco UCS 6300 Series Fabric Interconnects Rear View
1 Power supply
2 Four fans
3 Power supply
4 Serial ports
The breakout-configurable ports and the ports without breakout functionality support vary by Cisco UCS
6300 Series Fabric Interconnect model.
Important Up to four breakout ports are allowed if QoS jumbo frames are used.
The Cisco UCS 5108 Blade Server Chassis is supported with all generations of fabric interconnects.
In the Cisco UCS Mini solution, the Cisco UCS 6324 fabric interconnect is collapsed into the IO Module
form factor, and is inserted into the IOM slot of the blade server chassis. The Cisco UCS 6324 fabric
interconnect has 24 10G ports available on it. Sixteen of these ports are server facing; two 10G ports are
dedicated to each of the eight half-width blade slots. The remaining eight ports are divided into groups of four
1/10G Enhanced Small Form-Factor Pluggable (SFP+) ports and one 40G Quad Small Form-factor Pluggable
(QSFP) port, which is called the 'scalability port'.
Cisco UCS Manager Release 3.1(1) introduces support for a second UCS 5108 chassis to an existing
single-chassis Cisco UCS 6324 fabric interconnect setup. This extended chassis enables you to configure an
additional 8 servers. Unlike the primary chassis, the extended chassis supports IOMs. Currently, it supports
UCS-IOM-2204XP and UCS-IOM-2208XP IOMs. The extended chassis can only be connected through the
scalability port on the FI-IOM.
Important Currently, Cisco UCS Manager supports only one extended chassis for UCS Mini.
Cable Virtualization
The physical cables that connect to physical switch ports provide the infrastructure for logical and virtual
cables. These virtual cables connect to virtual adapters on any given server in the system.
Adapter Virtualization
On the server, you have physical adapters, which provide physical infrastructure for virtual adapters. A virtual
network interface card (vNIC) or virtual host bus adapter (vHBA) logically connects a host to a virtual interface
on the fabric interconnect and allows the host to send and receive traffic through that interface. Each virtual
interface in the fabric interconnect corresponds to a vNIC.
An adapter that is installed on the server appears to the server as multiple adapters through standard PCIe
virtualization. When the server scans the PCIe bus, the virtual adapters that are provisioned appear to be
physically plugged into the PCIe bus.
Server Virtualization
Server virtualization provides the ability to deploy stateless servers. As part of the physical infrastructure,
you have physical servers. However, the configuration of a server is derived from the service profile to which
it is associated. All service profiles are centrally managed and stored in a database on the fabric interconnect.
A service profile defines all the settings of the server, for example, the number of adapters, virtual adapters,
the identity of these adapters, the firmware of the adapters, and the firmware of the server. It contains all the
settings of the server that you typically configure on a physical machine. Because the service profile is
abstracted from the physical infrastructure, you can apply it to any physical server and the physical server
will be configured according to the configuration defined in the service profile. Cisco UCS Manager Server
Management Guide provides detailed information about managing service profiles.
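Service profiles can also be inspected programmatically. The following is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk); the address and credentials are placeholders, and LsServer is the SDK's managed-object class that backs service profiles:

```python
from ucsmsdk.ucshandle import UcsHandle

# Placeholders: substitute your Cisco UCS Manager address and credentials.
handle = UcsHandle("192.0.2.10", "admin", "password")
handle.login()
try:
    # Query every service profile (lsServer managed object) in the domain.
    for sp in handle.query_classid("LsServer"):
        print(sp.dn, sp.assoc_state)
finally:
    handle.logout()
```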
Chassis Links
If you have a Cisco UCS domain with some chassis wired with one link, some with two links, some with
four links, and some with eight links, Cisco recommends configuring the chassis/FEX discovery policy for
the minimum number of links in the domain so that Cisco UCS Manager can discover all chassis.
Tip To establish the highest available chassis connectivity in a Cisco UCS domain where the Fabric Interconnect
is connected to different types of IO Modules supporting different maximum numbers of uplinks, select
the platform max value. Setting platform max ensures that Cisco UCS Manager discovers the chassis,
including the connections and servers, only when the maximum supported IOM uplinks are connected
per IO Module.
After the initial discovery of a chassis, if you change the chassis/FEX discovery policy, acknowledge the IO
Modules rather than the entire chassis to avoid disruption. The discovery policy changes can include increasing
the number of links between the Fabric Interconnect and the IO Module, or changes to the Link Grouping preference.
Make sure that you monitor for faults before and after the IO Module acknowledgement to ensure that the
connectivity is restored before proceeding to the other IO Module for the chassis.
Cisco UCS Manager cannot discover any chassis that is wired for fewer links than are configured in the
chassis/FEX discovery policy. For example, if the chassis/FEX discovery policy is configured for four links,
Cisco UCS Manager cannot discover any chassis that is wired for one link or two links. Re-acknowledgement
of the chassis resolves this issue.
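The chassis/FEX discovery policy can also be set from a script rather than the GUI. A sketch along the following lines uses the ucsmsdk SDK; the "org-root/chassis-discovery" DN and the property value shown are assumptions to verify against your environment before use:

```python
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.0.2.10", "admin", "password")  # placeholders
handle.login()
try:
    # Assumed DN of the global chassis/FEX discovery policy object.
    policy = handle.query_dn("org-root/chassis-discovery")
    if policy:
        policy.action = "2-link"  # minimum number of links wired in the domain
        handle.set_mo(policy)
        handle.commit()
finally:
    handle.logout()
```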
The following table provides an overview of how the chassis/FEX discovery policy works in a multi-chassis
Cisco UCS domain:
Link Grouping
For hardware configurations that support fabric port channels, link grouping determines whether all of the
links from the IOM to the fabric interconnect are grouped into a fabric port channel during chassis discovery.
If the link grouping preference is set to Port Channel, all of the links from the IOM to the fabric interconnect
are grouped in a fabric port channel. If set to None, links from the IOM are pinned to the fabric interconnect.
Important For Cisco UCS 6400 Series Fabric Interconnects and Cisco UCS 6500 Series Fabric Interconnects, the
link grouping preference is always set to Port Channel.
After a fabric port channel is created through Cisco UCS Manager, you can add or remove links by changing
the link group preference and re-acknowledging the chassis, or by enabling or disabling the chassis from the
port channel.
Note The link grouping preference only takes effect if both sides of the links between an IOM or FEX and
the fabric interconnect support fabric port channels. If one side of the links does not support fabric port
channels, this preference is ignored and the links are not grouped in a port channel.
Note Cisco UCS 6400 Series Fabric Interconnect and Cisco UCS 6500 Series Fabric Interconnects do not
support multicast hardware hashing.
Pinning
Pinning in Cisco UCS is only relevant to uplink ports. If you configure Link Grouping Preference as None
during chassis discovery, the IOM forwards traffic from a specific server to the fabric interconnect through
its uplink ports by using static route pinning.
The following table showcases how pinning is done between an IOM and the fabric interconnect based on
the number of active fabric links between the IOM and the fabric interconnect.
1-Link: All the HIF ports are pinned to the active link.
Only 1, 2, 4, and 8 links are supported; 3, 5, 6, and 7 links are not valid configurations.
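Static pinning can be pictured as a fixed mapping from host-facing (HIF) ports to the active fabric links. The toy model below distributes HIFs round-robin; the real IOM pinning tables are fixed in hardware, so this is conceptual only:

```python
def pin_hifs(num_hifs: int, active_links: int) -> dict[int, int]:
    """Pin each HIF port to one fabric link by simple round-robin.

    Only 1, 2, 4, and 8 active links are valid, matching the rule above.
    """
    if active_links not in (1, 2, 4, 8):
        raise ValueError("only 1, 2, 4, or 8 links are valid configurations")
    return {hif: (hif - 1) % active_links + 1 for hif in range(1, num_hifs + 1)}

print(pin_hifs(8, 2))  # HIF ports alternate between link 1 and link 2
```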
Port-Channeling
While pinning traffic from a specific server to an uplink port provides you with greater control over the unified
fabric and ensures optimal utilization of uplink port bandwidth, it could also mean excessive traffic over
certain circuits. This issue can be overcome by using port channeling. Port channeling groups all links between
the IOM and the fabric interconnect into one port channel. The port channel uses a load balancing algorithm
to decide the link over which to send traffic. This results in optimal traffic management.
Cisco UCS supports port-channeling only through the Link Aggregation Control Protocol (LACP). For
hardware configurations that support fabric port channels, link grouping determines whether all of the links
from the IOM to the fabric interconnect are grouped into a fabric port channel during chassis discovery. If
the Link Grouping Preference is set to Port Channel, all of the links from the IOM to the fabric interconnect
are grouped in a fabric port channel. If this parameter is set to None, links from the IOM to the fabric
interconnect are not grouped in a fabric port channel.
Once a fabric port channel is created, links can be added or removed by changing the link group preference
and reacknowledging the chassis, or by enabling or disabling the chassis from the port channel.
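The load-balancing behavior of a fabric port channel can be sketched with a flow hash: every frame of a given flow hashes to the same member link, while different flows spread across the links. The hash inputs used by the actual hardware are platform specific; this is only an illustration:

```python
import hashlib

def select_link(flow: tuple, num_links: int) -> int:
    """Pick a port-channel member link from a hash of the flow tuple."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return digest[0] % num_links + 1

flow = ("10.1.1.5", "10.2.2.9", 49152, 443)  # src IP, dst IP, src port, dst port
print(select_link(flow, 4))  # the same flow always maps to the same link
```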
c) In the Multicast Hardware Hash field, specify whether all the links from the IOMs or FEXes to the
fabric interconnects in a port channel can be used for multicast traffic.
Cisco UCS 6400 Series Fabric Interconnects and Cisco UCS 6500 Series Fabric Interconnects do not
support Multicast Hardware Hash.
What to do next
To customize fabric port channel connectivity for a specific chassis, configure the chassis connectivity policy.
Important The 40G backplane setting is not applicable for 22xx IOMs.
The chassis connectivity policy is created by Cisco UCS Manager only when the hardware configuration
supports fabric port channels.
Important For Cisco UCS 6400 Series Fabric Interconnects and Cisco UCS 6500 Series Fabric Interconnects, the
chassis connectivity policy is always Port Channel.
In a Cisco UCS Mini setup, the creation of a chassis connectivity policy is supported only on the extended
chassis.
Important The 40G backplane setting is not applicable for 22xx IOMs.
Changing the connectivity mode for a chassis might result in decreased VIF namespace.
Caution Changing the connectivity mode for a chassis results in chassis re-acknowledgement. Traffic might be
disrupted during this time.
Procedure
• Global—The chassis inherits this configuration from the chassis discovery policy. This is the default
value.
Cisco UCS Manager uses the settings in the rack server discovery policy to determine whether any data on
the hard disks are scrubbed and whether server discovery occurs immediately or needs to wait for explicit
user acknowledgement.
Cisco UCS Manager cannot discover any rack-mount server that has not been correctly cabled and connected
to the fabric interconnects. For information about how to integrate a supported Cisco UCS rack-mount server
with Cisco UCS Manager, see the appropriate rack-mount server integration guide.
Important Cisco UCS VIC 1400 and 15000 series 4-port adapters support 10G/25G speeds. Also, Cisco UCS VIC
15000 series adapters support 50G speed. When connecting to the Fabric Interconnects, use the same
speed cables on all the adapter ports that are connected to the same Fabric Interconnect. When mixed-speed
cables are used, rack server discovery fails and ports may go to a suspended state. Cisco UCS Manager
does not raise any faults.
Note The second server slot in the chassis can be utilized by an HDD expansion
tray module for an additional four 3.5” drives.
• 56 3.5” drive bays with an optional 4 x 3.5” HDD expansion tray module instead of the second server
• Up to 360 TB storage capacity by using 6 TB HDDs
• Serial Attached SCSI (SAS) expanders that can be configured to assign the 3.5” drives to individual
server modules
• The two servers in the chassis can be replaced by a single, dual-height server with an IO expander
The blade server chassis has flexible partitioning with removable dividers to handle two blade server form
factors:
• Half-width blade servers have access to power and two 10GBASE-KR connections, one to each fabric
extender slot.
• Full-width blade servers have access to power and two connections to each fabric extender.
Important Currently, Cisco UCS Manager supports only one extended chassis for UCS Mini.
Decommissioning a Chassis
Decommissioning is performed when a chassis is physically present and connected but you want to temporarily
remove it from the Cisco UCS Manager configuration. Because it is expected that a decommissioned chassis
will be eventually recommissioned, a portion of the chassis' information is retained by Cisco UCS Manager
for future use.
Removing a Chassis
Removing is performed when you physically remove a chassis from the system. Once the physical removal
of the chassis is completed, the configuration for that chassis can be removed in Cisco UCS Manager.
Note You cannot remove a chassis from Cisco UCS Manager if it is physically present and connected.
If you need to add a removed chassis back to the configuration, it must be reconnected and then rediscovered.
During rediscovery, Cisco UCS Manager assigns the chassis a new ID that may be different from the ID that
it held before.
Acknowledging a Chassis
Acknowledging the chassis ensures that Cisco UCS Manager is aware of the change in the number of links
and that traffic flows along all available links.
Note Chassis acknowledgement causes complete loss of network and storage connectivity to the chassis.
After you enable or disable a port on a fabric interconnect, wait for at least 1 minute before you re-acknowledge
the chassis. If you re-acknowledge the chassis too soon, the pinning of server traffic from the chassis might
not get updated with the changes to the port that you enabled or disabled.
Procedure
Decommissioning a Chassis
Procedure
Removing a Chassis
Before you begin
Physically remove the chassis before performing the following procedure.
Procedure
Note This procedure is not applicable for Cisco UCS S3260 Chassis.
Procedure
Note This procedure is not applicable for Cisco UCS S3260 Chassis.
Note You cannot renumber the chassis when you recommission multiple chassis at the same time. Cisco UCS
Manager assigns the same ID that the chassis had previously.
Procedure
Renumbering a Chassis
Note You cannot renumber a blade server through Cisco UCS Manager. The ID assigned to a blade server is
determined by its physical slot in the chassis. To renumber a blade server, you must physically move
the server to a different slot in the chassis.
Note This procedure is not applicable for Cisco UCS S3260 Chassis.
Procedure
If either of these chassis are listed in the Chassis node, decommission those chassis. You must wait until the
decommission FSM is complete and the chassis are not listed in the Chassis node before continuing. This
might take several minutes.
This action is not available if the locator LED is already turned off.
The LED on the chassis stops flashing.
Note Creating a disk zoning policy from the existing inventory is supported only on Cisco UCS S3260 chassis.
Procedure
Procedure
Step 6 (Optional) Click the link in the Affected Object column to view the properties of that adapter.
Step 7 Click OK to close the POST Results dialog box.
Acknowledging an IO Module
Cisco UCS Manager Release 2.2(4) introduces the ability to acknowledge a specific IO module in a chassis.
Note • After adding or removing physical links between Fabric Interconnect and IO Module, an
acknowledgement of the IO Module is required to properly configure the connection.
• The ability to re-acknowledge each IO Module individually allows you to rebuild the network
connectivity between a single IO Module and its parent Fabric Interconnect without disrupting
production traffic on the other Fabric Interconnect.
Procedure
Procedure
Name Description
Health Qualifier field Comma-separated names of all the health events that
are triggered for the component.
Health Severity field Highest severity of all the health events that are
triggered for the component. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Name Description
Severity column Severity of the health event. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Procedure
SIOC Removal
Do the following to remove an SIOC from the system:
1. Shut down and remove power from the entire chassis. You must disconnect all power cords to completely
remove power.
2. Disconnect the cables connecting the SIOC to the system.
3. Remove the SIOC from the system.
SIOC Replacement
Do the following to remove an SIOC from the system and replace it with another SIOC:
1. Shut down and remove power from the entire chassis. You must disconnect all power cords to completely
remove power.
2. Disconnect the cables connecting the SIOC to the system.
3. Remove the SIOC from the system.
4. Connect the new SIOC to the system.
5. Connect the cables to the SIOC.
6. Connect power cords and then power on the system.
7. Acknowledge the new SIOC.
The server connected to the replaced SIOC is rediscovered.
Note If the firmware of the replaced SIOC is not the same version as the peer SIOC, then it is recommended
to update the firmware of the replaced SIOC by re-triggering chassis profile association.
Acknowledging an SIOC
Cisco UCS Manager has the ability to acknowledge a specific SIOC in a chassis. Perform the following
procedure when you replace an SIOC in a chassis.
Caution This operation rebuilds the network connectivity between the SIOC and the fabric interconnects to which
it is connected. The server corresponding to this SIOC becomes unreachable, and traffic is disrupted.
NVMe slot-1 in SIOC is mapped to server 1 and NVMe slot-2 to server 2. Cisco UCS Manager triggers
rediscovery on both the servers since SIOC has NVMe mapped to both the servers.
Procedure
Procedure
• Beginning with 4.0(1), Secure boot operational state is Enabled by default and is not user configurable.
The option is grayed out.
You can use Policy Driven Chassis Group Power Cap, or Manual Blade Level Power Cap methods to allocate
power that applies to all of the servers in a chassis.
Cisco UCS Manager provides the following power management policies to help you allocate power to your
servers:
Power Control Policies: Specifies the priority used to calculate the initial power allocation for each blade in
a chassis.
Global Power Allocation: Specifies the Policy Driven Chassis Group Power Cap or the Manual Blade Level
Power Cap to apply to all servers in a chassis.
Global Power Profiling: Specifies how the power cap values of the servers are calculated. If it is enabled,
the servers are profiled during discovery through benchmarking. This policy applies when the Global Power
Allocation Policy is set to Policy Driven Chassis Group Cap.
For more information about power supply redundancy, see Cisco UCS 5108 Server Chassis Hardware
Installation Guide.
In addition to power supply redundancy, you can also choose to enable a Power Save Policy from the Power
Save Policy area. For more information, see Power Save Mode Policy, on page 71 .
Note This table is valid if there are four PSUs installed in the chassis.
Note The system reserves enough power to boot a server in each slot, even if that slot is empty. This reserved
power cannot be leveraged by servers requiring more power. Blades that fail to comply with the power
cap are penalized.
When power capping is in effect, active blades within a chassis can borrow power from idle blades within
the same chassis. If all blades are active and reach the power cap, service profiles with higher priority power
control policies take precedence over service profiles with lower priority power control policies.
Priority is ranked on a scale of 1-10, where 1 indicates the highest priority and 10 indicates lowest priority.
The default priority is 5.
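The effect of the priority scale can be illustrated with a simplified allocator: when demand exceeds the chassis budget, lower-priority (higher-numbered) blades are trimmed first. This sketch is conceptual and is not Cisco's rebalancing algorithm:

```python
def allocate(budget: float, demands: dict[str, tuple[int, float]]) -> dict[str, float]:
    """Grant power in priority order (1 = highest, 10 = lowest).

    demands maps blade name -> (priority, requested watts). Blades are
    satisfied from highest to lowest priority until the budget runs out.
    """
    alloc, remaining = {}, budget
    for blade, (prio, watts) in sorted(demands.items(), key=lambda kv: kv[1][0]):
        grant = min(watts, remaining)
        alloc[blade] = grant
        remaining -= grant
    return alloc

blades = {"blade1": (1, 400.0), "blade2": (5, 400.0), "blade3": (10, 400.0)}
print(allocate(1000.0, blades))  # blade3, the lowest priority, is capped first
```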
Starting with Cisco UCS Manager 3.2(2), chassis dynamic power rebalance mechanism is enabled by default.
The mechanism continuously monitors the power usage of the blade servers and adjusts the power allocation
accordingly. Chassis dynamic power rebalance mechanism operates within the overall chassis power budget
set by Cisco UCS Manager, which is calculated from the available PSU power and Group power.
For mission-critical applications, a special priority called no-cap is also available. Setting the priority to
no-cap does not guarantee that a blade server gets maximum power all the time; however, it prioritizes the
blade server over other servers during chassis dynamic power rebalance budget allocations.
Note If all the blade servers are set with no-cap priority and all of them run high power consuming loads, then
there is a chance that some of the blade servers get capped under high power usage, based on the power
distribution done through dynamic balance.
Global Power Control Policy options are inherited by all the chassis managed by the Cisco UCS Manager.
Starting with Cisco UCS Manager 4.1(3), a global policy called Power Save Mode is available. It is disabled
by default, meaning that all PSUs present remain active regardless of the power redundancy policy selection.
Enabling the policy restores the older behavior.
Starting with Cisco UCS Manager 4.1(2), the power control policy is also used for regulating fans in Cisco
UCS C220 M5 and C240 M5 rack servers in acoustically-sensitive environments. The Acoustic setting for
these fans is only available on these servers. On C240 SD M5 rack servers, Acoustic mode is the default mode.
Starting with Cisco UCS Manager 4.2(1), the power control policy is also used for regulating cooling in
potentially high-temperature environments. This option is only available with Cisco UCS C220 M6, C240
M6, C225 M6, and C245 M6 rack servers and can be used with any fan speed option.
Note You must include the power control policy in a service profile and that service profile must be associated
with a server for it to take effect.
Step 4 Right-click Power Control Policies and choose Create Power Control Policy.
Step 5 In the Create Power Control Policy dialog box, complete the following fields:
Name Description
Note For Cisco UCS C125 M5 Servers, ensure that you select the same Fan Speed Policy for all the servers
in an enclosure. Cisco UCS Manager applies the Fan Speed Policy of the server that is associated last.
Having the same Fan Speed Policy for all the servers ensures that the desired Fan Speed Policy is applied
irrespective of which server is associated last.
Name Description
On Cisco UCS C240 SD M5, C220 M6, C240 M6, C225 M6, and C245 M6 servers, Acoustic mode is the
default mode. On all other platforms, Low Power mode is the default mode.
Name Description
Power Capping field What happens to a server when the demand for power
within a power group exceeds the power supply. This
can be one of the following:
• No Cap—The server runs at full capacity
regardless of the power requirements of the other
servers in its power group.
Note For Cisco UCS C-Series M5 and M6
servers, if you select No Cap in this
field, ensure that you do not select
Performance for Fan Speed Policy
field. Associating a service profile
with a server fails if you select
Performance for fan speed policy,
and No Cap for the power capping.
Priority field The priority the server has within its power group
when power capping is in effect.
Enter an integer between 1 and 10, where 1 is the
highest priority.
What to do next
Include the policy in a service profile or service profile template.
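If you script policy creation instead of using the dialog, a sketch like the following creates a comparable power control policy with the ucsmsdk SDK. The PowerPolicy import path and its prio property are assumptions to verify against your SDK version:

```python
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.power.PowerPolicy import PowerPolicy  # assumed class path

handle = UcsHandle("192.0.2.10", "admin", "password")  # placeholders
handle.login()
try:
    # Power control policy under org-root; prio "1" is highest, "no-cap" uncaps.
    mo = PowerPolicy(parent_mo_or_dn="org-root", name="db-high", prio="1")
    handle.add_mo(mo, modify_present=True)
    handle.commit()
finally:
    handle.logout()
```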
Note Today, when the requested power budget is less than the available power capacity, the additional PSU
capacity is placed in Power Save Mode automatically. This increases the efficiency of active PSUs and
minimizes energy wasted for conversion losses. However, there are a few use cases, where this default
behavior can result in an outage:
1. Lightly loaded chassis that only requires 2X PSU to support the requested power policy (Grid) and
the customer did NOT follow the installation guide recommendation regarding PSU input power
connections. In this scenario, the chassis has both active PSUs connected to one feed, and the other
two PSUs in Power Save mode connected to another feed. If the feed connected to the active PSUs
is lost, the entire chassis will experience a service interruption.
2. A heavily loaded chassis that requires a 3X PSUs to support the requested power policy (N+1), and
the customer's rack provides the chassis with dual feed. In this scenario, 3X PSUs are active and 1X
PSU is placed in Power Save mode. If the feed connected to two of the active PSUs is lost (planned
or unplanned), the customer could experience an outage if the load is greater than the remaining
active PSU can support.
Procedure
Step 4 Right-click Power Control Policies and choose Create Power Control Policy. Although these steps use the
Power Control menu, you are creating a Fan Policy, which is administered through these menus.
Step 5 In the Create Power Control Policy dialog box, complete the following fields:
Name Description
Fan Speed Policy drop-down Fan speed is for C-Series Rack servers only. Acoustic
mode is a fan policy available only on Cisco UCS
C220 M5, C240 M5, C240 SD M5, C220 M6, C240
M6, C225 M6, and C245 M6 Rack Servers.
Fan speed can be one of the following:
• Acoustic—The fan speed is reduced to reduce
noise levels in acoustic-sensitive environments.
The Acoustic option can result in short-term
throttling to achieve a lowered noise level.
Note For Cisco UCS C-Series M5 and M6
servers using Acoustic Mode, cap in
the Power Capping field is
automatically selected. Acoustic
Mode is the default fan speed policy
for C240 SD M5, C220 M6, C240
M6, C225 M6, and C245 M6 Rack
Servers.
Name Description
Power Capping field Power Capping occurs when the demand for power
within a power group exceeds the power supply. For
Cisco UCS C-Series M5 and M6 servers using
Acoustic Mode, cap in the Power Capping field is
automatically selected.
Note Acoustic Mode is the default fan speed
policy for C240 SD M5, C220 M6, C240
M6, C225 M6, and C245 M6 Rack Servers
and will automatically be selected, along
with the cap option.
Priority field The priority the server has within its power group
when power capping is in effect.
Enter an integer between 1 and 10, where 1 is the
highest priority. For Acoustic Mode, the default is 5.
What to do next
Include the policy in a service profile or service profile template.
The peak power cap is a static value that represents the maximum power available to all blade servers within
a given power group. If you add or remove a blade from a power group, but do not manually modify the peak
power value, the power group adjusts the peak power cap to accommodate the basic power-on requirements
of all blades within that power group.
A minimum of 890 AC watts should be set for each chassis. This converts to 800 watts of DC power, which
is the minimum amount of power required to power an empty chassis. To associate a half-width blade, the
group cap needs to be set to 1475 AC watts. For a full-width blade, it needs to be set to 2060 AC watts.
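The wattage figures above translate into a simple sizing check. The sketch below uses only the numbers quoted in this section and is a rough rule of thumb, not a substitute for a proper power calculation:

```python
# Figures quoted above: 890 W AC per empty chassis (about 800 W DC),
# a 1475 W AC group cap to associate a half-width blade, and
# a 2060 W AC group cap for a full-width blade.
EMPTY_CHASSIS_AC = 890
HALF_WIDTH_FLOOR_AC = 1475
FULL_WIDTH_FLOOR_AC = 2060

def min_group_cap(has_half_width: bool, has_full_width: bool) -> int:
    """Minimum AC group cap for one chassis, per the floors quoted above."""
    cap = EMPTY_CHASSIS_AC
    if has_half_width:
        cap = max(cap, HALF_WIDTH_FLOOR_AC)
    if has_full_width:
        cap = max(cap, FULL_WIDTH_FLOOR_AC)
    return cap

print(min_group_cap(has_half_width=True, has_full_width=False))  # 1475
```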
After a chassis is added to a power group, all service profiles associated with the blades in the chassis become
part of that power group. Similarly, if you add a new blade to a chassis, that blade inherently becomes part
of the chassis' power group.
Note Creating a power group is not the same as creating a server pool. However, you can populate a server
pool with members of the same power group by creating a power qualifier and adding it to server pool
policy.
When a chassis is removed or deleted, the chassis gets removed from the power group.
UCS Manager supports explicit and implicit power groups.
• Explicit: You can create a power group, add chassis and racks, and assign a budget for the group.
• Implicit: Ensures that the chassis is always protected by limiting the power consumption within safe
limits. By default, all chassis that are not part of an explicit power group are assigned to the default group
and the appropriate caps are placed. New chassis that connect to UCS Manager are added to the default
power group until you move them to a different power group.
The following table describes the error messages you might encounter while assigning power budget and
working with power groups.
Message: P-State lowered as consumption hit power cap for server
Explanation: Displays when the server is capped to reduce the power consumption below the allocated power.
Recommended Action: This is an information message. If a server should not be capped, set the value of the
power control policy Power Capping field to no-cap in the service profile.

Message: Chassis N has a mix of high-line and low-line PSU input power sources.
Explanation: This fault is raised when a chassis has a mix of high-line and low-line PSU input sources
connected.
Recommended Action: This is an unsupported configuration. All PSUs must be connected to similar power
sources.
Procedure
Step 6 On the first page of the Create Power Group wizard, complete the following fields:
a) Enter a unique name and description for the power group.
This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special
characters other than - (hyphen), _ (underscore), : (colon), and . (period), and you cannot change this name
after the object is saved. (A validation sketch follows this procedure.)
b) Click Next.
Step 7 On the Add Chassis Members page of the Create Power Group wizard, do the following:
a) In the Chassis table, choose one or more chassis to include in the power group.
b) Click the >> button to add the chassis to the Selected Chassis table that displays all chassis included in
the power group.
You can use the << button to remove one or more chassis from the power group.
c) Click Next.
Step 8 On the Add Rack Members page of the Create Power Group wizard, do the following:
a) In the Rack Unit table, choose one or more rack units to include in the power group.
b) Click the >> button to add the rack to the Selected Rack Unit table that displays all racks included in the
power group.
You can use the << button to remove one or more rack units from the power group.
c) Click Next.
Step 9 On the Add FEX Members page of the Create Power Group wizard, do the following:
a) In the FEX table, choose one or more FEX to include in the power group.
b) Click the >> button to add the FEX to the Selected FEX table that displays all FEX included in the
power group.
You can use the << button to remove one or more FEX from the power group.
c) Click Next.
Step 10 On the Add FI Members page of the Create Power Group wizard, do the following:
a) In the FI table, choose one or more FI to include in the power group.
b) Click the >> button to add the FI to the Selected FI table that displays all FIs included in the power
group.
You can use the << button to remove one or more FI from the power group.
c) Click Next.
Step 11 On the Power Group Attributes page of the Create Power Group wizard, do the following:
a) Complete the following fields:
Name    Description
Input Power (W) field    The maximum peak power (in watts) available to the power group. Enter an integer between 0 and 10000000.
Recommended value for Input Power field    The recommended range of input power values for all the members of the power group.
b) Click Finish.
Note B480 M5 systems using 256GB DIMMs must have a manual blade-level power cap of 1300 W.
• Unbounded—No power usage limitations are imposed on the server. The server can use as much power
as it requires.
If the server encounters a spike in power usage that meets or exceeds the maximum configured for the server,
Cisco UCS Manager does not disconnect or shut down the server. Instead, Cisco UCS Manager reduces the
power that is made available to the server. This reduction can slow down the server, including a reduction in
CPU speed.
Note If you configure the manual blade-level power cap using Equipment > Policies > Global Policies >
Global Power Allocation Policy, the priority set in the Power Control Policy is no longer relevant.
Procedure
Name Description
Admin Status field Whether this server is power capped. This can be one of the following:
• Unbounded—The server is not power capped under any
circumstances.
• Enabled—The Cisco UCS Manager GUI displays the Watts
field.
Note Manual blade level power capping will limit the power
consumption of a single system, regardless of available
power in the chassis.
Watts field The maximum number of watts that the server can use if there is not
enough power available to the chassis to meet the demand.
The value range is from 0 to 10000000.
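A minimal sketch (assumed names, not a UCS API) of how these two settings combine, including the note that a manual cap applies regardless of available chassis power:

# Minimal sketch (assumed names, not a UCS API) of the Admin Status
# semantics above: a manual cap limits a single system regardless of
# available chassis power; unbounded servers are never capped.
from typing import Optional

def effective_cap(admin_status: str, watts: int) -> Optional[int]:
    """Return the cap in watts, or None when the server is unbounded."""
    if admin_status == "unbounded":
        return None            # never capped, under any circumstances
    if admin_status == "enabled":
        return watts           # applied even when chassis power is ample
    raise ValueError(f"unknown Admin Status: {admin_status!r}")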
Step 5 If necessary, expand the Motherboards node to view the power counters.
Procedure
Note After enabling the Global Power Profiling Policy, you must re-acknowledge the blades to obtain the
minimum and maximum power cap.
Important Any change to the Manual Blade level Power Cap configuration results in the loss of any groups or
configuration options set for the Policy Driven Chassis Group Power Cap.
By default, power allocation is done for each chassis through a power control policy.
Note When the power budget that was allocated to the blade is reclaimed, the allocated power displays as 0
Watts.
Limitation
If you power on a blade outside of Cisco UCS Manager and there is not enough power available for
allocation, the following fault is raised:
Power cap application failed for server x/y
Note If the priority of an associated blade is changed to no-cap and the maximum power cap cannot be
allocated, you might see one of the following faults:
• PSU-insufficient—There is not enough available power for the PSU.
• Group-cap-insufficient—The group cap value is not sufficient for the blade.
Event    Preferred Power State    Actual Power State Before Event    Actual Power State After Event
Shallow Association    ON    ON    ON
Step 4 Right-click Power Sync Policies and choose Create Power Sync Policy.
Step 5 In the Create Power Sync Policy dialog box, complete the following fields:
Name    Description
Sync-Option field The options that allow you to synchronize the desired
power state of the associated service profile to the
physical server. This can be one of the following:
• Default Sync—After the initial server
association, any configuration change or
management connectivity changes that you
perform trigger a server reassociation. This
option synchronizes the desired power state to
the physical server if the physical server power
state is off and the desired power state is on. This
is the default behavior.
• Always Sync—When the initial server
association or the server reassociation occurs,
this option synchronizes the desired power state
to the physical power state, even if the physical
server power state is on and desired power state
is off.
• Initial Only Sync—This option only
synchronizes the power to a server when a
service profile is associated to the server for the
first time, or when the server is re-commissioned.
When you set this option, resetting the power
state from the physical server side does not affect
the desired power state on the service profile.
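The three Sync-Option behaviors above reduce to a small decision rule made at (re)association time. Here is a minimal sketch (assumed names, not a UCS API):

# Minimal sketch (assumed names, not a UCS API) of the Sync-Option
# decision made at (re)association time, per the descriptions above.
def should_push_desired_state(option: str, desired_on: bool,
                              physical_on: bool,
                              first_association: bool) -> bool:
    """True if the service profile's desired power state is applied."""
    if option == "default-sync":
        # Push only when the server is off and the profile wants it on.
        return desired_on and not physical_on
    if option == "always-sync":
        # Push whenever the two states differ, even to power a server off.
        return desired_on != physical_on
    if option == "initial-only-sync":
        # Push only on first association or when the server is re-commissioned.
        return first_association and (desired_on != physical_on)
    raise ValueError(f"unknown sync option: {option!r}")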
What to do next
Include the policy in a service profile or service profile template.
Note Only servers added to a server pool automatically during discovery are removed automatically. Servers
that were manually added to a server pool must be removed manually.
To add a removed blade server back to the configuration, it must be reconnected, then rediscovered. When a
server is reintroduced to Cisco UCS Manager, it is treated as a new server and is subject to the deep discovery
process. For this reason, it is possible for Cisco UCS Manager to assign the server a new ID that might be
different from the ID that it held before.
Important Do not use any of the following options on an associated server that is currently powered off:
• Reset in the GUI
• cycle cycle-immediate or reset hard-reset-immediate in the CLI
• The physical Power or Reset buttons on the server
If you reset, cycle, or use the physical power buttons on a server that is currently powered off, the server's
actual power state might become out of sync with the desired power state setting in the service profile. If the
communication between the server and Cisco UCS Manager is disrupted or if the service profile configuration
changes, Cisco UCS Manager might apply the desired power state from the service profile to the server,
causing an unexpected power change.
Power synchronization issues can lead to an unexpected server restart, as shown below:
Desired Power State in Service Profile    Current Server Power State    Server Power State After Communication Is Disrupted
Procedure
After the server boots, the Overall Status field on the General tab displays an OK status.
Step 4 Choose the service profile that requires the associated server to boot.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Boot Server.
Step 7 If a confirmation dialog box displays, click Yes.
Step 8 Click OK in the Boot Server dialog box.
After the server boots, the Overall Status field on the General tab displays an ok status or an up status.
Tip You can also view the boot order tabs from the General tab of the service profile associated with a
server.
Procedure
Note When a blade server that is associated with a service profile is shut down, the VIF down alerts F0283
and F0479 are automatically suppressed.
Procedure
After the server has been successfully shut down, the Overall Status field on the General tab displays a
power-off status.
Procedure
Step 4 Choose the service profile that requires the associated server to shut down.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Shutdown Server.
After the server successfully shuts down, the Overall Status field on the General tab displays a down status
or a power-off status.
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers becomes out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
Procedure
The reset may take several minutes to complete. After the server has been reset, the Overall Status field on
the General tab displays an ok status.
Perform the following procedure to reset the server to factory default settings.
Procedure
Cisco UCS Manager resets the server to its factory default settings.
Procedure
Step 7 Go to the physical location of the chassis and remove the server hardware from the slot.
For instructions on how to remove the server hardware, see the Cisco UCS Hardware Installation Guide for
your chassis.
What to do next
If you physically re-install the blade server, you must re-acknowledge the slot for the Cisco UCS Manager to
rediscover the server.
For more information, see Reacknowledging a Server Slot in a Chassis, on page 98.
Procedure
What to do next
If you physically re-install the blade server, you must re-acknowledge the slot for the Cisco UCS Manager to
rediscover the server.
For more information, see Reacknowledging a Server Slot in a Chassis, on page 98.
After decommissioning the blade server, you must wait a few minutes before initiating the recommissioning
of the server.
For more information, see Recommissioning a Blade Server, on page 98.
Procedure
Procedure
Procedure
Option    Description
The here link in the Situation area    Click this link, then click Yes in the confirmation dialog box. Cisco UCS Manager reacknowledges the slot and discovers the server in the slot.
OK    Click this button if you want to proceed to the General tab. You can use the Reacknowledge Slot link in the Actions area to have Cisco UCS Manager reacknowledge the slot and discover the server in the slot.
Procedure
Procedure
Procedure
Procedure
Caution Clearing TPM is a potentially hazardous operation. The OS may stop booting. You may also see loss
of data.
Procedure
Procedure
Procedure
Step 6 (Optional) Click the link in the Affected Object column to view the properties of that adapter.
Step 7 Click OK to close the POST Results dialog box.
Procedure
Name    Description
Health Qualifier field Comma-separated names of all the health events that
are triggered for the component.
Health Severity field Highest severity of all the health events that are
triggered for the component. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Severity column Severity of the health event. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Name Description
Severity column The severity of the alarm. This can be one of the following:
• Critical—The blade health LED is blinking amber. This is indicated
with a red dot.
• Minor—The blade health LED is amber. This is indicated with an
orange dot.
Sensor Name column The name of the sensor that triggered the alarm.
Step 6 Click OK to close the View Health LED Alarms dialog box.
Smart SSD
Beginning with release 3.1(3), Cisco UCS Manager supports monitoring SSD health. This feature is called
Smart SSD. It provides statistical information about properties such as wear status in days and percentage
life remaining. For every property, a minimum, a maximum, and an average value are recorded and
displayed. The feature also allows you to set threshold limits for the properties.
Note The Smart SSD feature is supported only for a selected range of SSDs. It is not supported for any HDDs.
Step 1 Navigate to Equipment > Rack-Mounts > Servers > Server Number > Inventory > Storage.
Step 2 Click the controller component for which you want to view the SSD health.
Step 3 In the Work pane, click the Statistics tab.
Step 4 Click the SSD for which you want to view the health properties.
You can view the values for the following properties (see the sketch after this list for reading them programmatically):
• PercentageLifeLeft: Displays the remaining life of the SSD so that action can be taken when required.
• PowerCycleCount: Displays the number of times the SSD has been power cycled across server reboots.
• PowerOnHours: Displays the duration for which the SSD has been powered on. You can replace or turn off
the SSD based on this value.
Note If there is a change in any other property, the updated PowerOnHours is displayed.
• WearStatusInDays: Provides guidance about SSD wear based on the workload characteristics run
at that time.
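If you prefer to pull these values programmatically, the Cisco UCS Manager Python SDK (ucsmsdk) can query statistics objects. This is a hedged sketch: the class name StorageSsdHealthStats and its attribute names are assumptions inferred from the properties listed above; verify them against the object model for your UCS Manager release.

# Hedged sketch using the Cisco UCS Manager Python SDK (ucsmsdk).
# The class name StorageSsdHealthStats and the attribute names are
# assumptions based on the properties above; confirm them in the
# object model documentation for your UCS Manager release.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()
try:
    for stats in handle.query_classid("StorageSsdHealthStats"):
        print(stats.dn,
              "life-left:", stats.percentage_life_left,
              "power-on-hours:", stats.power_on_hours,
              "wear-days:", stats.wear_status_in_days)
finally:
    handle.logout()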
Tip For information on how to integrate a supported Cisco UCS rack-mount server with Cisco UCS Manager,
see the Cisco UCS C-series server integration guide or Cisco UCS S-series server integration guide for
your Cisco UCS Manager release.
Cisco UCS C125 M5 Servers can be managed the same way as other rack servers from Rack Enclosure
rack_enclosure_number.
Note Cisco UCS C125 M5 Servers support Cisco UCS 6500 Series Fabric Interconnects, Cisco UCS 6400
Series Fabric Interconnects, and 6300 Series Fabric Interconnects.
Note Only those servers added to a server pool automatically during discovery will be removed automatically.
Servers that have been manually added to a server pool have to be removed manually.
If you need to add a removed rack-mount server back to the configuration, it must be reconnected and then
rediscovered. When a server is reintroduced to Cisco UCS Manager, it is treated like a new server and is subject
to the deep discovery process. For this reason, it is possible that Cisco UCS Manager will assign the server a
new ID that may be different from the ID that it held before.
Important Do not use any of the following options on an associated server that is currently powered off:
• Reset in the GUI
• cycle cycle-immediate or reset hard-reset-immediate in the CLI
• The physical Power or Reset buttons on the server
If you reset, cycle, or use the physical power buttons on a server that is currently powered off, the server's
actual power state might become out of sync with the desired power state setting in the service profile. If the
communication between the server and Cisco UCS Manager is disrupted or if the service profile configuration
changes, Cisco UCS Manager might apply the desired power state from the service profile to the server,
causing an unexpected power change.
Power synchronization issues can lead to an unexpected server restart, as shown below:
Desired Power State in Service Profile    Current Server Power State    Server Power State After Communication Is Disrupted
Procedure
After the server boots, the Overall Status field on the General tab displays an OK status.
Step 4 Choose the service profile that requires the associated server to boot.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Boot Server.
Step 7 If a confirmation dialog box displays, click Yes.
Step 8 Click OK in the Boot Server dialog box.
After the server boots, the Overall Status field on the General tab displays an ok status or an up status.
Tip You can also view the boot order tabs from the General tab of the service profile associated with a
server.
Procedure
Step 3 Click the server for which you want to determine the boot order.
Step 4 In the Work pane, click the General tab.
Step 5 If the Boot Order Details area is not expanded, click the Expand icon to the right of the heading.
Step 6 To view the boot order assigned to the server, click the Configured Boot Order tab.
Step 7 To view what will boot from the various devices in the physical server configuration, click the Actual Boot
Order tab.
Note The Actual Boot Order tab always shows "Internal EFI Shell" at the bottom of the boot order list.
Procedure
After the server has been successfully shut down, the Overall Status field on the General tab displays a
power-off status.
Procedure
Step 4 Choose the service profile that requires the associated server to shut down.
Step 5 In the Work pane, click the General tab.
After the server successfully shuts down, the Overall Status field on the General tab displays a down status
or a power-off status.
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers becomes out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
Procedure
The reset may take several minutes to complete. After the server is reset, the Overall Status field on the
General tab displays an ok status.
Perform the following procedure to reset the server to factory default settings.
Procedure
Step 3 Choose the server that you want to reset to its factory default settings.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Server Maintenance.
Step 6 In the Maintenance dialog box, click Reset to Factory Default, then click OK.
Step 7 From the Maintenance Server dialog box that appears, select the appropriate options:
• To delete all storage, check the Scrub Storage checkbox.
• To place all disks into their initial state after deleting all storage, check the Create Initial Volumes
checkbox.
You can check this checkbox only if you check the Scrub Storage checkbox. For servers that support
JBOD, the disks will be placed in a JBOD state. For servers that do not support JBOD, each disk will be
initialized with a single R0 volume that occupies all the space in the disk.
Important Do not check the Create Initial Volumes checkbox if you want to use storage profiles. Creating
initial volumes when you are using storage profiles may result in configuration errors.
Cisco UCS Manager resets the server to its factory default settings.
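The interaction of the Scrub Storage and Create Initial Volumes options can be summarized with a minimal sketch (assumed names, ours, not a UCS API):

# Minimal sketch (assumed names, not a UCS API) of the disk state that
# results from the Maintenance dialog options described above.
def post_reset_disk_state(scrub_storage: bool,
                          create_initial_volumes: bool,
                          supports_jbod: bool) -> str:
    if not scrub_storage:
        return "storage retained"
    if not create_initial_volumes:
        # Create Initial Volumes is selectable only with Scrub Storage.
        return "storage deleted; disks left uninitialized"
    return "JBOD" if supports_jbod else "single R0 volume using the full disk"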
Procedure
Cisco UCS Manager disconnects the server, then builds the connections between the server and the fabric
interconnect or fabric interconnects in the system. The acknowledgment may take several minutes to complete.
After the server is acknowledged, the Overall Status field on the General tab displays an OK status.
Procedure
Note When you decommission the last Cisco UCS C125 M5 Server from a Rack Enclosure, Cisco UCS
Manager removes the complete Rack Enclosure rack_enclosure_number entry from the navigation
pane.
What to do next
After decommissioning the rack-mount server, you must wait a few minutes before initiating the recommissioning
of the server.
For more information, see Recommissioning a Rack-Mount Server, on page 119.
Procedure
Procedure
Note For Cisco UCS C125 M5 Servers, expand Equipment > Rack Mounts > Enclosures > Rack
Enclosure rack_enclosure_number > Servers.
Step 3 Expand the Servers node and verify that it does not include the following:
• The rack-mount server you want to renumber
• A rack-mount server with the number you want to use
If either of these servers is listed in the Servers node, decommission those servers. You must wait until the
decommission FSM is complete and the servers are no longer listed in the node before continuing. This might take
several minutes.
Procedure
Step 3 Choose the server that you want to remove from the configuration database.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Server Maintenance.
Step 6 In the Maintenance dialog box, click Remove, then click OK.
Cisco UCS Manager removes all data about the server from its configuration database. The server slot is now
available for you to insert new server hardware.
Step 3 Choose the server for which you want to turn the locator LED on or off.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click one of the following:
• Turn on Locator LED
• Turn off Locator LED
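Programmatically, the same toggle is typically done by setting the admin state of the locator-LED object. A hedged ucsmsdk sketch follows; the DN path and the admin_state values are assumptions, so verify them against the object model for your release.

# Hedged sketch using the Cisco UCS Manager Python SDK (ucsmsdk).
# The locator-LED DN and the admin_state values are assumptions;
# verify them against the object model for your release.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()
try:
    led = handle.query_dn("sys/rack-unit-1/locator-led")  # assumed DN
    led.admin_state = "on"  # use "off" to turn the LED off
    handle.set_mo(led)
    handle.commit()
finally:
    handle.logout()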
Procedure
Step 3 Choose the server for which you want to turn the local disk locator LED on or off.
Step 4 In the Work pane, click the Inventory > Storage > Disks tabs.
The Storage Controller inventory appears.
Procedure
Step 3 Choose the server for which you want to reset the CMOS.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Recover Server.
Step 6 In the Recover Server dialog box, click Reset CMOS, then click OK.
Procedure
Step 3 Choose the server for which you want to reset the CIMC.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Recover Server.
Step 6 In the Recover Server dialog box, click Reset CIMC (Server Controller), then click OK.
Caution Clearing TPM is a potentially hazardous operation. The OS may stop booting. You may also see loss
of data.
Procedure
Step 3 Choose the server for which you want to clear TPM.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Recover Server.
Step 6 In the Recover Server dialog box, click Clear TPM, then click OK.
Procedure
Step 3 Choose the server for which you want to reset the BIOS password.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Recover Server.
Step 6 In the Recover Server dialog box, click Reset BIOS Password, then click OK.
Procedure
Step 3 Choose the server that you want to issue the NMI.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Server Maintenance.
Step 6 In the Maintenance dialog box, click Diagnostic Interrupt, then click OK.
Cisco UCS Manager sends an NMI to the BIOS or operating system.
Step 3 Choose the server for which you want to view health events.
Step 4 In the Work pane, click the Health tab.
The health events triggered for this server appear. The fields in this tab are:
Name Description
Health Qualifier field Comma-separated names of all the health events that
are triggered for the component.
Health Severity field Highest severity of all the health events that are
triggered for the component. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Severity column Severity of the health event. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Procedure
Step 3 Choose the server for which you want to view the POST results.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click View POST Results.
The POST Results dialog box lists the POST results for the server and its adapters.
Step 6 (Optional) Click the link in the Affected Object column to view the properties of that adapter.
Step 7 Click OK to close the POST Results dialog box.
Procedure
If a server slot in a chassis is empty, Cisco UCS Manager provides information, errors, and faults for that slot.
You can also re-acknowledge the slot to resolve server mismatch errors and rediscover the server in the slot.
Procedure
After the server boots, the Overall Status field on the General tab displays an OK status.
Booting a Cisco UCS S3260 Server Node from the Service Profile
Procedure
Step 4 Choose the service profile that requires the associated server to boot.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Boot Server.
Step 7 If a confirmation dialog box displays, click Yes.
Step 8 Click OK in the Boot Server dialog box.
After the server boots, the Overall Status field on the General tab displays an ok status or an up status.
Tip You can also view the boot order tabs from the General tab of the service profile associated with a
server.
Procedure
Procedure
After the server has been successfully shut down, the Overall Status field on the General tab displays a
power-off status.
Shutting Down a Cisco UCS S3260 Server Node from the Service
Profile
When you use this procedure to shut down a server with an installed operating system, Cisco UCS Manager
triggers the OS into a graceful shutdown sequence.
If the Shutdown Server link is dimmed in the Actions area, the server is not running.
Procedure
After the server successfully shuts down, the Overall Status field on the General tab displays a down status
or a power-off status.
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers becomes out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
Procedure
The reset may take several minutes to complete. After the server has been reset, the Overall Status field on
the General tab displays an ok status.
Perform the following procedure to reset the server to factory default settings.
Procedure
• To place all disks into their initial state after deleting all storage, check the Create Initial Volumes check
box.
You can check this check box only if you check the Scrub Storage check box. For servers that support
JBOD, the disks will be placed in a JBOD state. For servers that do not support JBOD, each disk will be
initialized with a single R0 volume that occupies all the space in the disk.
Important Do not check the Create Initial Volumes check box if you want to use storage profiles. Creating
initial volumes when you are using storage profiles may result in configuration errors.
Cisco UCS Manager resets the server to its factory default settings.
Procedure
Step 7 Go to the physical location of the chassis and remove the server hardware from the slot.
For instructions on how to remove the server hardware, see the Cisco UCS Hardware Installation Guide for
your chassis.
What to do next
If you physically reinstall the server, you must re-acknowledge the slot for Cisco UCS Manager to re-discover
the server.
Procedure
What to do next
• If you physically reinstall the server, you must re-acknowledge the slot for Cisco UCS Manager to
rediscover the server.
• After decommissioning the Cisco UCS S3260 server, you must wait a few minutes before initiating the
recommissioning of the server.
For more information, see Recommissioning a Cisco UCS S3260 Server Node, on page 136.
Procedure
Procedure
OK    Click this button if you want to proceed to the General tab. You can use the Reacknowledge Slot link in the Actions area to have Cisco UCS Manager reacknowledge the slot and discover the server in the slot.
Procedure
Turning the Locator LED for a Cisco UCS S3260 Server Node On
and Off
Procedure
Turning the Local Disk Locator LED on a Cisco UCS S3260 Server
Node On and Off
Before you begin
• Ensure that the disk is zoned. You cannot turn the locator LED on or off for disks that are not zoned.
• Ensure that the server on which the disk is located is powered on. If the server is off, you cannot turn
the local disk locator LED on or off.
Procedure
Procedure
Procedure
Procedure
Procedure
Viewing the POST Results for a Cisco UCS S3260 Server Node
You can view any errors collected during the Power On Self-Test process for a server and its adapters.
Procedure
Step 6 (Optional) Click the link in the Affected Object column to view the properties of that adapter.
Step 7 Click OK to close the POST Results dialog box.
Name Description
Health Qualifier field Comma-separated names of all the health events that
are triggered for the component.
Health Severity field Highest severity of all the health events that are
triggered for the component. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Severity column Severity of the health event. This can be one of the
following:
• critical
• major
• minor
• warning
• info
• cleared
Name Description
Severity column The severity of the alarm. This can be one of the
following:
• Critical - The server health LED blinks amber.
This is indicated with a red dot.
• Minor - The server health LED is amber. This is
indicated with an orange dot.
Sensor Name column The name of the sensor that triggered the alarm.
Step 6 Click OK to close the View Health LED Alarms dialog box.
Virtual Circuits
A virtual circuit or virtual path refers to the path that a frame takes from its source vNIC to its destination
virtual switch port (vEth), or from a source virtual switch port to its destination vNIC. Many possible
virtual circuits traverse a single physical cable. Cisco UCS Manager uses virtual network tags (VN-TAG)
to identify these virtual circuits and differentiate between them. The OS decides which virtual circuit a frame
must traverse based on a series of decisions.
In the server, the OS decides the Ethernet interface from which to send the frame.
Note During service profile configuration, you can select the fabric interconnect to be associated with a vNIC.
You can also choose whether fabric failover is enabled for the vNIC. If fabric failover is enabled, the
vNIC can access the second fabric interconnect when the default fabric interconnect is unavailable.
Cisco UCS Manager Server Management Guide provides more details about vNIC configuration during
service profile creation.
After the host vNIC is selected, the frame exits the selected vNIC and, through the host interface port (HIF),
enters the IOM to which the vNIC is pinned. The frame is then forwarded to the corresponding network
interface port (NIF) and then to the Fabric Interconnect to which the IOM is pinned.
The NIF is selected based on the number of physical connections between the IOM and the Fabric Interconnect,
and on the server ID from which the frame originated.
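As an illustration of that selection, a common static pinning rule maps the originating server slot onto the available IOM-to-FI links. This is a minimal sketch of such a rule; the modulo mapping is an illustrative assumption, and the exact rule is platform dependent:

# Minimal sketch of static HIF-to-NIF pinning. The modulo rule is an
# illustrative assumption; the exact mapping is platform dependent.
def pinned_nif(server_id: int, num_nif_links: int) -> int:
    """0-based index of the IOM uplink a server's traffic is pinned to."""
    return (server_id - 1) % num_nif_links

# Example: with 4 IOM-to-FI links, server 5 pins to uplink 0.
print(pinned_nif(5, 4))  # 0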
Virtual Interfaces
In a blade server environment, the number of vNICs and vHBAs configurable for a service profile is determined
by adapter capability and the amount of virtual interface (VIF) namespace available on the adapter. In Cisco
UCS, portions of VIF namespace are allotted in chunks called VIFs. Depending on your hardware, the maximum
number of VIFs is allocated on a predefined, per-port basis.
The maximum number of VIFs varies based on hardware capability and port connectivity. For each configured
vNIC or vHBA, one or two VIFs are allocated. Stand-alone vNICs and vHBAs use one VIF and failover
vNICs and vHBAs use two.
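That allocation rule is simple arithmetic; the sketch below (names ours) shows the VIF cost of a service profile's interface mix:

# Minimal sketch of the VIF cost rule above: one VIF per stand-alone
# vNIC/vHBA, two per failover-enabled vNIC/vHBA.
def vifs_required(standalone_ifs: int, failover_ifs: int) -> int:
    return standalone_ifs + 2 * failover_ifs

# Example: 4 stand-alone vNICs and 2 failover vHBAs consume 8 VIFs.
print(vifs_required(4, 2))  # 8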
The following variables affect the number of VIFs available to a blade server, and therefore, how many vNICs
and vHBAs you can configure for a service profile.
• Maximum number of VIFs supported on your fabric interconnect
• How the fabric interconnects are cabled
• If your fabric interconnect and IOM are configured in fabric port channel mode
For more information about the maximum number of VIFs supported by your hardware configuration, see
the appropriate Cisco UCS Configuration Limits for Cisco UCS Manager for your software release.
If you change your configuration in a way that decreases the number of VIFs available to a blade, UCS
Manager displays a warning and asks whether you want to proceed. This applies in several scenarios,
including cases where adding or moving a connection decreases the number of VIFs.
Important VM-FEX is not supported with Cisco UCS 6400 Series Fabric Interconnects.
VIC adapters support VM-FEX to provide hardware-based switching of traffic to and from virtual machine
interfaces.
Important Remove all attached or mapped USB storage from a server before you attempt to recover the corrupt
BIOS on that server. If an external USB drive is attached or mapped from vMedia to the server, BIOS
recovery fails.
Procedure
b) Click OK.
Step 7 If a confirmation dialog box displays, click Yes.
Name    Description
Version To Be Activated drop-down list    Choose the firmware version to activate from the drop-down list.
b) Click OK.
Important Remove all attached or mapped USB storage from a server before you attempt to recover the corrupt
BIOS on that server. If an external USB drive is attached or mapped from vMedia to the server, BIOS
recovery fails.
Procedure
Step 3 Choose the server for which you want to recover the BIOS.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Recover Server.
Step 6 In the Recover Server dialog box, click Recover Corrupt BIOS, then click OK.
Step 7 If a confirmation dialog box displays, click Yes.
Step 8 In the Recover Corrupt BIOS dialog box, specify the version to be activated, then click OK.