
HMC

What Is an HMC?

A Hardware Management Console (HMC) is simply a desktop or rack-mounted computer, very
similar to the kind that most of us use every day. What makes an HMC different from other
personal computers is that the HMC is connected to other computer systems.

You will use the HMC to manage the configuration and operation of partitions in a system, as well
as add and remove hardware without interrupting system operation. With an HMC, you can
control Capacity on Demand resources. One HMC is capable of controlling multiple servers. The
systems that are monitored by the HMC are called managed systems.

The HMC provides a graphical user interface (GUI) for configuring and operating single or
multiple managed systems. There is also a command line interface (CLI).
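
For example, the lssyscfg command covered later in this course can list every system managed by
the HMC; a minimal sketch of its simplest form:

lssyscfg -r sys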

The HMC may connect locally to the systems it manages via a private network or connect
remotely via an open network. Using the Web-based System Manager Remote Client, you may
connect to the HMC from your PC.

In this course, we are concerned with the HMC as it relates to the IBM POWER5 Servers. There
are five HMC models available for use with POWER5 servers. They are the 7310-C03, 7310-C04,
and 7310-C05 desktop models and the 7310-CR2 and 7310-CR3 rack-mounted models.

What Does an HMC Do?

The HMC uses its connections to managed systems to perform various functions, including:

• Creating and maintaining a multiple-partitioned environment.
• Managing Capacity on Demand resources.
• Displaying a virtual operating system session terminal for each partition.
• Displaying virtual operator panel values for each partition.
• Detecting, reporting, and storing changes in hardware conditions.
• Powering managed systems on and off.
• Starting, stopping, and resetting logical partitions.
• Acting as a service focal point for service representatives to determine an appropriate
service strategy and enable the Service Agent Call-Home capability.

IBM 7310-C05 Desktop HMC

The IBM 7310-C05 Desktop Hardware Management Console (HMC) is a dedicated workstation
that allows you to configure and manage partitions and Capacity on Demand on POWER5
servers. An integrated hardware management application helps you configure and partition the
server through a GUI. In addition, operating system console support is provided for i5/OS using a
5250 programming interface. This can potentially save the cost of a separate IBM eServer i5
console.

The 7310 HMC helps you manage LPAR configurations to:

• Create and store LPAR profiles that define the processor, memory, and I/O resources
allocated to an individual partition
• Start, stop, and reset a system partition
• Boot a partition or system by selecting a profile
• Display system and partition status
• Display a virtual operator panel of the contents for each partition or controlled system

The IBM eServer p5 590 and 595 and eServer i5 595 must have access to an HMC. An HMC is
required to manage any POWER5 server when Capacity Upgrade on Demand is implemented.

The HMC offers a service focal point for the systems it controls. It is connected to a dedicated
port on the service processor of the POWER5 system via an Ethernet connection. The Ethernet
connections from HMC to eServer p5 590 and 595 and eServer i5 595 must be via private LAN,
whereas for other POWER5 systems it can be either a private or a public LAN. Tools are available
for determining problems and providing service support, such as call-home and error log
notification, through a modem or the Internet.

Multiple POWER5 processor-based servers can be supported by one HMC, whether it is attached
locally or remotely.

IBM 7310-CR3 Rack-Mounted HMC

The IBM 7310-CR3 Rack-mounted Hardware Management Console (HMC) is a dedicated
workstation that allows you to configure and manage partitions and Capacity on Demand on
POWER5 servers. An integrated hardware management application helps you configure and
partition the server through a GUI. In addition, operating system console support is provided for
i5/OS using a 5250 programming interface. This can potentially save the cost of a separate IBM
eServer i5 console.

The 7310 HMC helps you manage LPAR configurations to:

• Create and store LPAR profiles that define the processor, memory, and I/O resources
allocated to an individual partition
• Start, stop, and reset a system partition
• Boot a partition or system by selecting a profile
• Display system and partition status
• Display a virtual operator panel of the contents for each partition or controlled system

The IBM eServer p5 590 and 595 and eServer i5 595 must have access to an HMC. An HMC is
required to manage any POWER5 server when Capacity Upgrade on Demand is implemented.

The HMC offers a service focal point for the systems it controls. It is connected to a dedicated
port on the service processor of the POWER5 system via an Ethernet connection. The Ethernet
connections from HMC to eServer p5 590 and 595 and eServer i5 595 must be via private LAN,
whereas for other POWER5 systems it can be either a private or a public LAN. Tools are available
for determining problems and providing service support, such as call-home and error log
notification, through a modem or the Internet.

Multiple POWER5 processor-based servers can be supported by one HMC, whether it is attached
locally or remotely.

Hardware Management Console Models for POWER5 Servers

Desktop HMC
Model      POWER5 Models Supported
7310-C03   520, 550, 570, 575, 590, 595, 720
7310-C04   520, 550, 570, 575, 590, 595, 720
7310-C05   520, 550, 570, 575, 590, 595, 720

Rack-Mounted HMC
Model      POWER5 Models Supported
7310-CR2   520, 550, 570, 575, 590, 595, 720
7315-CR2   Shipped with POWER4 code; may be migrated to support POWER5 systems.
7310-CR3   520, 550, 570, 575, 590, 595, 720

The HMC models currently being shipped for POWER5 servers are the desktop model 7310-C04
and the rack-mounted model 7310-CR3. Earlier HMC models shipped with POWER4 code may
be migrated to support POWER5 systems. However, it is not possible to connect an HMC to both
a POWER4 and a POWER5 system simultaneously.
The HMC GUI at a Glance

The various elements of the HMC's graphical user interface (GUI) are covered in depth when
discussing the relevant tasks that the HMC can accomplish. For now, though, we'll provide a
quick overview.

The HMC screen is divided into two areas:

Navigation Area
The left side of the HMC GUI is the Navigation area. It displays a hierarchy of items
ordered in a tree structure. The root of the tree is the Management Environment, which is
a set of host systems that can be managed from the HMC.
Contents Area
The right side of the panel is the Contents area. It displays managed objects and related
tasks. You can choose different views in the Contents area: large icons, small icons, or
details in the form of a list.

In addition to the two screen areas, the HMC GUI has these elements:

Menu Bar
The menu bar, located at the top of the screen, contains these items:

• The Console menu contains choices that control the console.
• The Host menu lets you search within a selected category. This name
changes according to your selection within the Navigation area.
• The Selected menu contains actions that apply to the object currently
selected in the Contents area.
• The View menu contains choices for navigating, such as Back, Forward,
and Up One Level.
• The Window menu contains actions for managing sub-panels in the
console workspace.
• The Help menu lists user assistance choices.

Tool Bar
The tool bar, located directly below the menu bar, has icons for commonly-used actions
such as powering on/off a managed system, activating a partition, or viewing the
properties of a system or partition.

Status Bar
The status bar, at the bottom of the screen, displays HMC status information.
An Overview of Logical Partitioning

The HMC allows you to perform many hardware management tasks for your managed system.
You can choose to operate the managed system as a single server or run multiple partitions.

Partitioning allows you to configure a single computer into several independent systems. Each of
these systems, called partitions, runs its own independent operating system and is capable of
running applications specific to that operating system in its own independent environment. This
independent environment contains its own operating system, its own set of system processors, its
own set of system memory, and its own I/O adapters.

A profile defines a configuration setup for a managed system or partition. The HMC allows you to
create multiple profiles for each managed system or partition. You can then use these profiles to
start a managed system or partition in a particular configuration.

A system profile is a collection of often-used partition profiles. You can use a system profile to
start an ordered list of pre-defined partition profiles on your managed system.

To configure and manage logical partitions on your IBM eServer i5 or eServer p5, you must have
at least one HMC.

Logical Partitioning

Resources and partitioning go hand-in-hand. Resources are a system's processors, memory, and
I/O slots. A logical partition uses software and firmware to logically partition the resources on a
system. I/O slots can be populated by different adapters, such as Ethernet, SCSI, or other device
controllers. A disk (both internal and external) is allocated to a partition by assigning it the I/O slot
that contains the disk's adapter.

Logical partitioning (LPAR) is only limited by the total number of hardware resources in the
system. For example, a partition could have any number of installed processors assigned to it,
limited only by the total number of installed processors. Similarly, a partition could have any
amount of memory, limited only by the total amount of memory installed (minus the memory
required for partition management/overhead). I/O adapters are physically installed in one of many
I/O drawers in the system. However, with logical partitioning, any I/O adapter in any I/O drawer
can be assigned to any partition; however, only one partition with that resource can be active.

Virtual I/O devices provide for sharing of physical resources, such as adapters and devices,
among partitions. Multiple logical partitions can share physical I/O resources of a system, and
each partition can simultaneously use virtual and physical I/O devices. Also, virtual I/O devices
allow partitions to be created without adding physical I/O adapters to the system.

The LPAR Validation Tool (LVT) is available to assist you in the design of LPAR systems. The
LVT emulates an LPAR configuration and validates that planned partitions are valid. In addition,
the LVT allows you to test the placement of AIX®, Linux®, and OS/400 hardware within the
system to ensure that the placement is valid. You can access the LVT at
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm

Dynamic LPAR

Dynamic LPAR (DLPAR) allows the "dynamic" addition, movement (relocation of resources
between LPARs), or removal of resources without having to reactivate the partition or "power
down" the operating system. Consequently, customers utilizing DLPAR do not experience an
interruption in service.
In a static LPAR configuration, individual processors, 256 MB memory blocks, and I/O adapter
slot resources are placed under the exclusive control of a given logical partition. One of the main
advantages of the LPAR implementation is that it gives fine-grained allocation control over these
individual resources, allowing them to be combined in almost any quantity and combination to
create a logical partition.

DLPAR extends these capabilities by allowing this fine-grained resource allocation to occur not
only when activating a logical partition, but also while the partitions are running. Individual
processors, memory blocks, and I/O adapter slots can be released into a "free pool," acquired
from that free pool, or moved directly from one partition to another--again, in almost any quantity
or combination depending on the total hardware resources available.

If you have partitions that need more or can use fewer resources, you can dynamically move the
resources between partitions within the managed system. A "move" operation is simply a
combined operation removing a resource from one LPAR and adding it to another.

Partition Profiles
A partition does not actually own any resources until it is activated; resource specifications are
stored within partition profiles. The same partition can operate using different resources at
different times, depending on the profile you activate.

Each logical partition has at least one partition profile. You can have more than one profile for a
partition. However, you can only activate a partition with one of its profiles at a time. One profile is
designated as the default profile. It is this profile that is activated if another is not specified.
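
For reference, a partition's profiles can also be listed from the CLI covered later in this course; a
minimal sketch, assuming the prof resource type and the lpar_names filter used in the later
lssyscfg examples apply here:

lssyscfg -r prof -m <managed system> --filter "lpar_names=<partition name>"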

The screen at left, for example, shows three running partitions, endsqd01, endsqd02, and
endsqd04. The partition endsqd01 has two partition profiles, endsqd01_aix530_normal, which
boots in normal mode, and endsqd01_aix530_sms, which boots in System Management Services
mode. You are given the option at partition activation to choose the profile with which you would
like to start the partition.

When you activate a partition, you enable the system to create a partition using the set of
resources in a profile created for that partition. The system will attempt to allocate the resources
you assigned to the profile. If you have over-committed resources, the partition profile will not be
activated.

Partition profiles are not affected by changes you make using the DLPAR feature. If you want
permanent changes, you must then reconfigure partition profiles manually. For example, if your
partition profile specifies that you require two processors and you use DLPAR to add a processor
to that partition, you must change the partition profile if you want the additional processor to be
added to the partition the next time you use the profile.
Manufacturing Default Configuration

The initial partition setup of your managed system as received from your service provider displays
one logical partition with one partition profile. The name of this partition is the serial number of the
system. The name of the profile is Default.

A partition profile in the manufacturing default configuration has all of your managed system’s
resources. If you desire, you can install an operating system on this partition and use this partition
as the only partition on the managed system. Because all of the hardware (both required and
desired) is assigned to this partition, no other partitions can be started when this partition profile
is running. Likewise, a partition profile in the manufacturing default configuration cannot be
started while other partitions are running.

System Profiles

Using the HMC, you can create and activate often-used collections of predefined partition
profiles. A collection of predefined partition profiles is called a system profile. The system profile
is an ordered list of partitions and the profile that is to be activated for each partition. The first
profile in the list is activated first, followed by the second profile in the list, followed by the third,
and so on.

Using the same example at left, a system profile could be created to start partitions endsqd01,
endsqd02 and endsqd04 using the desired partition profile.

The system profile helps you change the managed systems from one complete set of partition
configurations to another. For example, a company might want to switch from using 12 partitions
to using only 4 every day. To do this, the system administrator deactivates the 12 partitions and
activates a different system profile, one specifying 4 partitions. The advantage of the System
Profiles is in the initial IPL of the system. By starting up in System Profile mode, all of the
partitions defined there will be started, whereas by starting in LPAR standby mode, the system
administrator would then have to start each LPAR (and select the required profile for the LPAR)
manually.
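
From the CLI covered later in this course, a system profile is activated with the chsysstate
command:

chsysstate -r sysprof -m <managed system> -o on -n <system profile name>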

Installing the HMC

Making the HMC Cable Connections
Installation instructions differ depending upon which model (desktop or rack-mounted) HMC you
are installing. For a rack-mounted HMC, skip this section and continue on to Installing a Rack-
Mounted HMC. For a desktop HMC, read on.

Positioning the HMC - Desktop HMC

Depending upon which POWER5 model you have, the HMC may be installed by you or by your
service representative. Before you begin connecting cables to the HMC, make sure that the HMC
is in a location where all necessary power outlets and network connections can safely be
reached. Also, maintain at least 51 mm (2 inches) of space on the sides of the system unit and
152 mm (6 inches) at the rear of the system unit to allow the system unit to cool properly.

Making the Cable Connections - Desktop HMC

To connect the HMC cabling for a desktop HMC, refer to the illustration at left as you complete
the following steps. Note that the actual HMC rear-panel connector layout may differ somewhat
from the one presented here.

1. Connect the keyboard cable to a USB plug location 5 at the rear of the system unit.
2. Connect the mouse cable to a USB plug location 5 at the rear of the system unit.
3. Connect the display monitor signal cable to plug location 4 at the rear of the system unit.
4. Attach the power cord to the monitor. Do not plug the power cords into the electrical
outlet at this point.
5. Connect the Ethernet cable to plug location 8 at the rear of the system unit.
6. Connect the external modem to the modem connector (serial connection) location 9.
7. Verify that the PC Line Voltage input switch P2 is correctly set to reflect the voltage
present at the installation site.
8. Attach the HMC's power cord to plug location P1 at the rear of the system unit.

Completing the Cable Connections

To complete the cable connections:

1. Connect the unattached end of the Ethernet cable to the Ethernet port HMC1 on your
managed system. Note: Be careful not to connect the managed system to a power
source at this time.
2. For a desktop HMC, plug the monitor's power cord into an electrical outlet.
3. Plug the HMC's power cord into an electrical outlet.
4. For an external modem, connect the telephone set to the modem connector with the
picture of the telephone receiver.
5. Power on the display, then power on the HMC. This gives the HMC time to detect (via
DDC) the display characteristics.

Before Installing the Software

After completing your cable connections, but before you begin the software installation process,
review the checklist in the eServer Information Center topic 'Initial Server Setup'. Once you
have verified that all necessary connections have been made correctly, you will be ready to
proceed.
Flexible Service Processor (FSP) Configuration

Most of the communication from the HMC to an eServer i5 or eServer p5 server will be done
through the Flexible Service Processor (FSP).

The picture at top left is a rear view of an eServer model 520 server. The picture at bottom left is
a rear view of an eServer model 570 server. The HMC is connected to a dedicated port on the
service processor of the eServer i5 and eServer p5 server via an Ethernet connection. You
completed this connection in the previous topic Completing the Hardware Installation when you
connected the Ethernet cable on your HMC to HMC1 on the managed system.

Follow the installation instruction for your eServer i5 or eServer p5 server. Once the HMC has
been cabled and connected to the managed system, follow the instructions in topic 8.0 HMC
Configuration to install and configure the HMC software. Do not yet connect the managed system
to a power source.

After the HMC software has been installed and configured, connect the managed system to a
power source. The HMC will automatically 'discover' the managed system, and it will appear in the
Contents area of the HMC GUI under Server and Partition --> Server Management.

Setting FSP Time of Day through ASM

This feature is available only when the service processor is in standby mode. To set the initial
time of day on your server through ASM:

1. In the Navigation area, click the Service Applications icon.
2. In the Contents area, click the Service Focal Point icon. The Service Focal Point screen
is displayed with tasks listed.
3. From the task list, select Service Utilities. The Service Utilities screen is shown with all
systems recognized by the HMC listed.
4. Highlight the desired system and click Selected and Launch ASM Menu.

Shut Down or Restart a Partition

You can use the HMC to shut down or restart the operating system on your partition. Because
this procedure may corrupt data on the partition you want to reset, perform this procedure only
after you have attempted to restart the operating system manually.

For i5/OS (OS/400) logical partitions, only use this option if you cannot shut down or restart the
i5/OS logical partition from the command line of the operating system. Using this procedure to
shut down an i5/OS logical partition will result in an abnormal IPL.

From the Server Management application, expand the desired managed system to display
partitions on the system.

Highlight the desired partition and right click. Choose Shut Down Partition or Restart Partition
as you wish.
Follow the directions on the screen displayed to shut down or restart the partition in the desired
way.

Choosing the Operating System option to shut down or restart the logical partition normally is the
equivalent of a soft reset. The actions of the operating system after a soft reset are determined by
its policy settings. Depending on how the settings were configured, the operating system may
perform a dump of system information or restart automatically.

Choosing the Immediate option to shut down the logical partition immediately is the equivalent of
a hard reset. A hard reset is equivalent to powering off the system. It forces termination and can
corrupt information. Use this option only when the operating system is disrupted and cannot send
or receive commands.

You may also shut down and re-activate a partition in one action by restarting the partition. You
have the same options for an Operating System or Immediate shutdown, and the same caveat
applies: only use this option on an i5/OS partition if you cannot shut down or restart the partition
from the command line of the operating system.

After choosing your shutdown or restart options, a status panel will show the progress of the
shutdown. The partition is shut down when the status shows 'Finished. Success.', as shown below
left.
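
For reference, the CLI equivalents covered later in this course are the chsysstate partition reset
operations. A soft reset (the Operating System option):

chsysstate -r lpar -m <managed system> -o reset -n <partition name>

and a hard reset (the Immediate option):

chsysstate -r lpar -m <managed system> -o off --id <partition ID>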

Using Capacity on Demand (CoD)

Capacity on Demand (CoD) for the POWER5 Systems

Capacity on Demand (CoD) allows you to dynamically activate one or more resources on your
server as your business peaks dictate. You can activate inactive processors or memory units that
are already installed on your server on a temporary or permanent basis.

Information on preparing for and working with Capacity on Demand may be found in the
Information Center topic Working with Capacity on Demand.

Once you have set up your environment to take advantage of Capacity on Demand, you can use
your HMC to:

• Activate Capacity on Demand resources.
• View how many processors you have, how many are available, and how many may be
activated.
• View how much memory you have, how much is available, and how much may be
activated.

To see if a managed system has been set up for Capacity on Demand:


1. In the Navigation area, click the Server and Partition icon.
2. In the Content area, click the Server Management icon. Highlight the desired server and
click Selected and Properties.
3. Under the General tab, in the Capabilities area you will see CoD Capable and the value
True or False.

Capacity Upgrade on Demand

Capacity Upgrade on Demand (CUoD) offers you the capability to permanently activate one or
more inactive processors or memory units without requiring you to restart your server or interrupt
your business.

When you have purchased one or more activation features, you will receive one or more
activation codes to permanently activate your inactive processors or memory units.
To permanently activate your inactive processors or memory units:

1. In the Navigation area, open the Server and Partition folder. Click the Server
Management icon.
2. In the Content area, right-click the desired server and click Manage On Demand
Activations and Capacity on Demand.
3. Click Enter CoD Code and enter the activation code you received.

For more information on planning for and setting up Capacity Upgrade on Demand on your
eServer i5 or eServer p5 server, go to the eServer Information Center topic Working with Capacity
Upgrade on Demand.

On/Off Capacity on Demand

On/Off Capacity on Demand allows you to temporarily activate and deactivate processors and
memory units to satisfy business peaks. Once you request a number of processors or memory
units to be made temporarily available for a specified number of days, those processors and
memory units are available immediately. You can start and stop requests for On/Off Capacity on
Demand, and you are billed for usage at the end of each quarter.

To manage your On/Off Capacity on Demand resources:

1. In the Navigation area, open the Server and Partition folder. Click the Server
Management icon.
2. In the Content area, right-click the desired server and click Manage On Demand
Activations and Capacity on Demand.
3. Click Processor and Manage On/Off CoD.

The Manage On/Off CoD Processors screen is displayed. From this screen you may determine
and adjust:

• How many On/Off CoD processors are activated.
• How many inactive processors are available.
• How many days and hours are left in the current On/Off CoD request.
• How many processor days are available for new requests.

For more information on planning for and setting up On/Off Capacity on Demand, see the
eServer Information Center topic Working with Capacity on Demand.
Reserve Capacity on Demand

Reserve Capacity on Demand allows you to purchase a reserve capacity prepaid feature that
represents a number of processor days. You can then activate the inactive processors using
Reserve Capacity on Demand as your business requires.

To manage your Reserve Capacity on Demand resources:

1. In the Navigation area, open the Server and Partition folder. Click the Server
Management icon.
2. In the Content area, right-click the desired server and click Manage On Demand
Activations and Capacity on Demand.
3. Click Processor and Manage Reserve CoD.

For more information on planning for and setting up Reserve Capacity on Demand on your eServer
i5 or eServer p5 server, go to the eServer Information Center topic Working with Reserve Capacity
on Demand.

Applying Software Upgrade and Fixes to the HMC

Upgrading the HMC Software


Preparing for the Software Upgrade

Verifying the Current HMC Software Level

To determine your current HMC software version:

1. Log in to the HMC as hscroot or as a user with the System Administrator role.
2. Click Licensed Internal Code Maintenance --> HMC Code Update. Under STATUS,
the current Version, Release, and Build levels are listed.
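
The HMC version can also be checked from the command line with the lshmc command listed in
the CLI reference later in this course; the -V flag shown here is an assumption based on common
HMC usage:

lshmc -V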

Backing Up Critical Console Information

Critical console information should be backed up before installing a new version of HMC software
so that previous levels may be restored in the event of a problem in upgrading. See Backup
Critical Console Data.

Recording Current HMC Configuration Information

Before you upgrade to the new version you should record HMC configuration information to
enable you to restore the current configuration to the newly upgraded system.

To record HMC configuration information:

1. In the Navigation area, click the HMC Management folder.


2. In the Navigation area, click HMC Configuration.
3. In the tasks list, click Schedule Operations. The Scheduled Operations panel displays
with a list of all managed systems.
4. Highlight the HMC in the list and click OK.
5. All scheduled operations for the HMC are displayed. Select Sort --> By Object

6. Select each object. Record the following details:


o Object Name
o Schedule Date
o Operation Time (displayed in 24-hour format)
o Repetitive. If repetitive is YES, do the following:
1. Select View --> Schedule Details.
2. Record the interval information.
3. Close the Scheduled Operations window
7. Repeat the previous step for each scheduled operation.
8. Close the Customize Scheduled Operations panel.
9. In the Navigation area, click the Server and Partition folder.
10. In the Contents area, double-click Server Management.
11. In the tasks list, right-click the managed system and select Profile Data --> Backup.
12. Type a backup file name and record this information.
13. Click OK.
14. Repeat steps 11 through 13 for each managed system.
15. In the Navigation area, click the HMC Management icon.
16. In the Navigation area, click the HMC Configuration icon.
17. In the tasks list, click Enable/Disable Remote Command Execution.
18. Record the settings of the "Enable remote command execution using the ssh facility"
option.

Upgrading the HMC Software by CD


To upgrade the HMC software:

1. Log in to your HMC as hscroot.


2. In the Navigation area, double-click the Licensed Internal Code Maintenance folder.
3. In the Contents area, click HMC Code Update.
4. In the Contents area, click Save Upgrade Data.
5. Click Hard Drive.
6. Click Continue.
7. Click Continue again to start the task. Wait for the task to complete. If the Save Upgrade
Data task fails, contact software support before proceeding. Do not continue the upgrade
process if the Save Upgrade Data task fails.
8. Click OK.
9. Insert the HMC Product Installation CD into the DVD-RAM drive.
10. Select the Console menu option, then select Exit.
11. Click Exit now. The Exit Hardware Management Console window opens.
12. Click Reboot Console.

13. During system boot, select the Upgrade option by pressing F1.
14. Press F1 again to confirm.
15. When the base HMC installation has completed, the base HMC CD ejects from the drive.
Follow the prompt to insert CD 2.
16. Press Enter to reboot the HMC. If there is a modem installed, ensure that it is powered
on.
17. When the HMC boots, installation continues from CD 2. When that installation is
complete, the system will reboot.
18. Auto-detection of peripheral devices/adapters (e.g., the Ethernet adapter) should occur at
this time.
19. There is a timer on the keyboard mapping selection screen. Select an applicable
keyboard option for your locale.
20. Your saved upgrade data will be automatically restored at this time.

Installing Software without the Guided Setup


Wizard

To install the HMC software for the first time without using the Guided Setup wizard:
1. Obtain the Product CDs.
2. Reboot your HMC with the new CD inserted in the DVD-RAM drive.
3. Monitor the reboot process carefully! If the Kudzu hardware discovery utility starts during
the reboot, you MUST select all defaults; otherwise, you will not be able to use your
keyboard.
4. When asked to perform an Install/Recovery or Upgrade, select Install/Recovery (F8).

5. Select F1 on the next screen to confirm.


6. Remove the CD from the DVD-RAM drive and press Enter when the installation has completed.
7. If you see a screen asking you for Keyboard configuration, select option 2.
8. You will see a panel prompting you for a locale change. Choose the second option, and click
OK.
9. Now you will see a graphical login prompt.
10. Log in as user hscroot, password abc123. This is the user ID/password combination that
the HMC is shipped with.
11. Change the predefined user ID and password immediately.

The HMC Guided Setup Wizard

The Guided Setup Wizard can be used for the initial installation of the HMC software on a new
system. It cannot be used for system upgrades of any kind, and is intended only for setting up a
new HMC.

The Guided Setup Wizard asks for system configuration information in the following sequence:

1. Date and time changes.


2. New password for the hscroot user ID.
3. New password for the root user ID.
4. Optionally, the user ID, password, and permissions for any additional users.

5. Network settings:
o Host name, domain name, computer description, whether or not to use DNS
(and, if so, the server search order and domain suffix search order), and routing
information.
o For each ethernet network interface, whether it is for the private or public
network, the media speed, whether or not to enable the DHCP Server (which is
only applicable if the interface is for the private network), whether the IP address
is static or dynamic and if static, the TCP/IP address and network mask.
o Firewall information, which includes what ports will allow incoming connections
from non-local hosts and, optionally, what specific hosts are allowed to make
incoming connections on these ports.
6. Specifying customer contact information for service-related activities.
7. Specifying connectivity information for service-related activities.
8. Authorizing users to use Service Agent and configuring notification of problem events.
9. Configuring Service Focal Point.

Guided Setup can be initiated in two ways: from (a) a splash panel and (b) the WebSM user
interface.
Launching the Setup Wizard from a Splash Panel

The Guided Setup Wizard splash panel, shown at left, is displayed at initial HMC logon. It is also
available anytime that user hscroot is logged on.

Launching the Setup Wizard from WebSM

The Guided Setup Wizard can be launched from the Information and Setup option on the main
HMC GUI screen. Click on Information Center and Setup Wizard to display the Information and
setup window, then click on Launch the Guided Setup Wizard.
Customizing System Users, Tasks, and Roles
Configuring System Users

The Guided Setup Wizard is typically used to configure HMC system users. However, user
customization and modification can also be done from the HMC Users panel.

To display the HMC Users panel:

1. In the Navigation area of the main HMC screen, click on HMC Management.
2. Select HMC Users. The HMC Users panel is displayed. To configure or modify an HMC
user, click the Manage HMC Users and Access task.

The User Profiles Screen is displayed with a list of existing HMC users. Highlight the user you
wish to modify and click User. You may:

o Add a user
o Copy user information
o Remove a user
o Modify user information

For details on the different user roles and each HMC User option, see the eServer Information
Center topic 'Overview of HMC Tasks, Roles and Commands'.

Managing User Access Task Roles

When you create an HMC user, you must assign that user a task role. Each task role allows the
user varying levels of access to tasks available on the HMC. There are five pre-defined roles:

• super administrator
• service representative
• operator
• product engineer
• viewer

From the HMC Users panel, to modify or add to the existing pre-defined user access task roles:

1. Click the Manage Access Task Roles and Managed Resource Roles task. The Customize
User Controls panel is displayed.
2. To modify access task roles, check Task Roles. A list of currently defined roles is
displayed.
3. Highlight the targeted role and click Edit to modify access task roles.

For details on the different user roles and each HMC User option, see the Information Center
topic 'Overview of HMC Tasks, Roles and Commands'.
Managing Managed Resource Roles

You can assign managed systems and partitions to individual HMC users. This allows you to
create a user that has access to managed system A but not managed system B. Each grouping
of managed resource access is called a Managed Resource Role.

To change or configure managed resource roles:

1. Check Managed Resource Roles. A list of currently defined managed resource roles is
displayed.
2. Highlight the targeted role and click Edit. You may add, copy, remove, or modify the
selected role.

For details on the different user roles and each HMC User option, see the Information Center
topic 'Overview of HMC Tasks, Roles and Commands'.
DLPAR Memory Operations

DLPAR Memory Operations

With Dynamic Logical Partitioning (DLPAR), you can add memory to, or remove memory from, a
partition without rebooting the partition’s operating system. You can also move memory from one
partition to another without rebooting either partition.

Adding Memory Dynamically

To add available memory resources without rebooting the partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Memory
Resources --> Add. The Add Memory Resources panel is displayed.
6. The Available memory, Maximum Memory, and Current Memory fields are pre-loaded
with Gigabyte (GB) and Megabyte (MB) values.
7. Enter the amount of memory to add in 1 Gigabyte (GB) increments and 16 Megabyte
(MB) increments.
8. Click OK to add memory dynamically.
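
The CLI equivalent, covered later in this course under Performing DLPAR Operations, is:

chhwres -r mem -m <managed system> -o a -p <partition name> -q <quantity>

where the quantity is specified in megabytes.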

Removing Memory Dynamically

To remove available memory resources without rebooting the partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Memory
Resources --> Remove. The Remove Memory Resources panel is displayed.
6. The Available memory, Minimum Memory, and Current Memory fields are pre-loaded
with Gigabyte (GB) and Megabyte (MB) values.
7. Enter the amount of memory to remove in 1 Gigabyte (GB) increments and 16 Megabyte
(MB) increments.
8. Click OK to remove memory dynamically.
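
The CLI equivalent, covered later in this course, is:

chhwres -r mem -m <managed system> -o r -p <partition name> -q <quantity>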

Moving Memory Dynamically

To move available memory resources from one partition to another without rebooting either
partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Memory
Resources --> Move. The Move Memory Resources panel is displayed.
6. The Minimum Memory and Current Memory fields of the source partition are pre-loaded
with memory values.
7. Select the partition to which you want to move the memory. After the selection, the
Minimum Memory and Current Memory values of the target partition are displayed.
8. Enter the amount of memory to move in 1 Gigabyte (GB) increments and 16 Megabyte
(MB) increments.
9. Click OK to move memory dynamically.

Adding a Dedicated Processor Dynamically to an AIX Partition

To add a dedicated processor without rebooting the AIX partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Processor
Resources --> Add. The Add Processor Resources panel is displayed.
6. The Available processing units, Maximum Processors and Current Processors fields are
listed.
7. Enter the number of processors to add in the Processing Units to Add field.
8. Click OK to add dedicated processors dynamically.

Adding a Shared Processor Dynamically to an AIX Partition

To add a shared processor without rebooting the AIX partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Processor
Resources --> Add. The Add Processor Resources panel is displayed.
6. The Available processing units, Maximum Processing Units, Current Processing Units,
Maximum Virtual Processors, Current Virtual Processors, and Uncapped Weight fields
are pre-loaded with values.
7. Enter the amount of processing units to add in the Amount to Add Processing Units field
and the number of virtual processors to set in the After Add Virtual Processors field.
8. Click OK to add shared processors dynamically.
Removing a Dedicated Processor Dynamically from an AIX Partition

To remove a dedicated processor without rebooting the AIX partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Processor
Resources --> Remove. The Remove Processor Resources panel is displayed.
6. The Available processing units, Minimum Processors, and Current Processors fields are
pre-loaded with values.
7. Enter the number of processors to remove in the Amount to Remove Processors field.
8. Click OK to remove dedicated processors dynamically.

Removing a Shared Processor Dynamically from an AIX Partition

To remove a shared processor without rebooting the AIX partition:

1. In the Navigation area, click the Server and Partition icon.


2. In the Contents area, double-click the Server Management icon.
3. In the Contents area, select the desired managed system.
4. In the Contents area, select the desired partition.
5. From the Selected menu bar item, select Dynamic Logical Partitioning --> Processor
Resources --> Remove. The Remove Processor Resources panel is displayed.
6. The Available processing units, Minimum Processing Units, Current Processing Units,
Minimum Virtual Processors, Current Virtual Processors, and Uncapped Weight fields are
pre-loaded with values.
7. Enter the amount of processing units to remove in the Amount to Remove Processing
Units field and the number of virtual processors to set in the After Remove Virtual
Processors field.
8. Click OK to remove shared processors dynamically.
Command Line Interface (CLI): Command List
Activate partition - chsysstate

Activate system profile - chsysstate

Add memory to a partition - chhwres

Add processors to a partition - chhwres

Create LPAR - mksyscfg

Create LPAR profile - mksyscfg

Create system profile - mksyscfg

Delete LPAR - rmsyscfg

Delete LPAR profile - rmsyscfg

Delete system profile - rmsyscfg

Fast power off the managed system - chsysstate

Get LPAR state - lssyscfg

Hard partition reset - chsysstate

List all partitions in a managed system - lssyscfg

List all systems managed by the HMC - lssyscfg


List CoD capacity information - lscod

List CoD code generation information - lscod

List CoD history log - lscod

List HMC remote access settings - lshmc

List HMC network settings - lshmc

List HMC VPD information - lshmc

List HMC version - lshmc

List I/O resources for a managed system - lshwres

List Licensed Internal Code levels - lslic

List LPAR profile properties - lssyscfg

List LPAR properties - lssyscfg

List managed system properties - lssyscfg

List memory resources - lshwres

List On/Off CoD billing information - lscod

List processor resources - lshwres

List reference code entries - lsrefcode

List system profile properties - lssyscfg

List virtual I/O resources for a managed system - lshwres

Modify LPAR profile properties - chsyscfg

Modify LPAR properties - chsyscfg

Modify managed system properties - chsyscfg

Modify system profile properties - chsyscfg

Move a physical I/O slot from one partition to another - chhwres

Move memory from one partition to another - chhwres

Move processors from one partition to another - chhwres


Power off the managed system - chsysstate

Power on the managed system - chsysstate

Re-IPL the managed system - chsysstate

Remove a physical I/O slot from a partition - chhwres

Remove memory from a partition - chhwres

Remove processors from a partition - chhwres

Soft partition reset - chsysstate

Update Licensed Internal Code - updlic

Validate a system profile - chsysstate

Commands by Name:

chhwres - change system memory and processor resources


add memory to a partition
add processors to a partition
move memory from one partition to another
move processors from one partition to another
remove memory from a partition
remove processors from a partition
chsyscfg - change system configuration
modify LPAR properties
modify LPAR profile properties
modify managed system properties
modify system profile properties

chsysstate - change system state


activate partition
activate system profile
fast power off the managed system
hard partition reset
power off the managed system
power on the managed system
re-IPL the managed system
soft partition reset
lscod - list Capacity on Demand resources for a managed system
list CoD capacity information
list CoD code generation information
list CoD history log
list On/Off CoD billing information
lshmc - List HMC Configuration Information
list HMC remote access settings
list HMC network settings
list HMC VPD information
list HMC version
lshwres - list the hardware resources of a managed system
determine DRC indexes for physical I/O slots
determine memory region size
list I/O resources for a managed system
list memory resources
list processor resources
list virtual I/O resources for a managed system

lslic - list Licensed Internal Code (LIC) levels


list LIC levels active on a managed system
list LIC levels available in a repository

lsrefcode - list reference code entries for partitions or managed systems


list reference code entries for all partitions
list reference code entries for a managed system

lssyscfg - list system configuration information

get LPAR state


list all partitions in a managed system
list all systems managed by the HMC
list LPAR profile properties
list LPAR properties
list managed system properties
list system profile properties

mksyscfg - create system configuration


create LPAR profile
create system profile

rmsyscfg - remove system configuration


delete LPAR
delete LPAR profile
delete system profile

updlic - update Licensed Internal Code (LIC)


retrieve, install, activate updates
retrieve and install updates
remove the last update
change LIC update control to HMC
change LIC update control to operating system

CLI: Working with LPARs


Modifying LPAR Properties

Use the chsyscfg command to modify the properties of a partition. The following example shows
how to change a partition's cluster ID:

chsyscfg -r lpar -m <managed system> -i "lpar_id=1,cluster_id=3"

Valid attributes, specified with the -i flag, are:

new_name
name | lpar_id
default_profile
cluster_id

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.
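
For example, the same change could be driven from a file; the file name used here is hypothetical:

chsyscfg -r lpar -m <managed system> -f /tmp/lparchange.txt

where /tmp/lparchange.txt contains the single line lpar_id=1,cluster_id=3.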

For more information about the valid attributes listed in this command example, refer to the
Command Attributes table.

Activating a Partition

Use the chsysstate command to activate a partition. Type the following:

chsysstate -r lpar -m <managed system> -o on -n <partition name> -f <partition profile name>

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

Using a Soft Partition Reset

Use the chsysstate command to perform a soft reset of a partition. Type the following:

chsysstate -r lpar -m <managed system> -o reset -n <partition name>

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

Using a Hard Partition Reset

Use the chsysstate command to perform a hard reset of a partition. Type the following:

chsysstate -r lpar -m <managed system> -o off --id <partition ID>

The partition name can be specified instead of the partition ID by using the -n parameter instead
of the --id parameter.

Deleting an LPAR

Use the rmsyscfg command to remove a partition. Type the following:

rmsyscfg -r lpar -m <managed system> -n <partition name>


This command removes the specified partition and all of its associated partition profiles from the
specified managed system. The partition’s profiles are also removed from any system profiles
that contain them.

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

CLI: Working with System Profiles


Creating a System Profile

Use the mksyscfg command to create a system profile. In the following example, the user is
making a system profile named sysprof1, with partition profile prof1 for partition lpar1 and
partition profile prof1 for partition lpar2.

mksyscfg -r sysprof -m <managed system> -i "name=sysprof1,\"lpar_names=lpar1,lpar2\",\"profile_names=prof1,prof1\""

Partition IDs can be specified instead of partition names when creating a system profile. This is
done by using the lpar_ids attribute instead of the lpar_names attribute.
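
For example, a sketch of the same system profile created with partition IDs (the IDs shown are
illustrative):

mksyscfg -r sysprof -m <managed system> -i "name=sysprof1,\"lpar_ids=1,2\",\"profile_names=prof1,prof1\""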

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.

Activating a System Profile

Use the chsysstate command to activate a system profile. Type the following:

chsysstate -r sysprof -m <managed system> -o on -n <system profile name>

Validating a System Profile

Use the chsysstate command to validate a system profile. Type the following:

chsysstate -r sysprof -m <managed system> -n <system profile name> --test

To validate a system profile, then activate that system profile if the validation is successful, type
the following:

chsysstate -r sysprof -m <managed system> -o on -n <system profile name> --test

Deleting a System Profile

Use the rmsyscfg command to remove a system profile. Type the following:

rmsyscfg -r sysprof -m <managed system> -n <system profile name>


Listing System Profile Properties

Use the lssyscfg command to list a system profile’s properties. Type the following:

lssyscfg -r sysprof -m <managed system> --filter "profile_names=<system profile name>"

To list all system profiles for the managed system, type the following:

lssyscfg -r sysprof -m <managed system>
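
To display only selected attributes, the -F flag used in the partition-listing examples later in this
course presumably applies here as well; a sketch using attribute names from this section:

lssyscfg -r sysprof -m <managed system> -F name,lpar_names,profile_names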

Modifying System Profile Properties

Use the chsyscfg command to modify system profile properties. In the following example, the
user is adding profiles prof1 for partition lpar3 and prof2 for partition lpar4 to system profile
sysprof1:

chsyscfg -r sysprof -m <managed system> -i "name=sysprof1,\"lpar_names+=lpar3,lpar4\",\"profile_names+=prof1,prof2\""

Valid attributes, specified with the -i flag, include:

name
new_name
lpar_names | lpar_ids
profile_names

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.

CLI: Listing Hardware Resources



The lshwres command, which lists the hardware resources of a managed system, can be used
to display I/O, virtual I/O, processor, and memory resources.

Listing I/O Resources for a Managed System

Use the following commands to list:

• I/O units on the managed system


lshwres -m <managed system> -r io --rsubtype unit

• I/O buses on the managed system


lshwres -m <managed system> -r io --rsubtype bus

• I/O slots on the managed system


lshwres -m <managed system> -r io --rsubtype slot
• All partitions participating in an I/O pool and all slots assigned to an I/O pool
lshwres -m <managed system> -r io --rsubtype iopool

• Tagged I/O for i5/OS (OS/400) partitions


lshwres -m <managed system> -r io --rsubtype taggedio

Listing Processor Resources

Use the following commands to list processor information for:

• The managed system


lshwres -m <managed system> -r proc --level sys

• Partitions
lshwres -m <managed system> -r proc --level lpar

• The shared pool


lshwres -m <managed system> -r proc --level pool

Listing Virtual I/O Resources for a Managed System

Use the following commands to list:

• Virtual Ethernet adapters

lshwres -m <managed system> -r virtualio --rsubtype eth --level lpar

• System-level virtual Ethernet information

lshwres -m <managed system> -r virtualio --rsubtype eth --level sys

• Virtual OptiConnect pool information

lshwres -m <managed system> -r virtualio --rsubtype virtualopti --level lpar

• HSL OptiConnect pool information

lshwres -m <managed system> -r virtualio --rsubtype hslopti --level lpar

• Virtual serial adapters

lshwres -m <managed system> -r virtualio --rsubtype serial --level lpar

• Virtual serial servers with open connections

lshwres -m <managed system> -r virtualio --rsubtype serial --level openserial

• Virtual SCSI adapters

lshwres -m <managed system> -r virtualio --rsubtype scsi --level lpar

• Partition-level virtual slot information

lshwres -m <managed system> -r virtualio --rsubtype slot --level lpar

• Virtual slot information

lshwres -m <managed system> -r virtualio --rsubtype slot --level slot

Listing Memory Resources

Use the following commands to list:

• Memory information for a managed system


lshwres -m <managed system> -r mem --level sys

• Memory information for partitions


lshwres -m <managed system> -r mem --level lpar

Working with LPARs

Creating LPARs

Use the mksyscfg command to create a partition.

The following is an example of how to create an AIX/Linux partition:

mksyscfg -r lpar -m <managed system> -i
"lpar_id=2,name=aixlinux_lpar2,profile_name=prof1,lpar_type=aixlinux,boot_mode=norm,
desired_procs=1,min_procs=1,max_procs=1,min_proc_units=0.1,desired_proc_units=0.5,
max_proc_units=0.5,proc_type=shared,sharing_mode=cap,desired_mem=400,min_mem=400,
max_mem=400,auto_start=1,power_ctrl_lpar_ids=0,io_slots=553713666/65535/1"

The following is an example of how to create an i5/OS (OS/400) partition:

mksyscfg -r lpar -m <managed system> -i
"lpar_id=3,name=os400_lpar3,profile_name=prof1,lpar_type=os400,
desired_procs=1,min_procs=1,max_procs=1,min_proc_units=0.1,desired_proc_units=0.5,
max_proc_units=0.5,proc_type=shared,sharing_mode=cap,desired_mem=400,
min_mem=400,max_mem=400,auto_start=1,power_ctrl_lpar_ids=0,
io_slots=553713699/65535/1,load_source_slot=553713699,
console_slot=553713699,min_interactive=0,desired_interactive=0,
max_interactive=0"

Valid attributes, specified with the -i flag, include:

name, lpar_id, profile_name, lpar_type, cluster_id, sharing_mode, desired_procs, min_procs,
max_procs, desired_mem, min_mem, max_mem, proc_type, desired_proc_units, min_proc_units,
max_proc_units, lpar_io_pool_ids, io_slots, boot_mode, max_virtual_slots, auto_start,
power_ctrl_lpar_ids, virtual_opti_pool_id, hsl_opti_pool_id, min_interactive, desired_interactive,
max_interactive, ecs_slot, sni_windows, alt_console_slot, sni_device_ids, console_slot,
sni_config_mode, alt_load_source_slot, virtual_serial_adapters, load_source_slot,
virtual_scsi_adapters, uncap_weight, virtual_eth_adapters

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.

Listing All Partitions in a Managed System

Use the lssyscfg command to list all partitions in a managed system. To do this, enter:

lssyscfg -r lpar -m <managed system>

To list only the names, IDs, and states of all partitions in a managed system, enter:

lssyscfg -r lpar -m <managed system> -F name,lpar_id,state --header

Listing LPAR Properties

Use the lssyscfg command to list the properties of a specific partition. Type the following:

lssyscfg -r lpar -m <managed system> --filter "lpar_ids=<partition ID>"

Note that the partition name can be specified instead of the partition ID by using the lpar_names
filter in place of the lpar_ids filter. Also, more than one partition may be specified in the filter list.
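
For example, a sketch of listing two partitions at once; the escaped-quote style used for lists in the
-i examples elsewhere in this course is assumed to apply to --filter values as well:

lssyscfg -r lpar -m <managed system> --filter "\"lpar_ids=1,2\""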

For information on using the lshwres command to list a partition's I/O, virtual I/O, processor, and
memory resources, see Listing Hardware Resources.

Getting the LPAR State

Use the lssyscfg command to display the state of a partition. Type the following:

lssyscfg -r lpar -m <managed system> --filter "lpar_names=<partition name>" -F state

Note that the partition ID can be specified instead of the partition name by using the lpar_ids filter
in place of the lpar_names filter. Also, more than one partition may be specified in the filter list.

Modifying LPAR Properties

Use the chsyscfg command to modify the properties of a partition. The following example shows
how to change a partition’s cluster ID:

chsyscfg -r lpar -m <managed system> -i "lpar_id=1,cluster_id=3"

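As a further illustration (the partition ID and new name are hypothetical), the new_name attribute
listed below can be used to rename a partition:

chsyscfg -r lpar -m <managed system> -i "lpar_id=2,new_name=aixlinux_prod"
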
Valid attributes, specified with the -i flag, are:

new_name             name | lpar_id
default_profile      cluster_id

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.

For more information about the valid attributes listed in this command example, refer to the
Command Attributes table.

Activating a Partition

Use the chsysstate command to activate a partition. Type the following:

chsysstate -r lpar -m <managed system> -o on -n <partition name> -f <partition profile name>

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

Using a Soft Partition Reset

Use the chsysstate command to perform a soft reset of a partition. Type the following:

chsysstate -r lpar -m <managed system> -o reset -n <partition name>

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

Using a Hard Partition Reset

Use the chsysstate command to perform a hard reset of a partition. Type the following:

chsysstate -r lpar -m <managed system> -o off --id <partition ID>

The partition name can be specified instead of the partition ID by using the -n parameter instead
of the --id parameter.

Deleting an LPAR

Use the rmsyscfg command to remove a partition. Type the following:

rmsyscfg -r lpar -m <managed system> -n <partition name>

This command removes the specified partition and all of its associated partition profiles from the
specified managed system. The partition’s profiles are also removed from any system profiles
that contain them.

The partition ID can be specified instead of the partition name by using the --id parameter instead
of the -n parameter.

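For example, using the hypothetical partition ID 3 from the earlier i5/OS example:

rmsyscfg -r lpar -m <managed system> --id 3
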
Performing DLPAR Operations


Use the chhwres command to perform dynamic logical partitioning (DLPAR) operations on
running partitions. DLPAR operations can be performed for memory, physical I/O slots, and
processor resources.

Memory

Memory can be dynamically added to a partition, removed from a partition, or moved from one
partition to another. In the following commands, the quantity of memory to be added, removed, or
moved must be specified with the -q flag. This quantity is in megabytes, and must be a multiple of
the memory region size for the managed system.

Determining Memory Region Size

To see what the memory region size is for the managed system, enter this command:

lshwres -r mem -m <managed system> --level sys -F mem_region_size

The value returned is the memory region size in megabytes.

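For example (the value here is hypothetical), if the command returns 256, the memory region size
is 256 MB, so valid -q values in the commands below are 256, 512, 768, and so on; a request such
as -q 400 would not be a valid quantity.
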
Adding Memory to a Partition

To add memory to a partition, enter this command:

chhwres -r mem -m <managed system> -o a -p <partition name> -q <quantity>

Removing Memory from a Partition

To remove memory from a partition, enter this command:

chhwres -r mem -m <managed system> -o r -p <partition name> -q <quantity>

Moving Memory from One Partition to Another


To move memory from one partition to another partition, enter this command:

chhwres -r mem -m <managed system> -o m -p <source partition name> -t <target partition name> -q <quantity>

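As a concrete sketch (the partition names and the 512 MB quantity are hypothetical, with 512
assumed to be a multiple of the managed system's memory region size):

chhwres -r mem -m <managed system> -o m -p aixlinux_lpar2 -t os400_lpar3 -q 512
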
Physical I/O Slots

A physical I/O slot can be dynamically added to a partition, removed from a partition, or moved
from one partition to another. In the following commands, the DRC index of the I/O slot to be
added, removed, or moved must be specified with the -s flag.

Note that only one physical I/O slot can be added, removed, or moved at a time.

Determining DRC Indexes for Physical I/O Slots

To see the DRC indexes for all of the physical I/O slots that are on the managed system, enter
this command:

lshwres -r io --rsubtype slot -m <managed system>

The DRC index for each slot is returned via the drc_index attribute.

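To narrow the listing, the -F flag shown earlier can restrict the output to selected fields; drc_name
is assumed here to be among the reported slot attributes, as drc_index is:

lshwres -r io --rsubtype slot -m <managed system> -F drc_index,drc_name
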
Adding a Physical I/O Slot to a Partition

To add a physical I/O slot to a partition, enter this command:

chhwres -r io -m <managed system> -o a -p <partition name> -s <DRC index>

Removing a Physical I/O Slot from a Partition

To remove a physical I/O slot from a partition, enter this command:

chhwres -r io -m <managed system> -o r -p <partition name> -s <DRC index>

Moving a Physical I/O Slot from One Partition to Another

To move a physical I/O slot from one partition to another partition, enter this command:

chhwres -r io -m <managed system> -o m -p <source partition name> -t <target partition name> -s <DRC index>

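As a concrete sketch (the partition names are hypothetical, and the DRC index is simply the one
from the earlier AIX/Linux mksyscfg example):

chhwres -r io -m <managed system> -o m -p aixlinux_lpar2 -t os400_lpar3 -s 553713666
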
Processors

Processing resources can be dynamically added to a partition, removed from a partition, or moved
from one partition to another. These processing resources depend on the type of processors used
by the partitions:
• For partitions using dedicated processors, processing resources are dedicated
processors.

• For partitions using shared processors, processing resources include virtual processors
and processing units.

Note: Currently, AIX/Linux partitions using shared processors do not support processor DLPAR
operations.

In the following commands, for partitions using dedicated processors, the quantity of processors
to be added, removed, or moved is specified with the --procs flag.

For partitions using shared processors, the quantity of virtual processors to be added, removed,
or moved is also specified with the --procs flag. The quantity of processing units to be added,
removed, or moved is specified with the --procunits flag. Both of these flags can be specified,
but only one is required.

Note that the quantity of processing units must be multiplied by 100 for the command. For
example, to add, remove, or move 0.5 processing units, specify a quantity of 50.

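For instance (the partition name and quantities are hypothetical; os400_lpar3 is the shared-processor
partition from the earlier example), adding one virtual processor and 0.75 processing units would be
entered as:

chhwres -r proc -m <managed system> -o a -p os400_lpar3 --procs 1 --procunits 75
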
Adding Processors to a Partition

To add processors to a partition using dedicated processors, enter this command:

chhwres -r proc -m <managed system> -o a -p <partition name> --procs <quantity>

To add processors to a partition using shared processors, enter this command:

chhwres -r proc -m <managed system> -o a -p <partition name> --procs <quantity> --procunits <quantity>

Removing Processors from a Partition

To remove processors from a partition using dedicated processors, enter this command:

chhwres -r proc -m <managed system> -o r -p <partition name> --procs <quantity>

To remove processors from a partition using shared processors, enter this command:

chhwres -r proc -m <managed system> -o r -p <partition name> --procs <quantity> --procunits <quantity>

Moving Processors from One Partition to Another

To move processors from a partition using dedicated processors to another, enter this command:

chhwres -r proc -m <managed system> -o m -p <source partition name> -t <target partition name> --procs <quantity>

To move processors from a partition using shared processors to another, enter this command:

chhwres -r proc -m <managed system> -o m -p <source partition name> -t <target partition name> --procs <quantity> --procunits <quantity>

Processing resources can also be moved between partitions using dedicated processors and
partitions using shared processors. To move processing resources from a partition using
dedicated processors to a partition using shared processors, specify the quantity of processors
using the --procs flag. This quantity is converted to processing units (by multiplying the quantity
by 100) by the HMC for the target partition.

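A sketch of this direction (the partition names are hypothetical): moving one dedicated processor,
which the HMC credits to the shared-processor target as 100 processing units (1.00 processors):

chhwres -r proc -m <managed system> -o m -p ded_lpar1 -t shared_lpar2 --procs 1
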
To move processing resources from a partition using shared processors to a partition using
dedicated processors, specify the quantity of processing units (which must be a multiple of 100)
using the --procunits flag. This quantity is converted to processors (by dividing the quantity by
100) by the HMC for the target partition. The --procs flag cannot be specified in this case.

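And a sketch of the reverse direction (again with hypothetical names): moving 2.00 processing
units, specified as 200, so the dedicated-processor target receives two processors:

chhwres -r proc -m <managed system> -o m -p shared_lpar2 -t ded_lpar1 --procunits 200
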
Working with LPAR Profiles


Creating an LPAR Profile

Use the mksyscfg command to create a partition profile. The following is an example of how to
create a partition profile:

mksyscfg -r prof -m <managed system> -i
"name=prof3,lpar_id=2,boot_mode=norm,
sfp_surveillance=1,desired_procs=2,
min_procs=1,max_procs=2,min_proc_units=0.1,
desired_proc_units=0.5,max_proc_units=0.5,
proc_type=shared,sharing_mode=cap,
desired_mem=400,min_mem=400,max_mem=400,
auto_ipl=1,power_ctrl_lpar_ids=0,io_slots=553713666/65535/1"

Valid attributes, specified with the -i flag, include:

name                     lpar_id | lpar_name
power_ctrl_lpar_ids      desired_procs
min_procs max_procs
desired_mem min_mem
max_mem proc_type
uncap_weight sharing_mode
load_source_slot alt_load_source_slot
console_slot alt_console_slot
ecs_slot min_proc_units
desired_proc_units max_proc_units
lpar_io_pool_ids io_slots
boot_mode sfp_surveillance
sni_windows virtual_opti_pool_id
hsl_opti_pool_id min_interactive
desired_interactive max_interactive
max_virtual_slots virtual_eth_adapters
virtual_scsi_adapters virtual_serial_adapters
sni_config_mode sni_device_ids
auto_ipl

The profile name (name) and the partition (lpar_id or lpar_name) must be specified. Instead of
entering configuration information on the command line with the -i flag, the information can
instead be placed in a file, and the filename specified with the -f flag.

For more information about the valid attributes listed in this command example, refer to the
Command Attributes table.

Listing LPAR Profile Properties

Use the lssyscfg command to list a partition profile. Type the following:

lssyscfg -r prof -m <managed system> --filter "lpar_names=<partition name>,profile_names=<profile name>"

Use the --filter parameter to specify the partition for which partition profiles are to be listed, and to
specify which profile names to list. While the filter can only specify a single partition, it can specify
multiple profile names for that partition.

Note that the partition ID can be specified instead of the partition name by using the lpar_ids filter
in place of the lpar_names filter.

Modifying LPAR Profile Properties

Use the chsyscfg command to modify a partition profile’s properties. The following example
shows how to change prof1's memory amounts:

chsyscfg -r prof -m <managed system> -i
"name=prof1,lpar_name=lpar3,min_mem=256,max_mem=512,desired_mem=512"

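Similarly (the profile, partition, and values here are hypothetical), processor minimums and
maximums in a profile can be adjusted with the same command, using attributes from the list below:

chsyscfg -r prof -m <managed system> -i "name=prof1,lpar_name=lpar3,min_procs=1,desired_procs=2,max_procs=4"
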
Valid attributes, specified with the -i flag, include:

name                     lpar_name | lpar_id
new_name                 desired_procs
min_procs max_procs
desired_mem min_mem
max_mem proc_type
uncap_weight sharing_mode
load_source_slot alt_load_source_slot
console_slot alt_console_slot
ecs_slot min_proc_units
desired_proc_units max_proc_units
lpar_io_pool_ids io_slots
boot_mode sfp_surveillance
sni_windows virtual_opti_pool_id
hsl_opti_pool_id min_interactive
desired_interactive max_interactive
max_virtual_slots virtual_eth_adapters
virtual_scsi_adapters virtual_serial_adapters
sni_config_mode sni_device_ids
auto_ipl power_ctrl_lpar_ids

Instead of entering configuration information on the command line with the -i flag, the information
can instead be placed in a file, and the filename specified with the -f flag.

For more information about the valid attributes listed in this command example, refer to the
Command Attributes table.

Deleting an LPAR Profile

Use the rmsyscfg command to remove a partition profile. Type the following:

rmsyscfg -r prof -m <managed system> -n <profile name> -p <partition name>

The partition ID can be specified instead of the partition name by using the --id parameter in
place of the -p parameter.
