Hardware Management Console (HMC) Case Configuration Study for LPAR Management
Dino Quintero
Sven Meissner
Andrei Socoliuc
Note: POWER4™ systems use a serial line to communicate with the HMC.
This has changed with POWER5™. The POWER5 systems use a LAN
connection to communicate with the HMC. POWER4 and POWER5 systems
cannot be managed by the same HMC.
Table 1 lists the current HMC hardware models supported in a POWER4 or
POWER5 environment. The HMCs are available as desktop or rack-mountable
systems.
Table 1 Types of HMCs
Type | Supported managed systems | HMC code version
The HMC 3.x code version is used for POWER4 managed systems and HMC 4.x
for POWER5 systems (iSeries™ and pSeries®). For managing POWER5
pSeries machines, HMC 4.2 code version or later is required.
Table 2 shows a detailed relationship between the POWER5 pSeries servers and
the supported HMCs.
The maximum number of HMCs supported by a single POWER5 managed
system is two. The number of LPARs managed by a single HMC has been
increased from earlier versions of the HMC to the current supported release as
shown in Table 3.
HMC connections
When installing the HMC, you have to consider the number of network
adapters required. You can have up to three Ethernet adapters installed on an
HMC. There are several connections to consider when planning the HMC
installation:
HMC to the FSP (Flexible Service Processor): This is an IP-based network used
for management functions of the POWER5 systems, for example, power
management and partition management.
POWER5 systems have two interfaces (T1 and T2) available for connections
to the HMC. We recommend using both of them for a redundant configuration
and high availability. Depending on your environment, you have multiple
options for configuring the network between the HMC and the FSP.
The default mechanism for allocating the IP addresses of the FSP ports is
dynamic. The HMC can be configured as a DHCP server that allocates the
IP address when the managed system is powered on. Static IP address
allocation is also an option: you can configure the FSP ports with a static IP
address by using the Advanced System Management Interface (ASMI)
menus. However, not all POWER5 servers support this mechanism of
allocation. Currently, the p575, p590, and p595 servers support only DHCP.
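You can verify which service processor connections the HMC has discovered,
and which IP addresses are in use, from the HMC command line. The following
is a minimal sketch, assuming the HMC 4.x restricted shell (output fields may
vary by code level):

lssysconn -r all

This lists all connections to the service processors, including the IP address
assigned to each FSP port and the connection state.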
When planning for the HMC installation, also consider that the distance between
the HMC and the managed system must be within 8 m (26 ft). This distance
complies with IBM maintenance rules.
Partitioning considerations
With POWER5 systems, greater flexibility was introduced in setting up partition
resources by enabling the Advanced POWER Virtualization functions, which
provide:
POWER™ Hypervisor: Supports partitioning and dynamic resource
movement across multiple operating system environments.
Shared processor LPAR (micro-partitioning): Enables you to allocate less
than a full physical processor to a logical partition.
Virtual LAN: Provides network virtualization capabilities that allow you to
prioritize traffic on shared networks.
Virtual I/O (VIO): Provides the ability to dedicate I/O adapters and devices to
a virtual server, thus allowing the on demand allocation and management of
I/O devices.
Capacity on Demand (CoD): Allows system resources such as processors
and memory to be activated on an as-needed basis.
Simultaneous multi-threading (SMT): Allows applications to increase overall
resource utilization by presenting each physical processor as multiple logical
CPUs through hardware multi-threading. SMT is a feature supported only in
AIX 5L Version 5.3 and Linux at an appropriate level.
Multiple operating system support: Logical partitioning allows a single server
to run multiple operating system images concurrently. On a POWER5 system
the following operating systems can be installed: AIX 5L™ Version 5.2 ML4 or
later, SUSE Linux Enterprise Server 9 Service Pack 2, Red Hat Enterprise
Linux ES 4 QU1, and i5/OS.
At the beginning of partition size planning, you have to consider that the amount
of memory allocated to these three regions is not usable for the physical
memory allocation of the partition.
The number of I/O drawers and the different ways I/O is used, such as a shared
environment, also affect the amount of memory the hypervisor uses.
Note: The number of VIOs, the number of partitions, and the number of I/O
drawers affect the hypervisor memory.
Note: The larger the maximum memory value of a partition, the larger the
amount of memory that is not usable for the physical memory allocation of the
partition.
To calculate your desired and maximum memory values accurately, we
recommend that you use the LPAR Validation Tool (LVT). This tool is available at:
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm
Figure 1 shows an example of how you can use the LPAR Validation Tool to verify
a memory configuration. In Figure 1, there are four partitions (P1..P4) defined on
a p595 system with a total of 32 GB of memory.
The memory allocated to the hypervisor is 1792 MB. When we change the
maximum memory parameter of partition P3 from 4096 MB to 32768 MB, the
memory allocated to the hypervisor increases to 2004 MB as shown in Figure 2.
Micro-partitioning
With POWER5 systems, increased flexibility is provided for allocating CPU
resources by using the micro-partitioning features. The following parameters can
be set up on the HMC:
Dedicated/shared mode, which allows a partition to allocate either full CPUs
or partial processing units. The minimum CPU allocation unit for a partition is 0.1.
Minimum, desired, and maximum limits for the number of CPUs allocated to a
dedicated partition.
Minimum, desired and maximum limits for processor units and virtual
processors, when using the shared processor pool.
Capped/uncapped and weight (shared processor mode).
Table 4 summarizes the CPU partitioning parameters with their range values,
and indicates if a parameter can be changed dynamically.
Min/Desired/Max values for CPUs, processing units, and virtual processors can
be set only in the partition’s profile. Each time the partition is activated, it tries to
acquire the desired values. A partition cannot be activated unless at least the
minimum values of the parameters can be satisfied.
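You can inspect the values stored in a partition’s profile from the HMC command
line. The following is a sketch using the system and partition names from our
scenario; the attribute names are assumptions that should be verified at your
HMC code level:

ssh hscroot@hmctot184 "lssyscfg -r prof -m p550_itso1 --filter lpar_names=julia -F name:min_proc_units:desired_proc_units:max_proc_units:min_procs:desired_procs:max_procs"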
Note: Take into consideration that changes in the profile do not take effect
until you power off and start up your partition. Rebooting the operating
system is not sufficient.
Capacity on Demand
The Capacity on Demand (CoD) for POWER5 systems offers multiple options,
including:
Permanent Capacity on Demand:
– Provides system upgrades by activating processors and/or memory.
– No special contracts and no monitoring are required.
– Purchase agreement is fulfilled using activation keys.
On/Off Capacity on Demand:
– Enables the temporary use of a requested number of processors or
amount of memory.
– On a registered system, the customer selects the capacity and activates
the resource.
– Capacity can be turned ON and OFF by the customer; usage information
is reported to IBM.
– This option is post-pay. You are charged at activation.
Reserve Capacity on Demand:
– Used for processors only.
– Prepaid debit temporary agreement, activated using license keys.
– Adds reserve processor capacity to the shared processor pool, used if the
base shared pool capacity is exceeded.
– Requires AIX 5L Version 5.3 and the Advanced POWER Virtualization
feature.
Trial Capacity on Demand:
– Tests the effects of additional processors and memory.
– Partial or total activation of installed processors and/or memory.
– Resources are available for a fixed time, and must be returned after the
trial period.
– No formal commitment required.
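The current CoD status of a managed system can be queried from the HMC
command line with the lscod command. A sketch for On/Off CoD, using the HMC
and system names from our scenario (verify the options at your HMC code
level):

ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r proc"
ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r mem"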
HMC sample scenarios
The following examples illustrate POWER5 advanced features.
Figure 4 on page 12 shows the initial configuration. Node nils, a partition of a p550
system, is a production system with 2 CPUs and 7 GB of memory. We will force
node nils to fail. Node julia, also a partition of a p550 system, is the standby
system for nils. The resources for julia are minimal: just 0.2 processing units and
1 GB of memory.
In case of takeover, On/Off CoD will be activated, making two more CPUs and
8 GB more memory available to add to a partition. We use On/Off CoD for this
procedure because you pay only for the days that CoD is actually active. You
have to report to IBM monthly the number of days CoD has been used; this can
be done automatically by the Service Agent. For more information, refer to
“APPENDIX” on page 40.
Furthermore, the resources made available by activating On/Off CoD can be
assigned to both dedicated and shared partitions. After CoD activation, the CPU
and memory resources will be assigned to julia, so that julia has the same
resources as nils had.
After nils is up and running again and ready to reacquire the application, julia will
reduce its resources back to the initial configuration and deactivate CoD.
Figure 4 Initial configuration: two p550 systems (2 CPUs/8 GB and 4 CPUs/8 GB) managed by HMC 1 and HMC 2; the partitions include oli (production, 1 dedicated CPU, 5120 MB) and nicole_vio (0.8 shared CPU, 1024 MB)
Table 5 shows our configuration in detail. Our test system has only one 4-pack of
DASD available; therefore, we installed a VIO server to have sufficient disks
available for our partitions.
Table 6 Memory allocation
Memory (MB)
In the management area of the HMC main panel, select HMC Management →
HMC Configuration. In the right panel select Enable or Disable Remote
Command Execution and select Enable the remote command execution
using the ssh facility (see Figure 5).
The HMC provides firewall capabilities for each Ethernet interface. You can
access the firewall menu using the graphical interface of the HMC. In the
“Navigation Area” of the HMC main panel, select HMC Management →
HMC Configuration. In the right panel, select Customize Network Setting,
press the LAN Adapters tab, choose the interface used for remote access, and
press Details. In the new window, select the Firewall tab. Check that the ssh port
is allowed for access (see Figure 6).
The packages can be found on the AIX 5L Bonus Pack CD. To get the latest
release packages, access the following URL:
http://sourceforge.net/projects/openssh-aix
OpenSSL is required for installing the OpenSSH package. You can install it from
the AIX 5L Toolbox for Linux CD, or access the Web site:
http://www.ibm.com/servers/aix/products/aixos/linux/download.html
After the installation, verify that the openssh filesets are installed by using the
lslpp command on the AIX node, as shown in Example 1.
openssh.msg.en_US 3.8.0.5302 C F Open Secure Shell Messages -
Log in with the user account used for remote access to the HMC. Generate the
ssh keys using the ssh-keygen command. In Example 2, we used the root
user account and specified the RSA algorithm for encryption. The security
keys are saved in the /.ssh directory.
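Example 2 itself is not reproduced in this excerpt; the generation step is
essentially the following sketch, run as root on the AIX node:

root@julia/>ssh-keygen -t rsa

Accepting the defaults stores the key pair as id_rsa and id_rsa.pub in the /.ssh
directory; an empty passphrase allows unattended access.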
Distribute the public key in file id_rsa.pub to the HMC. In Example 3, we use
the mkauthkeys command to register the key for the hscroot account. The key
will be saved in the file authorized_keys2 in the $HOME/.ssh directory on the
HMC.
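Example 3 is not reproduced in this excerpt; a sketch of the registration step,
assuming the public key was generated as described above, is:

root@julia/>ssh hscroot@hmctot184 "mkauthkeys --add '$(cat /.ssh/id_rsa.pub)'"

You are prompted for the hscroot password once; afterwards, ssh access to the
HMC works without a password.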
Now, we force node nils to fail and prepare to start the takeover scenario (see
Figure 7).
Figure 7 Takeover scenario: CoD activation and dynamic LPAR operations performed through HMC 1 and HMC 2; partition oli (production, 1 dedicated CPU, 5120 MB) is unaffected
Figure 8 Activating the On/Off CoD
Example 4 shows how node julia activates 2 CPUs and 8 GB of RAM for 3 days
by running the chcod command on the HMC via ssh.
Memory:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r
mem -q 8192 -d 3"
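Example 4 shows only the memory activation. The matching CPU activation is
not reproduced in this excerpt; inferred from the deactivation commands in
Example 9, and therefore a sketch, it is:

CPU:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r proc -q 2 -d 3"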
Perform the dynamic LPAR operations to increase the CPU units and
memory capacity of the target partition.
After enabling the CoD feature for CPU, the additional processors are
automatically added in the shared processor pool and can be assigned to any
shared or dedicated partition.
Note: If you use Reserve CoD instead of On/Off CoD to temporarily activate
processors, you can assign the CPUs to shared partitions only.
In order for node julia to operate with the same resources as node nils had, we
have to add 1.8 processing units and 6.5 GB memory to this node.
In the Server and Partition panel on HMC, right-click on partition julia and select
Dynamic Logical Partitioning → Processor Resources → Add. In the dialog
window, enter the desired values for additional processing units and virtual
processors as shown in Figure 9.
In Example 5, we run the lshwres command on the HMC to get the current
values of the CPU units and virtual processors used by node julia, before and
after increasing the processing units.
Example 5 Perform the CPU addition from the command line
root@julia/>lsdev -Cc processor
proc0 Available 00-00 Processor
lpar_name:curr_proc_units:curr_procs
julia:0.2:1
lpar_name:curr_proc_units:curr_procs
julia:2.0:2
root@julia/>
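The lshwres invocations that produced the before and after values in Example 5
are not shown; a plausible form, offered as a sketch (the filter syntax may vary
by HMC release), is:

ssh hscroot@hmctot184 "lshwres -r proc -m p550_itso1 --level lpar --filter lpar_names=julia -F lpar_name:curr_proc_units:curr_procs --header"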
In the Server and Partition panel, right-click partition julia and select Dynamic
Logical Partitioning → Memory Resources → Add. In the dialog window,
enter the desired amount of memory to add as shown in Figure 10 on page 20.
Figure 10 Add memory to partition
lpar_name:curr_mem
julia:1024
lpar_name:curr_mem
julia:7168
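As with the processor values, the memory values shown here were likely
obtained with lshwres; a sketch of the query is:

ssh hscroot@hmctot184 "lshwres -r mem -m p550_itso1 --level lpar --filter lpar_names=julia -F lpar_name:curr_mem --header"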
When node nils is back and ready to reacquire the applications running on
node julia, we reduce the memory and CPU to the initial values and turn off CoD.
In order for node julia to operate with the initial resources, we have to remove 1.8
processing units and 6 GB of memory from this partition.
1. Perform dynamic LPAR operations to decrease the CPU units and memory
capacity of the target partition.
In the Server and Partition panel, right-click partition julia and select Dynamic
Logical Partitioning → Memory Resources → Remove. In the dialog window,
enter the desired amount of memory to remove as shown in Figure 11.
– Using the command line interface:
Example 7 shows how to deallocate 6 GB of memory from node julia via the
command line.
Example 7 Deallocating the memory using the command line interface (CLI)
root@julia/>lsattr -El mem0
goodsize 7168 Amount of usable physical memory in Mbytes False
size 7168 Total amount of physical memory in Mbytes False
lpar_name:curr_mem
julia:7168
lpar_name:curr_mem
julia:1024
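The chhwres command that performs the 6 GB (6144 MB) removal is not shown
in the excerpt above; a sketch of the likely invocation is:

ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o r -p julia -q 6144"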
In the Server and Partition panel on HMC, right-click partition julia and select
Dynamic Logical Partitioning → Processor Resources → Remove. In the
dialog window, enter the desired values for processing units and virtual
processors as shown in Figure 12 on page 23.
Figure 12 Perform the deallocation for the CPU units
– Using the command line interface:
Example 8 shows how to remove 1.8 processing units from node julia.
lpar_name:curr_proc_units:curr_procs
julia:2.0:2
lpar_name:curr_proc_units:curr_procs
julia:0.2:1
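A sketch of the likely chhwres invocation behind Example 8, removing 1.8
processing units and one virtual processor:

ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r proc -o r -p julia --procunits 1.8 --procs 1"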
2. Deactivate the On/Off CoD for CPU and memory.
For an example of the graphical interface, refer to the menu presented in Figure 8
on page 17, and the section “Activating On/Off CoD using the command line
interface” on page 17.
Example 9 shows how to use the command line interface to deactivate the
processor and memory CoD resources.
Example 9 Disabling all allocated CoD resources for CPU and memory
Memory:
ssh hscroot@hmctot184 chcod -m p550_itso1 -o d -c onoff -r mem
CPU:
ssh hscroot@hmctot184 chcod -m p550_itso1 -o d -c onoff -r proc
If there is more than one uncapped partition, you can use the weight
parameter to determine the priority. This value is used proportionally: the higher
the weight, the higher the priority to acquire processing units.
To access the menus, from the Server Management menu of the HMC,
right-click on the partition name and select Dynamic Logical Partitioning →
Processor Resources → Add. Refer to Figure 13 on page 25.
Figure 13 Toggle the Capped/Uncapped option
You have to consider the number of virtual processors in order to be able to use
all the CPUs from the shared processor pool.
In our example, after the CoD operation, we have 3.0 available processing units
in the shared processor pool and 1 dedicated processor allocated to node oli.
The partition nicole_vio uses 0.8 processing units and is capped.
Partition julia uses 0.2 units and 1 virtual processor, and can use 1 physical CPU.
Adding 1 virtual CPU allows this partition to use a maximum of 2.0 processing
units.
In Example 10, we produced heavy CPU load on partition julia while the other
partition using the shared processor pool is in an idle state. The physc parameter
shows the actual number of physical processing units used by partition julia.
Example 10 Output of topas -L
Interval: 2 Logical Partition: julia Tue Mar 31 16:20:46 1970
Psize: 3 Shared SMT OFF Online Memory: 512.0
Ent: 0.20 Mode: UnCapped Online Logical CPUs: 2
Partition CPU Utilization Online Virtual CPUs: 2
%usr %sys %wait %idle physc %entc %lbusy app vcsw phint %hypv hcalls
100 0 0 0 2.0 999.70 100.00 1.00 200 0 0.0 0
===============================================================================
LCPU minpf majpf intr csw icsw runq lpa scalls usr sys _wt idl pc lcsw
Cpu0 0 0 527 258 234 4 100 65 100 0 0 0 1.00 83
Cpu1 0 0 211 246 209 2 100 520 100 0 0 0 1.00 117
Example of using two uncapped partitions and the weight
For the example of two uncapped partitions using the same shared processor
pool, we use the configuration described in Table 7.
We created a heavy CPU load on both uncapped partitions and verified their load
using the topas -L command.
Example 11 and Example 12 show the output of the topas -L command from
nodes oli and julia, with both partitions having the same weight value.
Cpu2 0 0 757 771 699 6 100 15 100 0 0 0 0.37 2172
Cpu3 0 0 712 712 698 6 100 27 100 0 0 0 0.37 2178
We changed the weight for partition oli to the maximum value of 255, while
partition julia remains set to 128.
The operation can be performed dynamically. To access the GUI menus,
from the Server Management menu of the HMC, right-click on the partition
name and select Dynamic Logical Partitioning → Processor Resources
→ Add (as shown in Figure 14).
When both partitions are heavily CPU loaded, the amount of processing units
allocated from the shared processor pool is proportional to the weight values of
the partitions.
Cpu2 0 0 756 740 700 8 100 19 100 0 0 0 0.42 2683
Cpu3 0 0 702 703 699 8 100 2 100 0 0 0 0.41 2652
In Example 13 and Example 14, the physc parameter has different values for the
two nodes. Node oli and node julia each have 1.0 entitled processing units and
100% CPU usage. The shared processor pool has 3.0 units, so the idle capacity
of 1.0 unit is shared by partitions julia and oli, proportionally to their weights. In
our case, partition oli adds 255/(255+128) of the 1.0 idle processing unit, while
partition julia adds 128/(255+128).
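With these weights, the idle 1.0 processing unit is divided as approximately
255/383 ≈ 0.67 units for oli and 128/383 ≈ 0.33 units for julia, so physc settles at
about 1.67 for oli and 1.33 for julia.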
Node oli has an increased processing load during the workday, 7 AM to 7 PM,
and is idle most of the time outside this interval. Partition julia has an increased
processing load from 10 PM to 5 AM and is idle the rest of the time. Since both
partitions are uncapped, CPU capacity shifts automatically with demand; we only
need to reallocate part of the memory to partition julia during the idle period of
partition oli.
This example shows how to implement the dynamic LPAR operations for
memory via the HMC scheduler (an equivalent command line form is shown
after the list). We implement two scheduled operations that run every day:
9 PM: Move 2 GB of memory from partition oli to partition julia.
6 AM: Move 2 GB of memory back from partition julia to partition oli.
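The same movement can also be issued directly from the HMC command line;
the scheduler runs this kind of operation internally. A sketch using the chhwres
move operation, with the HMC and system names from our earlier examples:

ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o m -p oli -t julia -q 2048"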
The following steps are performed from the HMC to configure the scheduled
dynamic LPAR operations:
1. On the HMC main configuration panel, select HMC Management → HMC
Configuration. Then, in the right panel select Schedule operations. In the
new window select the target node for the dynamic LPAR operation as shown
in Figure 15.
Figure 16 Selecting the scheduled operation
3. Next, in the Date and Time tab, select the time for the beginning of the
operation and a time window where the operation can be started as shown in
Figure 17.
4. Click on the Repeat tab and select the days of the week for running the
scheduler. We selected each day of the week for an infinite period of time as
shown in Figure 18 on page 31.
Figure 18 Selecting the days of the week for the schedule
5. Click on the Options tab and specify the details of the dynamic LPAR
operation as shown in Figure 19.
Note: By default, the time-out period for a dynamic LPAR operation is 5
minutes. In our test case, the memory reallocation was performed for 2 GB of
RAM. Moving larger amounts of memory might require more time to complete.
6. Repeat steps 1 through 5 to create the reverse operation, specifying julia
as the target partition for the scheduled operation, and 06:00:00 AM for the
start window of the scheduler.
7. After setting up both operations, their status can be checked in the
Customize Scheduled Operations window for each of the nodes as shown
in Figure 20.
8. To check the completion of the scheduled operations, display the Console
Events Log by selecting HMC Management → HMC Configuration →
View Console Events as shown in Figure 21.
Comparing profile values with current settings
If you perform a dynamic LPAR operation and you want to make this change
permanent, you have to update the appropriate profile. Otherwise, after the next
shutdown and power on of the LPAR, the partition will revert to the old properties,
which might not be desired.
In Example 15, hmc1 and hmc2 are monitored. To use this script, you have to
replace hmc1 and hmc2 with the names of your HMCs. The number of HMCs is
variable, as long as they are in quotation marks and comma separated.
Place this script on a partition that has ssh access with a dedicated user to every
HMC you want to monitor. In the example, we used the user hscroot. You must
be able to get access without having to type the password. To set this up, please
refer to “Enabling ssh access to HMC” on page 13.
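The script itself is not reproduced in this excerpt. The following is a simplified
sketch that produces similar output for the desired memory value only; the loop
structure and field names are assumptions, not the original script, which also
compares the other minimum, desired, and maximum values:

#!/bin/ksh
# Compare profile values with current settings on each HMC.
# Requires password-less ssh access as hscroot to each HMC
# (see "Enabling ssh access to HMC" on page 13).
# Simplification: assumes one profile per partition.
for HMC in hmc1 hmc2; do
  for SYS in $(ssh hscroot@$HMC "lssyscfg -r sys -F name"); do
    for LPAR in $(ssh hscroot@$HMC "lssyscfg -r lpar -m $SYS -F name"); do
      PROF=$(ssh hscroot@$HMC "lssyscfg -r prof -m $SYS \
             --filter lpar_names=$LPAR -F desired_mem")
      CURR=$(ssh hscroot@$HMC "lshwres -r mem -m $SYS --level lpar \
             --filter lpar_names=$LPAR -F curr_mem")
      if [ "$PROF" = "$CURR" ]; then
        echo "$HMC $SYS $LPAR des_mem: prof= $PROF"
      else
        echo "$HMC $SYS $LPAR des_mem: prof= $PROF curr= $CURR"
      fi
    done
  done
done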
Here is a sample output from the script shown in Example 15 on page 33.
hmc2 cec-green green2 max_mem: prof= 32768
hmc2 cec-green green2 max_procs: prof= 4
hmc2 cec-green green2 min_procs: prof= 1
hmc2 cec-green green2 des_procs: prof= 2
hmc2 cec-green green3 min_mem: prof= 2048
hmc2 cec-green green3 des_mem: prof= 12288 curr= 4608
hmc2 cec-green green3 max_mem: prof= 32768
hmc2 cec-green green3 max_procs: prof= 4
hmc2 cec-green green3 min_procs: prof= 1
hmc2 cec-green green3 des_procs: prof= 2 curr= 1
In Example 16 on page 34, you can see that LPAR blue6 has 2 GB of memory
configured instead of the desired 4 GB, and that LPAR blue4 currently works with
one processor instead of the desired two. LPAR vio2 is down; therefore, its
current values are all set to 0.
The HMCs are automatically notified of any changes that occur in the managed
system. If there is a change on one HMC, it is automatically visible on the
second one a couple of seconds later. Likewise, if the managed system sends a
state or an operator panel value, for example when an LPAR is starting, the
different states and LED codes will be visible on both HMCs at the same time.
There is a locking mechanism to prevent basic conflicts. For the time it takes to
handle an operation, the HMC gets exclusive control over the interface of the
managed system. After the operation is completed, the lock is released and the
interface is available for further commands.
Important: When using a service agent, enable it on one HMC only to prevent
duplicated service calls.
Working with two HMCs eases the planning of HMC downtime for software
maintenance, as no downtime is needed: while one HMC undergoes the code
update, the other one continues to manage the environment. This allows one
HMC to run at the new fix level while the other continues to run the previous
one. You should take care to move both HMCs to the same level to provide an
identical user interface.
Figure 22 Two HMCs in different networks, each running a DHCP server and connected to FSP ports T1 and T2 of the P5 managed systems
Figure 22 describes two HMCs in different networks, both running DHCP servers.
The CEC uses two LAN adapters: one gets its IP address from HMC1 and the
second one from HMC2.
If you use your HMC as a DHCP server for the CEC, be sure to have the HMC up
and running before powering on the CEC; otherwise, the CEC will get its default
IP address and will not work in your network.
Note: Either eth0 or eth1 can be a DHCP server on the HMC.
The managed system will automatically be visible on the HMCs. This is our
recommended way to achieve high availability with HMCs, and it is supported by
all POWER5 systems.
Figure 23 shows two HMCs on the same network, using static IP addresses.
Figure 23 HMCs connected to the FSP using 1 network and static IP addresses
In Figure 23, all systems, HMCs and CECs, have their own fixed IP addresses,
so you do not need to consider the sequence in which they have to be started.
Important: For p5-575, p5-590, and p5-595 systems, fixed IP addresses are
not supported. You have to use the DHCP server.
The fixed IP address can be set by launching the ASMI menu. Please refer to
“APPENDIX” on page 40 for more information on how to launch the ASMI
menu.
A new system is shipped with default IP addresses. You can change these
IP addresses by connecting your laptop to either T1 or T2 of the CEC. Assign an
IP address to your laptop’s interface that is in the same network as the
respective network adapter of your CEC. For T1, it is network 192.168.2.0/24,
and for T2, 192.168.3.0/24. Do not use the same IP addresses that the CEC
already has assigned.
Note: For p510, p520, p550, and p570 at first startup, a default IP address is
configured on the FSP interfaces if a DHCP server is not available:
eth0 (external T1): 192.168.2.147
eth1 (external T2): 192.168.3.147
Run a browser on your laptop and type in the IP address of the respective
network adapter of the CEC:
https://192.168.2.147
Log in to the ASMI menu using a username and a password. In the main ASMI
panel, select Network Services → Network Configuration. Using the menu
from Figure 24, you can configure the FSP Ethernet interfaces eth0 and eth1.
For more detailed information, refer to “Access to the ASMI menu” on page 40.
To add a managed system, select the Server Management bar and choose Add
Managed System(s) as shown in Figure 25.
If you want to avoid such problems, you can use fixed IP addresses.
APPENDIX
The following sections contain additional information to be considered when
dealing with HMCs.
Figure 26 Accessing the ASMI menu using WebSM
For further information related to the access to the ASMI menus, refer to the
“ASMI Setup Guide” at:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
Note: Before configuring the WebSM client, ensure that your name resolution
works properly. The HMC hostname must be resolved by the PC client station.
If a DNS is not configured, then put the HMC hostname in the hosts file. For
Windows XP, the file is C:\Windows\system32\drivers\etc\hosts.
Download the WebSM client code from the HMC. Open a browser and
access the following URL:
http://<hmchost>/remote_client.html
Log in to the HMC using the hscroot account. Run the InstallShield for your
platform.
Access the secure WebSM download page and run the InstallShield program
for your platform:
http://<hmchost>/remote_client_security.html
Verify the WebSM installation by starting the WebSM client program and
connecting to the HMC. The next steps describe how to configure the secure
connection to the WebSM server.
The following steps need to be performed from the HMC console. The Security
Management panel is not available via WebSM:
Choose one of the HMCs as the Certificate Authority. In the main menu of the
HMC, select System Manager Security. Select Certificate Authority, and
then Configure this system as a Web-based System Manager
Certification Authority. A panel will be displayed as shown in Figure 27.
For our example, we perform the following actions:
– Enter an organization name: ITSO.
– Verify the certificate expiration date is set to a future date.
– Click the OK button, and a password is requested at the end of the
process. The password is used each time you perform operations on the
Certification Authority Server.
The next step is to generate the authentication keys for the WebSM clients
and servers:
– Private keys will be installed on the HMCs.
– Public keys will be installed on WebSM remote clients.
From the main HMC panel, select System Manager Security, select Certificate
Authority, and then, in the right window, Generate Servers Private Key Ring
Files. Enter the password set in the previous step. A new menu is displayed for
defining options, as shown in Figure 28.
At this menu:
– Add both HMCs in the list of servers (the current HMC should already be
listed): hmctot184.itso.ibm.com, hmctot182.itso.ibm.com
– Enter the organization name: ITSO.
– Verify that the certificate expiration date is set to a future date.
Install the previously generated private key on the current HMC.
Copy the private key ring file to removable media for installing it on the second
HMC.
Figure 30 Copying the private key ring file to removable media
Tip: To transfer the security keys from the HMC, you can use the floppy drive
or a flash memory device. Plug the device into the USB port before running
the copy procedure; it will then show up in the menu as shown in Figure 30.
Copy the private key from removable media to the second HMC.
Insert the removable media in the second HMC. From the HMC menu select:
System Manager Security → Server Security. In the right window, select
Install the private key ring file for this server. A new window is displayed for
selecting the removable media containing the private key for the HMC (see
Figure 31 on page 46).
Figure 31 Installing the private key ring file for the second HMC
Copy the public key ring file to removable media for installing the key file on
the client PC. Select System Manager Security → Certificate Authority,
and in the right panel, select Copy this Certificate Authority Public Key
Ring File to removable media. A dialog panel is displayed (see Figure 32 on
page 47).
Figure 32 Save the public key ring file to removable media
You will be provided with a second window to specify the format of the file to
be saved. Depending on the platform of the WebSM client, you can select
either:
– HMC or AIX client: A tar archive is created on the selected media.
– PC Client: A regular file is created on the selected media. This option
requires a formatted media.
Note: Two files are saved on the media, containing the public key ring files:
SM.pubkr and smpubkr.zip.
Next, go back to the System Manager Security menu and select Server
Security. Select Configure this system as a Secure WEB based System
Manager Server as shown in Figure 33 on page 48.
Figure 33 Select the security option for the authentication
Next, go to each of your remote clients and copy the PUBLIC key ring file into
the “codebase” directory under WebSM. When you log in via WebSM, you will
see whether the SSL connection is available or not. Verify the checkbox
Enable secure communication (see Figure 35).
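The two configuration lines referenced below are not reproduced in this excerpt.
On the HMC, time synchronization is typically configured with the chhmc
command; the following is a sketch (the NTP server name is hypothetical, and
the options should be verified for your HMC release):

chhmc -c xntp -s enable
chhmc -c xntp -s add -a ntp.itso.ibm.com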
The first line turns on the daemon, and the second specifies the IP address or
hostname of the server to which the HMC will synchronize its time.
Microcode upgrades
The method used to install new firmware depends on the release level of the
firmware that is currently installed on your server. The release can be
determined from the firmware’s filename: 01SFXXX_YYY_ZZZ, where XXX is
the release level.
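For example, in our scenario the installed firmware 01SF220 is at release level
220 and the new firmware 01SF230 is at release level 230, so moving between
them is a release upgrade rather than an update within the same release.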
The microcode update can be performed either by using the HMC or the target
system, when an HMC is not available. The policy for the microcode update can
be changed from the ASMI. For further details, refer to the ASMI Setup Guide at:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
Attention: Before updating the microcode of the system, we recommend that
you carefully read the installation notes of the version you plan to install. For
further information, refer to the microcode download page for eServer
pSeries systems at:
http://techsupport.services.ibm.com/server/mdownload
In our example, we use a p550 system attached to the HMC. We select the FTP
server method for installing the microcode update from version 01SF220 to the
new version 01SF230. We downloaded the RPM and XML files from the
microcode download Web page and put them on the FTP server. Since we are
upgrading to a new release of firmware, the update is non-concurrent, and a
system power off must be performed before starting the upgrade procedure.
At the beginning of the installation procedure, always check for the latest
version of the HMC code. In our example, we used HMC 4.5. For the latest code
version of the HMC, refer to the Web page:
http://techsupport.services.ibm.com/server/hmc
Steps performed to update the microcode of the p550 system are as follows:
1. Access the Licensed Internal Code Updates menus on the HMC. In the
Management Area, select Licensed Internal Code Maintenance → Licensed
Internal Code Updates (see Figure 36 on page 51). Select Upgrade Licensed
Internal Code to a new release.
Figure 36 Licensed Internal Code Updates menus on the HMC
2. Select the target system (see Figure 37) and click OK.
3. We downloaded the microcode image to an FTP server, so we specify FTP
Site as the LIC Repository (Figure 38).
4. In the details window, enter the IP address of the FTP server, the username
and password for access, and the location of the microcode image (see
Figure 39). After connecting to the FTP server, a license acceptance window
is displayed. Confirm the license agreement and continue with the next step.
5. A new window displays the current and the target release of the firmware
(see Figure 40). Click OK to start the upgrade process.
The update process might take 20 to 30 minutes. When the update operation
ends, the completed status is displayed in the status window, as shown in
Figure 41.
Dual HMC cabling on the IBM 9119-595 and 9119-590 Servers:
http://www.redbooks.ibm.com/abstracts/tips0537.html?Open
ASMI Setup Guide:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
The team that wrote this Redpaper
This Redpaper was produced by a team of specialists from around the world
working at the International Technical Support Organization, Austin Center.
Octavian Lascu
International Technical Support Organization, Austin Center
Tomas Baublys
IBM Germany
Martin Kaemmerling
Bayer Business Services
Beth Norris
Motorola, Inc., Tempe, Arizona
Yvonne Lyon
International Technical Support Organization, Austin Center
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, and service names may be trademarks or service marks of others.