Practical Migration from x86 to LinuxONE
Michel M Beaulieu
Felipe Cardeneti Mendes
Guilherme Nogueira
Lena Roesch
Redbooks
International Technical Support Organization
December 2020
SG24-8377-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to LinuxONE on IBM LinuxONE III LT1 and IBM LinuxONE III LT2.
Contents

Notices  ix
Trademarks  x
Preface  xi
Authors  xi
Now you can become a published author, too!  xii
Comments welcome  xiii
Stay connected to IBM Redbooks  xiii
Part 1. Decision-making  1
Part 2. Migration  45
6.1.6 Configuring zfcp  132
6.1.7 Linux multipathing  133
6.2 Migrating Db2 and its data  134
6.2.1 Preliminary migration steps  134
6.2.2 Data migration using db2move and db2look  136
6.2.3 Migration using the LOAD utility with the FROM CURSOR option  138
6.3 Moving IBM MQ  139
6.3.1 Moving the IBM MQ Web Server and the RESTful API component  141
6.4 Migrating a Node.js application  144
6.5 Migrating WebSphere Application Server Network Deployment cluster  145
6.6 Application Tests  151
Index  205
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, BigInsights®, Db2®, developerWorks®, DS8000®, FICON®, FlashCopy®, GDPS®, IBM®, IBM Cloud®, IBM Garage™, IBM Research®, IBM Security™, IBM Spectrum®, IBM Z®, IBM z Systems®, OMEGAMON®, Parallel Sysplex®, Redbooks®, Redbooks (logo)®, Tivoli®, WebSphere®, z Systems®, z/VM®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
JBoss, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the
United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Two servers are available for LinuxONE: The IBM® LinuxONE III LT1 and IBM LinuxONE III
LT2. We describe these servers in “IBM LinuxONE servers” on page 5.
In addition to running SUSE Linux Enterprise Server and Red Hat Enterprise Linux,
LinuxONE also runs Ubuntu, which is popular on x86 hardware.
Ubuntu, which runs in the cloud, on smartphones, on a computer that remotely controls a
planetary rover for NASA, at many market-leading companies, and across the Internet of
Things, is now available on IBM LinuxONE servers. Together, these two technology
communities deliver the perfect environment for cloud and DevOps. Ubuntu 16.04 on
LinuxONE offers developers, enterprises, and Cloud Service Providers a scalable and secure
platform for next-generation applications that include OpenStack, KVM, Docker, and Juju.
The following are reasons why you would want to optimize your servers through virtualization
using LinuxONE:
- Too many distributed physical servers with low utilization
- A lengthy provisioning process that delays the implementation of new applications
- Limitations in data center power and floor space
- High total cost of ownership (TCO)
- Difficulty allocating processing power for a dynamic environment
This IBM Redbooks® publication provides a technical planning reference for IT organizations
that are considering a migration from their x86 distributed servers to LinuxONE. This book
walks you through some of the important considerations and planning issues that you might
encounter during a migration project. Within the context of a pre-existing UNIX-based or x86
environment, it presents an end-to-end view of the technical challenges and methods
necessary to complete a successful migration to LinuxONE.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Poughkeepsie Center.
Guilherme Nogueira is an IT Specialist for IBM Global Technology and Services in Brazil. He
has years of experience in configuring LinuxONE servers, and supported over 1,800
LinuxONE servers in his previous role. He also worked on automated server provisioning,
supporting LinuxONE deployments for IBM private cloud. He is now part of an internal
application team that maps hardware connections in data centers. He holds a degree in
Information Security and Computer Networking from Faculdade de Tecnologia de Americana
(FATEC Americana). His areas of expertise include Linux, systems automation, security, and
cryptography. Guilherme co-authored and contributed to three other IBM Redbooks
publications.
Lena Roesch is a LinuxONE Client Technical Specialist at IBM in the UK and Ireland. She is
working primarily with organizations that are new to the LinuxONE platform, helping them add
next-level security and stability to their private, public, and hybrid cloud infrastructure. Lena is
involved in various projects supporting businesses in implementing a trusted computing base
for their digital asset and blockchain solutions to increase security and compliance and
provide a high degree of confidence to their customers. She has an MSc in Technology
Management and co-chairs the Guide Share Europe UK Region 101 stream, organizing
introductory sessions and workshops for people new to the IBM platform.
Thanks to the following person for his contributions to this project:

Robert Haimowitz
IBM Garage™ for Systems, Poughkeepsie Center
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Part 1. Decision-making
This part describes the key benefits of, and reasons for, migrating to LinuxONE, and outlines
some of the considerations when deciding to migrate. It concludes with virtualization
concepts that help you understand and compare x86 virtualization with both KVM and
IBM z/VM virtualization.
LinuxONE’s hardened Linux-based software stack can run most open source software
packages, such as databases and data management, virtualization platforms and containers,
automation and orchestration software, and compute-intensive workloads such as
blockchain.
Linux is available on many computing platforms, from set-top boxes and handheld devices to
the largest servers. The flexibility of the operating system allows users to run applications
without worrying about being tied to a particular hardware platform. You have control over the
choice of hardware platform that supports your application. Workloads that run on
LinuxONE benefit from a hardware platform that includes specialized processors,
cryptographic cards with dedicated RISC processors, and a combination of hypervisors that
allow unparalleled flexibility in Linux deployment.
A major benefit of Linux is that it is open source. The software is unencumbered by licensing
fees and its source code is freely available. Hundreds of Linux distributions are available for
almost every computing platform. The following three supported enterprise Linux
distributions1 are available for LinuxONE:
- Red Hat: Red Hat Enterprise Linux (RHEL)
- SUSE: SUSE Linux Enterprise Server
- Canonical: Ubuntu Server
These Linux distributions provide customers who use Linux with various support options,
including 24 x 7 support with one-hour response time worldwide for customers running
production systems. In addition to the Linux operating system, all the major Linux distributions
offer a number of other open source products that they also fully support.
To simplify problem determination, IBM customers can contact IBM in the first instance and, if
it is a new problem with Linux, IBM will work with the distributors to resolve the problem.
The increased interest and usage of Linux resulted from its rich set of features, including
virtualization, security, Microsoft Windows interoperability, development tools, a growing list of
independent software vendor (ISV) applications, performance, and, most importantly, its
multiplatform support.
This multiplatform support allows customers to run a common operating system across all
computing platforms, which means significantly lower support costs and, for Linux, no
incremental license charges. It also offers customers the flexibility of easily moving
applications to the most appropriate platform. For example, many IT organizations choose
Linux for the ability to scale databases across highly scalable hardware.
1 A Linux distribution is a complete operating system and environment. It includes compilers, file systems, and
applications such as Apache (web server), SAMBA (file and print), sendmail (mail server), Tomcat (Java
application server), MySQL (database), and many others.
The doors are designed for acoustics and optimized for air flow. The IBM LinuxONE III LT1
offers air-cooled (internal radiator) or water-cooled systems (WCS).
At the heart of the LinuxONE III LT1 is the new processor chip, which is made with 12 cores
and uses the density and efficiency of 14 nm silicon-on-insulator technology. Running at
5.2 GHz, it delivers increased performance and capacity across a wide range of workloads.
Up to 190 client configurable cores are available, which are known as Integrated Facilities for
Linux (IFLs). The IBM LinuxONE III LT1 includes processor capacity that is represented by
feature codes.
Five processor capacity feature codes are available for the IBM LinuxONE III: Max34, Max71,
Max108, Max145, and Max190. The numbering signifies that, for example, a Max34 can
configure up to 34 IFLs (cores), a Max71 up to 71 IFLs, and so on.
The system offers 8 TB of Redundant Array of Independent Memory (RAIM) per central
processing complex (CPC) drawer and up to 40 TB per system. RAIM is intended to provide
redundancy for primary memory, sockets, and memory channels for more reliability and
availability.
IBM LinuxONE III LT1 also integrates new hardware compression capabilities, which deliver
greater compression throughput than previous generation systems. This on-chip compression
coprocessor uses industry-standard compression algorithms and can reduce data storage
requirements and costs. This compression can also increase data transfer rates to boost
throughput above comparable x86 CPUs, all without adversely impacting response times.
Each core includes a dedicated coprocessor for cryptographic functions, which is known as
the Central Processor Assist for Cryptographic Functions (CPACF). CPACF supports
pervasive encryption and provides hardware acceleration for encryption operations.
For more information about the IBM LinuxONE III, see this web page.
(Table: IBM LinuxONE III LT1 specifications summary, covering cryptography, disk
connectivity, network connectivity, supported Linux distributions such as Red Hat Enterprise
Linux (RHEL 6.10, RHEL 7.7, and 8.0), and supported hypervisors.)
The LinuxONE III LT2 is based on the same 12-core processor chip as the LinuxONE III LT1,
which leverages the density and efficiency of 14 nm silicon-on-insulator technology. This
model is available with five feature-based sizing options: Max4, Max13, Max21, Max31, and
Max65. The LinuxONE III LT2 design incorporates two central processor complex (CPC)
drawers for the Max65. The numbering signifies that, for example, a Max21 can configure up
to 21 IFLs (cores), a Max31 up to 31 IFLs, and so on. The cores run at 4.5 GHz.
The system offers 8 TB of Redundant Array of Independent Memory (RAIM) per CPC drawer
and up to 16 TB total per LinuxONE III LT2 system, depending on the configuration. RAIM is
intended to provide redundancy for primary memory, sockets, and memory channels for
added reliability and availability. IBM Virtual Flash Memory (VFM) is now in the RAIM and
provides high levels of availability and performance.
As with the LinuxONE III LT1, the LT2 model also brings an integrated storage option to
LinuxONE by supporting carrier cards into which NVMe SSDs can be plugged. It provides the
low latency and high I/O throughput that can help with real-time analytics, memory-intensive
and fast storage workloads, such as streaming, paging and sorting, and traditional
applications, such as relational databases.
An Integrated Accelerator for zEDC is available on the LinuxONE III LT2 processor chip, so
clients no longer need to purchase zEDC Express adapters for their servers. The Integrated
Accelerator for zEDC provides value for existing and new compression users, along with
lower CPU consumption for compression.
Pervasive usage that is enabled in highly virtualized environments gives all LPARs and virtual
machines full access to the accelerator. Therefore, customers no longer must choose which
Linux guests and applications can use the accelerator for compression.
(Table: IBM LinuxONE III LT2 specifications summary: up to 65 IFLs and 64 GB - 16 TB of
memory, covering cryptography, disk connectivity, network connectivity, supported Linux
distributions such as Red Hat Enterprise Linux (RHEL 6.10, RHEL 7.7, and 8.0), and
supported hypervisors.)
IBM LinuxONE provides the highest levels of availability (near 100 percent uptime with no
single point of failure), performance, throughput, and security. End-to-end security is built in
with isolation at each level in the stack, and provides the highest level of certified security in
the industry.
IBM LinuxONE delivers on the promise of a flexible, secure, and smart IT architecture that
can be managed seamlessly to meet the requirements of today’s fast-changing business
climate.
2 PR/SM is a standard component of all IBM LinuxONE models that enables LPARs to share system resources.
PR/SM divides physical system resources, both dedicated and shared, into isolated logical partitions. Each
partition is like an independent system running its own operating environment. It is possible to add and delete
resources, such as processors, I/O, and memory, across partitions while they are actively in use.
Note: For more information about Common Criteria, Evaluation Assurance Levels,
Protection Profiles, and a list of certified products, see this website.
– IBM Dynamic Partition Manager provides facilities to define and run virtualized
computing systems by using a firmware-managed environment that coordinates the
physical system resources shared by the partitions. The partition resources include
processors, memory, network, storage, crypto, and accelerators.
– Both the industry-leading virtualization hypervisor z/VM and the open source
hypervisor kernel-based virtual machine (KVM) are supported on all IBM LinuxONE
models.
– PR/SM, z/VM, and KVM employ hardware and firmware innovations that make
virtualization part of the basic fabric of the IBM LinuxONE platform.
– IBM HiperSockets allows up to 32 virtual LANs, enabling memory-to-memory
TCP/IP communication between partitions.
Scalability:
– IBM LinuxONE III Model LT1 scales to 190 physical processors and up to 40 TB of
memory.
– IBM LinuxONE III Model LT2 scales to 65 physical processors and up to 16 TB of
memory.
– LinuxONE III Model LT1 can process 1 trillion web transactions per day and can
support thousands of virtual servers or up to two million containers on a single system.
Security:
– LinuxONE’s pervasive encryption capabilities allow you to encrypt massive amounts of
data with little affect on your system performance. The LinuxONE hardware benefits
from encryption logic and processing on each processor chip in the system.
– The Central Processor Assist for Cryptographic Function (CPACF) is well-suited for
encrypting large amounts of data in real time because of its proximity to the processor
unit. CPACF supports:
• DES
• TDES
• AES-128
• AES-256
• SHA-1
• SHA-2
• SHA-3
• True Random Number Generator
With the LinuxONE III, CPACF supports Elliptic Curve Cryptography clear key,
improving the performance of Elliptic Curve algorithms.
The following algorithms are supported:
• EdDSA (Ed448 and Ed25519)
• ECDSA (P-256, P-384, and P-521)
• ECDH (P-256, P-384, P-521, X25519, and X448)
Protected key signature creation is also supported.
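To verify these capabilities from a Linux guest, you can check the CPACF-related facility
flags and, if the libica package is installed, list the algorithms that are accelerated in
hardware. This is a minimal sketch; flag names and tool output vary by distribution and
hardware level:

   # The features line should report msa (message security assist) facilities
   grep features /proc/cpuinfo

   # With the libica package installed, icainfo lists which algorithms
   # (AES, SHA-2, SHA-3, ECDSA, and so on) run in hardware
   icainfo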
IBM developed its Secure Service Container (SSC) technology, which is exclusive to
LinuxONE, to provide an easy-to-deploy secure hosting appliance for container-based
applications that run in hybrid cloud environments. SSC is a secure computing environment
for microservices-based applications that can be deployed without any application code
changes, which makes it an easily consumable solution for cloud-native development. It
provides several unmatched security benefits, such as automatic pervasive encryption of data
in-flight and at-rest, protection from privileged administrators, and tamper protection during
installation and start time to protect against malware.
Figure 1-1 Traditional architecture, Secure Service Container, and microservices architecture
Red Hat OpenShift is a container-based application platform that is built on open source
technologies and principles, such as Kubernetes, and enhances them so that enterprises can
also benefit from open source. Red Hat OpenShift Container Platform 4.2 was released for
IBM LinuxONE servers in February of 2020. Since then, organizations have been able to use
enterprise server capabilities, such as security features, scalability, and reliability, to run
cloud-native applications and accelerate their digital transformation. Application developers
and operations teams can use OpenShift as a powerful tool to easily manage their container
environment and see how it is organized through an intuitive web interface.
Figure 1-2 shows the IBM LinuxONE platform ready for each layer of cloud.
LinuxONE can provide the same agility and time to value as other cloud services, along with
unparalleled enterprise qualities of service. IBM LinuxONE allows those delivering cloud
services to rapidly deploy a trusted, scalable OpenStack-based Linux cloud environment that
can start small and scale up massively to support thousands of virtual servers or up to two
million containers on a single system.
Virtualization portfolio
Establishing cloud environments on IBM LinuxONE begins with virtualization technology.
Customers have a choice of deploying z/VM, the world’s first commercially available
hypervisor to provide virtualization technology, or the newer industry-standard KVM. Both
hypervisors allow you to bring new virtual servers online in a matter of minutes (or less) to
accommodate growth in users, although each technology is designed with a different
audience in mind.
The overall IBM virtualization portfolio includes the following applications for infrastructure
and virtualization management:
IBM LinuxONE Hardware: IBM LinuxONE III LT1 and IBM LinuxONE III LT2:
– Massively scalable.
– Characterized by excellent economics and efficiencies.
– Highly secure and available.
z/VM:
– Supports more virtual servers than any other platform in a single footprint.
– OpenStack support.
KVM for IBM LinuxONE:
– Provides a choice for clients who want open virtualization while taking advantage of the
robustness, scalability, and security of the LinuxONE platform.
– The standard interfaces that it provides allow for easy integration into an existing
infrastructure.
Linux for IBM LinuxONE:
Distributions are available from Red Hat (Red Hat Enterprise Linux), SUSE (SUSE Linux
Enterprise Server), and Canonical (Ubuntu).
IBM Wave for z/VM:
A graphical interface tool that simplifies the management and administration of a z/VM and
Linux environment.
IBM Dynamic Partition Manager (DPM):
Tool for LinuxONE configuration and setup to simplify and speed up deployment of Linux
servers by using only the Hardware Management Console (HMC).
For more information about how to install, configure, and use the IBM Cloud Infrastructure
Center, see the IBM Cloud Infrastructure Center documentation, which is available at IBM
Knowledge Center.
The IBM Cloud Infrastructure Center architecture on z/VM is shown in Figure 1-3.
Consider the following points regarding the IBM Cloud Infrastructure Center architecture on
KVM that is shown in Figure 1-4:
– Only one management node must be set up to manage the entire KVM cloud
infrastructure.
– One compute node is required for each KVM host that is to be managed.
– The management node can be on the same KVM instance as one of the compute
nodes, but it is highly recommended to separate the management node and compute
nodes into different LPARs.
Other industry OpenStack-based cloud management solutions can also run on LinuxONE,
including (but not limited to) VMware’s vRealize Automation product.
The minimum OpenShift architecture consists of five Linux guests (bootstrap, three control
nodes, and one worker node) that are deployed on top of IBM z/VM 7.1. Customers that use
OpenShift on LinuxONE can use the IBM Cloud Infrastructure Center to manage the
underlying cluster infrastructure.
The business case to support a migration to LinuxONE is focused on cost savings provided
by server consolidation to the LinuxONE platform and an overall simplification of the
distributed environment.
Techombank in Vietnam decided to use LinuxONE to power a bold new approach to banking,
delivering rapid, reliable performance for new customer-centric services and scaling
seamlessly to meet fast-growing demands. The enterprise moved 21 production applications
to IBM LinuxONE in just 90 days. It realized a 40% TCO reduction and gained a proven
architecture for very large and variable workloads, with transactions increasing 4x during
peak season.
The Met Office is the UK’s national weather service, providing weather forecasts for the
public, for government, and for businesses in a wide variety of sectors. It creates more than
3,000 tailored forecasts and briefings each day, as well as conducting weather- and
climate-related research. The Met Office uses post-processing systems to tailor its weather
forecasts for specific clients’ needs. The requirement for the applications to run 24 hours a
day, 365 days a year, and to deliver services day in and day out, is critical to its brand and its
reputation.
Running these systems on a distributed Linux infrastructure was becoming complex and
expensive. Therefore, Met Office decided to migrate suitable candidates from its distributed
Linux landscape onto IBM LinuxONE.
The cost savings arise because LinuxONE is treated by most software vendors as a
distributed system, and software is usually charged by the core. Because an IFL is classified
as a single core with high processing power, significant savings can be achieved by
consolidating multiple distributed servers onto a LinuxONE server. Figure 2-1 on page 19
shows an example company that has 45 virtual servers and uses only 14 licenses.
Figure 2-1 Example of consolidation on LinuxONE: 45 virtual servers (production,
development, and application servers) sharing physical processor and memory resources,
using a total of 14 licenses on LinuxONE (6 database licenses and 8 application licenses).
Note: For an accurate TCO study, contact your software vendor or IBM representative to
understand its policies and pricing regarding application consolidation on IBM LinuxONE.
In general, it can be presumed that the proof of concept, associated testing, and final
implementation are the most time-consuming parts of a migration.
Start with a fairly simple application that has a low service level agreement (SLA) and a staff
that has the associated skills.
For applications developed within the company, ensure that you have the source code
available. Regarding the operating system platform, even a workload from a different platform
can be migrated, but start with servers running Linux, because doing so substantially
increases the likelihood of a successful migration. Applications that require close proximity to
corporate data stored on IBM LinuxONE are also ideal candidates, as are applications that
have high I/O rates, because I/O workloads are offloaded from general-purpose processors
onto specialized I/O processors.
IBM LinuxONE III has a powerful processor with a clock speed of 5.2 GHz. Because IBM
LinuxONE is designed to concurrently run disparate workloads, remember that some
workloads that required dedicated physical processors designed to run at high sustained
CPU utilization rates might not be optimal candidates for migration to a virtualized Linux
environment. This is because workloads that require dedicated processors do not take
advantage of the virtualization and hardware sharing capabilities. An example of such an
application might include video rendering, which requires specialized video hardware.
Chapter 5, “Migration analysis” on page 59, provides an in-depth analysis of the process of
determining the most appropriate applications to migrate to an IBM LinuxONE environment.
The first step is to determine the expected consolidation ratio for a specific workload type.
This step allows you to answer the question “What is the theoretical maximum number of
servers that can be consolidated?”
Although this process might set limits on the upper boundaries for virtualization, the efficiency
of the target platform and platform hypervisor might reduce consolidation ratios. In practice,
service levels are often the determining factor.
IBM offers a tool to help Chief Information Officers (CIOs) determine the IBM LinuxONE
resources that are required to consolidate distributed workloads. It is a self-help web tool
named IBM Smarter Computing Workload Simulator that offers a fast and easy way to view
areas of potential savings and efficiency through the lens of IBM Smarter Computing systems
and technologies.
Additionally, IBM’s Competitive Project Office can support you with a more detailed analysis
of your expected consolidation ratio. Contact your IBM representative to start such a study.
Important: Other factors must be considered to get a complete TCO, including floor space,
energy savings, scalability, security, and outages. For a more accurate sizing study, contact
your IBM representative.
The stakeholders discuss the project plan and produce the principal goals of the migration
plan. Documents must be created that represent the strategy that will be used to accomplish
the migration goals.
The checklists that are shown in this chapter are created specifically for LinuxONE. You can
use this chapter along with Consolidation Planning Workbook Practical Migration from x86 to
IBM LinuxONE, REDP-5433, which provides more information about the planning
worksheets, including blank copies to guide you through the process and a sample project
plan.
It provides space where you can record whether the same or similar products and tools are
available on the target LinuxONE operating environment.
- Required Users
- Required Groups
- Observations
- Backup Solutions
- Operating System
- Database
- Hypervisor
- Other
- Application-specific Dependencies
- Cron jobs
Each product or tool that is listed in the product worksheet must be analyzed. All the
parameters, dependencies, and optimization options must be taken into account in the source
operating environment. The planning team must then assess whether the same kind of
features or build options are available in the target operating environment.
If the same feature is not available with the same tools or product in the target environment,
the team can assess other options:
- Obtain a similar feature by linking other products or tools in the target operating
environment.
- Note the parameters that are available in the same tool in the target operating environment
that can be combined to give the same characteristics as in the source environment.
When completing the application features worksheet, verify whether changing parameters or
options in the target operating environment has any side effects on the application or other
tools that are used for application implementation.
If all the checklists are properly analyzed and applied, the tools and products, and their
implementation differences, are accounted for in the actual migration. This in turn reduces
risk and helps the migration run smoothly.
Make the descriptions as detailed as possible by providing the physical location, server host
name, IP address, network information, software product used, focal point, and any other
information that you believe important to register about the services. The target environment
must have the same infrastructure available to it as is available in the source environment.
SERVER NAME:
Network connection (a)
Disk resources (b):

OS filesystems (mount point : size in GB : type):
/home : 3 : Ext4 (VG OS logical volume)
/opt : 5 : Ext4 (VG OS logical volume)
/tmp : 5 : Ext4 (VG OS logical volume)
/var : 1 : Ext4 (VG OS logical volume)

DATA filesystems (source, then target):
/DB : 100 : Ext3, migrating to /DB : 100 : XFS (VG DB logical volume)
/WAS : 50 : Ext3, migrating to /WAS : 50 : XFS (VG DATA logical volume)

CUSTOM filesystems:
/MGM : 10 : BTRFS (VG DATA logical volume)

Volume groups:
Volume Group OS : 20 GB
Volume Group DB : 150 GB
Volume Group DATA : 80 GB
a. The following network connections are available for IBM LinuxONE:
- Ethernet/QETH
- Open vSwitch
- IBM HiperSockets
- Direct OSA-Express connection
b. Logical Volume Manager (LVM) provides storage management flexibility and reduced downtime with online
resizing of logical volumes.
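The following commands are one way to collect the disk and network details for this
worksheet on a source Linux server. This is a sketch that uses standard utilities, and the
exact options can vary by distribution:

   # Mount points, sizes, and filesystem types (disk resource section)
   df -hT

   # Volume groups and logical volumes (volume group section)
   vgs
   lvs -o lv_name,vg_name,lv_size

   # Network interfaces and addresses (network connection section)
   ip -brief addr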
This chapter provides helpful information about virtualization, particularly to compare and
contrast the virtualization concepts of IBM LinuxONE with those commonly used by x86
distributed systems. The two have many concepts in common, but other concepts are
different. This brief comparison provides terminology, vocabulary, and diagrams that are
helpful when migrating workloads to LinuxONE.
Virtualization turns physical hardware into logical resources that can be shared, shifted, and
reused. One of the most highly prized features of virtualization is dynamically sharing or
dedicating more virtual resources, such as CPU, RAM, and disk, to a virtual guest while the
virtual guest is running. This process greatly eases the system administration tasks of scaling
the supply of services to meet demand.
Virtualization allows a single physical server to host numerous logical servers. The servers
share the physical resources, which allows all the guest servers to accomplish more than the
single physical server can on its own, while maximizing the effective use of the physical
resources. In such a virtual environment, the physical server is commonly called the “host”
system and the logical servers are known as “guests.” Although software solutions in the
industry use variations of these terms, this publication uses the terms “host” and “guest” as
defined above.
Systems administrators rely on virtualization to ease and facilitate the complex work of
managing increasingly complex environments. IT managers look to virtualization to address
the ever increasing demand for more computing power from customers while accommodating
shrinking IT budgets.
The growing number of physical servers also increases the amount of electric power and air
conditioning that is consumed in the data center. Virtualization helps to reduce the amount of
electricity that is used, which reduces costs. Aspirations of a “green” data center can similarly
be met in part by using virtualization.
Virtualization has become popular in recent years, with research suggesting that more than
half of all workloads in the data center are virtualized. Despite its more recent hype,
virtualization has existed in advanced computing systems for quite some time. The
conception of virtualization began in the late 1960s when IBM introduced Control
Program 67 (CP-67). This innovation quickly grew to become a defining feature of IBM
hardware, including all LinuxONE systems.
Figure 3-1 shows a physical server (host name “x86host1”) with three separate virtual Linux
guest operating systems contained on this physical host. The physical server has a fixed
amount of CPU, RAM, and physical access to disk and network resources. The virtual guests
are allocated CPU, RAM, and disk resources as a subset of what is available on the physical
server, and the network resources are all equally shared by the guests and physical host.
In a typical x86 deployment of virtual services, the physical servers are generally deployed in
pairs or trios, often called clusters of servers. The clusters provide for some standard level of
high availability such that if one physical server fails, another would be able to take over the
running workload with negligible interruption.
A LinuxONE system is divided into isolated logical partitions (LPARs), with each partition
running as an independent host system with its own operating environment. This
configuration means that it has its own CPUs, RAM, devices (such as disks and network
connections), and other resources. LinuxONE Emperor can be configured to run as many as
85 partitions, and the LinuxONE Rockhopper can be configured with up to 40 partitions.
Although LinuxONE can run dozens of partitions, and thus dozens of Linux instances, greater
flexibility to virtualize further can be achieved by running an extra hypervisor within each
partition. For more information about these technologies, see 3.3.3, “KVM hypervisor” on
page 30, and 3.3.4, “z/VM hypervisor” on page 31.
A system administrator can easily create, modify, and manage the LinuxONE without needing
to learn PR/SM and its associated components or commands. The DPM interface allows for
dynamic reconfiguration of CPU, memory, and I/O resources. Wizards prompt for specific
details about a system as it is being created, and automatically enter those and other values
where they are required. Advanced menus offer greater control by experienced system
administrators.
A system administrator can start the installation of a KVM hypervisor with DPM to simplify the
deployment of Linux guests within a newly created partition.
DPM provides resource monitoring on a per-partition basis, which allows for nearly
real-time usage data and historical trends over time.
Running Linux, extended with the KVM module, within an IBM LinuxONE partition allows
multiple instances of the QEMU application, which provides emulation and virtualization to a
guest operating system. Each QEMU instance runs as a separate process of the host Linux
system, which separates guest instances and protects each set of virtual resources from
each other and from the host system. QEMU communicates with the KVM interface to
establish a guest Linux operating system as though it were running on its own private
hardware.
Using the earlier diagram of a typical x86 virtualization system as a model (see Figure 3-1 on
page 29), a similar virtualization system as it relates to LinuxONE and KVM is shown in
Figure 3-2.
The virtualization capabilities of z/VM provide added isolation, resource sharing, and
resource management features that many systems administrators require.
The work that IBM did in collaboration with these major Linux distributions provided code
within the kernel and the core utilities tied to the kernel to facilitate the operation of the Linux
kernel with LinuxONE hardware.
Note: All supported Linux distributions of RHEL, SUSE, and Ubuntu can run on LinuxONE.
Remember that the Linux kernel alone does not make an operating system. So that a Linux
distribution can run on LinuxONE, the distribution must also include binutils, glibc, and
other core components that are built for LinuxONE.
Running a Linux guest on LinuxONE makes deployment of services faster. You are often able
to spin up a running Linux server in a matter of minutes. Linux servers can be built, cloned,
and deployed within the LinuxONE infrastructure without the pain of requisitioning,
purchasing, mounting, and wiring a new physical server.
Development teams who need a new server for a proof of concept can set up and tear down a
test environment over and over again with no impact to running production systems.
The guest must have access to identical resources on both the source and destination
systems to run satisfactorily. An attempt to perform a migration with a mismatch in resources
will fail because the guest might not behave correctly upon being started on the destination
system.
In addition to moving between real servers, the KVM guest migration permits the state of a
guest operating system to be saved to a file on the host. This process allows the guest
operating system to be restarted later.
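Both operations map to standard libvirt commands. In the following sketch, the guest name
guest1 and the destination host desthost are hypothetical, and live migration assumes that
both hosts can access the same guest storage:

   # Live-migrate a running guest to another KVM host over SSH
   virsh migrate --live guest1 qemu+ssh://desthost/system

   # Save the guest state to a file on the host, then resume it later
   virsh managedsave guest1
   virsh start guest1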
Although running the SSI members on the same LinuxONE system is feasible, ideally the
cluster members are contained on separate systems for optimum resiliency when an outage
occurs. The members of the SSI cluster are managed together.
Coupled with SSI, Live Guest Relocation (LGR) facilitates the relocation of a Linux guest from
one member of the SSI cluster to another. This relocation happens nearly instantaneously,
without the Linux guest having any knowledge of the relocation. Network processes and
connections, disk operations, and user interactions on the Linux guest are unaware that the
underlying infrastructure has moved to a different “physical” environment.
Figure 3-5 Simple representation of SSI cluster before live guest relocation
The relocation of Linux guests from one SSI member to another makes it possible to perform
maintenance on the individual SSI cluster members without disrupting the services running
on the Linux guests. With all Linux guests relocated away from an SSI member, that SSI
member can now be updated and rebooted with no impact to any running guests. When the
maintenance on this SSI member is completed, Linux guests can be relocated back to their
original host member. Alternatively, all Linux guests can be relocated to this SSI member
while similar maintenance is performed on other SSI members in the cluster.
An additional benefit of SSI and LGR is the ability to relocate workloads to accommodate a
more balanced use of system resources. When an SSI cluster contains a configuration of
multiple Linux guests overusing the network, a portion of the guests can be relocated to a
different member of the SSI cluster where network utilization is lower.
Figure 3-6 shows that a Linux guest was relocated from lnx1host3 to lnx1host2 with no
interruption in the services that are running from the Linux guest. Now that no guests are
running on lnx1host3, the host can be rebooted. After rebooting lnx1host3, Linux guests can
be relocated back onto lnx1host3.
More to the point, knowing that z/VM, SSI, and LGR can be used in this way makes migrating
workloads to LinuxONE all the more compelling.
For more information about SSI and LGR, see the following publications:
An Introduction to z/VM Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8006
The Virtualization Cookbook for IBM z Systems Volume 1: IBM z/VM 6.3, SG24-8147
Using z/VM v 6.2 Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8039
Chapter 29: “Preparing for Guest Relocations in a z/VM SSI Cluster”, of z/VM Version 7
Release 1 CP Planning and Administration, SC24-6271
The KVM module manages the sharing of all the virtual resources (processors, memory, and
so on) onto the real resources, among the different VMs that are running in parallel.
3.6.2 QEMU
The QEMU application connects to the KVM interface of the host kernel and provides both
emulation and virtualization of resources that are presented to a guest operating system that
is running within the QEMU process.
The API can be used to handle various tasks, such as starting or stopping a guest, adding or
removing resources, or migrating a running guest to another system.
Libvirt includes a set of command-line interfaces (the virsh command) for managing both
KVM itself and the VMs that run within the LinuxONE partition. Libvirt also provides the
interface for resource management tools as described in “Resource management” on
page 37.
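For example, the following virsh commands cover these tasks. The guest name guest1 is
hypothetical, and live resource changes are bounded by the maximums that are defined in
the guest configuration:

   virsh list --all                  # show defined guests and their states
   virsh start guest1                # boot a guest
   virsh setvcpus guest1 4 --live    # change the vCPU count of a running guest
   virsh setmem guest1 8G --live     # adjust guest memory, up to its maximum
   virsh shutdown guest1             # request a clean shutdown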
IBM Cloud Infrastructure Center and Cockpit (with optional plug-in cockpit-machines) are
tools for managing VMs by using a web browser.
Virtual Machine Manager (virt-manager) is a graphical user interface for Linux that allows a
user to manage multiple KVM hosts from a single location. It provides KVM lifecycle
management, and some performance metrics of recent virtual resource usage of each VM.
When a Linux guest logs on to a z/VM session, it starts its own CP session. For production
systems, this login is usually done automatically when the z/VM system is initially loaded or
booted. An entry exists in the z/VM directory for each VM that can be started.
Note: If an administrator logs off the Linux VM console by using the conventional LOGOFF
CP command, the VM powers off and terminates all running work. The administrator must
use the DISCONNECT command (not the LOGOFF command) to ensure that this problem does
not occur.
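For example, from the guest’s 3270 console session, CP commands are entered with the
#CP prefix:

   #CP DISCONNECT     (the console detaches and the Linux guest keeps running)
   #CP LOGOFF         (the virtual machine powers off; avoid on running guests)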
For more information about z/VM, see Introduction to the New Mainframe: z/VM Basics,
SG24-7316.
CP and CMS give the system administrator a more direct route to manipulating the available
resources for the benefit of the Linux guest.
To reduce the complexity of z/VM management, IBM Wave for z/VM is a perfect solution to
help system administrators in their daily tasks. The following is a list of features that can help
with maintenance tasks:
- Display and manage virtual servers and resources, all from the convenience of a single
graphical interface
- Provision VMs, and install a guest operating system
- Provision virtual resources, such as processors, memory, network, and storage
- Capture and clone virtual servers across partitions
- Create and configure virtual switches (VSWITCHes) and guest LANs
- Relocate VMs to other partitions
- Display, monitor, and manage z/VM hypervisor resources, such as paging
More information about IBM Wave can be found in IBM Wave for z/VM Installation,
Implementation, and Exploitation, SG24-8192.
For more information, see the following IBM Cloud Infrastructure Center resources:
Data sheet
Web page
In the current environment of distributed computing, memory, CPU, and disk resources are
underutilized during most of the time that a server is running. However, it is necessary to
have the capacity available when the server reaches peak load.
Although resources can be rigidly committed to a specific workload, it is the flexibility of the
virtual resources that is appealing. Overcommit is powerful for virtualized guests because
typically not every guest needs all of the allocated resources at the same time.
The key to efficient memory management is to be aware of the total amount of virtual memory
that is likely to be active at any time. Also, be aware of the amount of real memory that is
allocated to the LinuxONE partition.
Both KVM and z/VM allow you to overcommit memory, but keep the ratio of the total amount
of virtual memory that is likely to be active to the total amount of real memory at around 2:1.
For test or development workloads, the ratio should be no more than 3:1.
The keys to determining the appropriate virtual memory size are to understand the working
set for each VM, and to ensure that the Linux instances do not have any unneeded processes
installed. Another recommendation is to use VDisks for swap, as described in “Swap device
consideration” on page 42.
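One way to understand a guest’s working set is to observe memory and swap behavior
inside the guest during normal and peak load. A minimal sketch that uses standard Linux
tools:

   # Snapshot of memory in use, cache, and swap inside the guest
   free -m

   # Sample memory and swap activity every 5 seconds during peak load;
   # persistently non-zero si/so (swap-in/out) columns suggest that the
   # guest needs more memory rather than a larger swap device
   vmstat 5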
As described next, these features are not available in a distributed x86 environment. Only
LinuxONE can provide these versatile features, which dramatically reduces the amount of
physical memory that is required to maintain a similar set of workloads.
CMM
CMM is used to reduce double paging that can happen between a Linux guest and z/VM.
CMM requires the IBM Virtual Machine Resource Manager (VMRM) running on z/VM to
collect performance data and notify the Linux guest about constraints when they occur. On
Linux servers, the cmm kernel extension is required, and it is loaded with the modprobe
command.
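A minimal sketch of the Linux guest side, assuming a z/VM guest with VMRM configured on
the hypervisor:

   # Load the cooperative memory management module
   modprobe cmm

   # Pages currently surrendered to the hypervisor are reported here
   cat /proc/sys/vm/cmm_pages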
CMMA
CMMA enables a Linux guest to share the page status of all 4 KB pages of guest memory
with the KVM or z/VM hypervisor. Linux does this sharing by marking the status of each page,
which allows the hypervisor to preferentially steal unused and volatile pages and thus reduce
paging.
NSS
NSS is a z/VM feature that allows virtual guests to share a read-only copy of a single
operating system such as CMS or Linux. The benefit of this feature is that only one copy of
the operating system is in storage accessible to all VMs. This feature decreases storage
requirements and simplifies maintenance.
Figure 3-7 DCSS and NSS shared by multiple Linux guests on z/VM
For more information about setting up a Discontiguous Saved Segment and its use with the
Execute-In-Place (XIP) file system, see Using Discontiguous Shared Segments and XIP2
Filesystems With Oracle Database 10g on Linux for IBM System z, SG24-7285.
Note: When defining memory requirements for virtual Linux guests, remember that the
Linux kernel uses all the extra available memory allocated to it as a file system cache.
Although this feature is useful on a stand-alone system (where that memory would
otherwise go unused), in a virtualized environment this causes the memory resource to be
consumed in the partition. Therefore, it is important to assign only the memory needed for
the running applications when they are at peak load.
Linux swap can be thought of as an overflow when an application cannot get enough memory
resource. Thus, swapping activity indicates that the application must be analyzed to
understand whether more memory is needed. Adding memory to a VM might not be the
solution to the problem.
Commit a specific amount of virtual memory to each Linux guest to accommodate no more
than its intended workload, and fine-tune this amount of memory precisely so that Linux
swapping does not normally occur. As a general rule, z/VM paging performs better than Linux
swapping.
In the absence of the perfect memory configuration, and when workloads demand significant
swapping, the ideal is to provide a VDisk device for this purpose. VDisks are virtual disks that
are allocated in z/VM memory. They become a fast swap device for Linux. Swapping to a
VDisk in memory is far more efficient than swapping to regular disk, and it is generally less
expensive, too, considering all factors. The Linux administrator must take care during the
initial installation of the Linux guest to ensure that the VDisk is formatted as a swap device.
But more than that, the VDisk must also be formatted each time that the Linux guest is
booted.
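The following sketch shows that re-initialization, assuming the VDisk is visible to the guest as
/dev/dasdb and that the FBA DASD driver presents it with a single partition, /dev/dasdb1. A
boot script or systemd unit would run these commands at every startup:

   # VDisk contents live in z/VM memory and vanish at guest restart,
   # so the swap signature must be rewritten at every boot
   mkswap /dev/dasdb1
   swapon -p 10 /dev/dasdb1    # higher priority than any disk-based swap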
For more information about optimizing memory on z/VM and Linux, see Linux on IBM System
z: Performance Measurement and Tuning, SG24-6926.
With LinuxONE, SAN device support is expanded to include Small Computer System
Interface (SCSI), connected to LinuxONE through Fibre Channel. In the x86 distributed world,
the term Fibre Channel is often abbreviated as FC. To avoid confusion with FC in IBM
terminology, referring to FICON channel devices, LinuxONE uses the phrase Fibre Channel
Protocol (FCP) to refer to the connection to a SAN.
Typically, a SAN with Fibre Channel consists of independent and redundant fabrics, which
provide connectivity between processor and peripheral devices. The Fibre Channel adapters
each have their own unique worldwide name (WWN), which is put into zones within the fabric.
Modern Fibre Channel adapters can be virtualized by using N_Port ID Virtualization (NPIV).
They provide several different virtual devices that all have their unique port name (WWPN)
and can be put into separate zones, despite sharing physical resources on a server.
In theory, just one zone with all adapters and storage adapters would be sufficient. For actual
production deployments, create a separate zone for each of the NPIV devices. The reason is
that during logon and logoff of a single NPIV device, the whole zone is rediscovered. Although
this process does not cause errors, it can cause short hangs, depending on the size of the
zone. When a separate zone is created for each NPIV device, only the local zone is
discovered, which has no effect on other zones.
KVM provides SCSI support by using the Linux zfcp and scsi drivers, and can pass a full
SCSI LUN to a Linux guest through the virtio infrastructure. z/VM provides SCSI support by
using its own drivers, and can pass either a full SCSI LUN, or a small piece of one as a
minidisk, to a Linux guest. However, it is more common for z/VM to pass the FCP devices to a
Linux guest and allow Linux to perform the SCSI configuration.
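On the Linux guest side, a typical sequence looks like the following sketch. The device
numbers are hypothetical, and the chzdev and lszfcp commands come from the s390-tools
package:

   # Enable an FCP subchannel so that the zfcp driver brings it online
   chzdev -e zfcp-host 0.0.1940

   # Show zfcp adapters, remote ports, and attached LUNs
   lszfcp -D

   # With multipathing configured, verify the resulting path groups
   multipath -ll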
For a more detailed description of disk storage, see 5.2, “Storage analysis” on page 71.
OSA devices can be virtualized through virtual switch devices to many Linux guests. This
capability is available to KVM guests by using the Open vSwitch interface, and to z/VM
guests by using a VSWITCH controller. Each Linux guest connects, by using a virtual device
that is controlled by the qeth module, to a virtual switch system in a LinuxONE partition.
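On the Linux side, a qeth interface is grouped from three device numbers and brought
online, as in this sketch with hypothetical device numbers (lsqeth and znetconf are part of
s390-tools):

   # List existing qeth (OSA and HiperSockets) interfaces
   lsqeth

   # Group and activate a new OSA device triplet in layer 2 mode
   znetconf -a 0.0.0600 -o layer2=1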
HiperSockets provide high-speed interconnectivity among guests that run on IBM LinuxONE,
without any special physical device configuration or cabling. The guests communicate with
one another internally by using the in-memory capabilities of the PR/SM hypervisor. However,
HiperSockets are not intended to be used for sophisticated networking and should not be
used for external traffic.
OSA-Express and HiperSockets use the queued direct I/O (QDIO) mechanism to transfer
data. This mechanism improves the response time by using system memory queues to
manage the data queue and transfer between a hypervisor and the network device. Various
examples are described in 5.1, “Network analysis” on page 60.
For more information about network in Linux, see Advanced Networking Concepts Applied
Using Linux on IBM System z, SG24-7995.
Part 2. Migration
After you decide to migrate from the x86 platform to LinuxONE, this part of the book guides
you with an overview of the migration process and assists you in migration planning.
The following chapters describe the key components of a migration analysis and walk you
through an example hands-on migration. Planning checklists and worksheets are provided in
this part to assist you in your own planning and hands-on migration.
This chapter provides you with information about the approaches that are involved in planning
your migration and defines various types of stakeholders along with their roles and
responsibilities. Not every organization uses the same titles for stakeholders, but the roles in your organization should map to the functions that are described in this book.
Additionally, this chapter describes the process for a migration project from identifying the
stakeholders, assembling them, and identifying success criteria through to verifying both the
migration itself and its success.
Vendors
The third-party vendors have many resources that you can use, and they are often ready
to help if you make your needs known. They can respond quickly and are often the most
cost-effective source of information and solutions.
For independent software vendor (ISV) applications that you are targeting for migration,
you need to determine whether the vendors provide compatible versions that support the
distribution of Linux that you plan to use. Many ISV applications have other third-party
dependencies. Vendors should be able to help you map out all ISV dependencies,
including middleware. Most leading middleware products are available on LinuxONE, and
there are often open source alternatives.
Contractors
Specialists can be called on to assist with transient needs. They can provide skills that
your staff does not yet have, or skills that will not be needed after the migration project is
completed. Contractors can be used to enhance the skills of your staff as they perform
tasks on the migration project. Make sure that skills transfer takes place for persistent,
recurring tasks.
The security administrators are the team responsible for data protection, including the
authentication and authorization of users who access company applications. The target
application must adhere to existing security policies or demonstrate heightened security
methods and standards. For more details about LinuxONE security, see 5.6, “Security
analysis” on page 98.
Identify the following stakeholders, as defined in 4.1, “Stakeholder definitions” on page 48:
- Business stakeholders define the business and success criteria.
- Operational stakeholders provide information about the application requirements, database requirements, available network bandwidth, CPU load, and allowable downtime.
- Security and compliance teams define compliance requirements for the entire migration effort.
To make sure that all interests are taken into account, request a meeting of the key people
who requested the migration and who are affected by it. Subsets of stakeholders with related
tasks and responsibilities should also meet to enhance communications and encourage
teamwork.
A communications plan, coupled with proper training on the new system, should minimize the number of users who reject or oppose the project. It encourages users to start out with acceptance instead of dissatisfaction as the initial response, leading to a quick transition into exploration and productive use.
These issues are even more important regarding the IT support team. A strategic decision to
switch an operating system or platform can inadvertently create an impression of disapproval
of the work the team has done so far. This perception might cause staff to think that their
current skills are being devalued.
You should be able to articulate the objectives for your Linux migration and relate them to your
key business drivers. Whether you are trying to gain efficiencies by reducing costs, increasing
your flexibility, improving your ability to support and roll out new application workloads, or
some other key business drivers, be sure to set up objectives that line up with these goals.
Even the smallest of migrations should be able to do this process, and it will help guide your
planning.
Defining metrics (increased performance, more uptime, open standards, enterprise qualities)
early in the project helps the team stay focused and reduces opposition. Be sure that you
have a means of tracking the metrics. Getting stakeholder agreement on your metrics early in
the project helps ensure the support of everyone from executives to users.
Often, the migration to Linux is accompanied by other objectives. For example, some
customers upgrade their database at the same time to get the latest features and
performance enhancements, and to obtain support that works well with the latest distributions
of Linux. As with any project, the scope must be well defined to prevent project overrun.
However, it is also important that you have a means to manage additions to the plan as
business needs dictate.
Because cost is often a key motivator for migrating to Linux, give careful consideration to
identifying where cost reduction is targeted. Identify metrics for defining return on investment
before beginning migration activities, and identify metrics for other success criteria.
Figure: The migration approach, consisting of the following elements: Identify the stakeholders, Pre-assessment, Define success criteria, Pilot proof of concept, Decision to migrate, Actual migration, Verification testing, Check success, and Finalize the new environment stack
Identifying the stakeholders is described in 4.2, “Identify the stakeholders” on page 51 and
4.3, “Assembling the stakeholders” on page 52. This section describes each of the remaining
elements in this approach.
4.4.1 Pre-assessment
During the pre-assessment phase, a high-level analysis and initial feasibility study of the
application architecture, source code dependencies, database compatibility, and build
environment is performed. This task defines an overall scope for the migration to the target
operating system. The applications that run on the current servers are assessed to determine
whether they are available and certified to run on LinuxONE. In addition, an evaluation of the
risks that are related to migration is performed. This process helps to identify major risk areas
at the earliest stage.
Additionally, perform a careful analysis of present and anticipated business needs and weigh
the results against the pros and cons inherent in each option of migration. The outcome of
this phase is a recommended migration approach, and a high-level risk assessment and
analysis report that identifies potential issues that can occur during the migration.
Regardless of how the project success is defined, all stakeholders must understand and
agree on the criteria before the porting effort starts. Any changes to the criteria during the
porting cycle must be communicated to all stakeholders and approved before they replace the
existing criteria.
During this phase, most of the technical incompatibilities and differences in the environmental
options are identified, and are usually fixed.
Custom-built applications
If custom-built applications are written in one or more programming languages, several tools
might need to be validated on the target environment. These tools can include compilers, the
source code management system, the build environment, and third-party add-on tools.
Additionally, an in-depth analysis should be carried out on the various build options specified
to ensure that the tools on the LinuxONE platform provide the expected functionality after the
migration (for example, static linking, library compatibilities, and other techniques). The effort
that is involved can vary greatly depending on how portable the application code is.
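A quick way to confirm that a rebuilt binary actually targets LinuxONE is to inspect it with the file command. The binary name in this sketch is hypothetical, and the output lines are abbreviated but typical of each platform.

   $ file myapp          # on the x86 source server
   myapp: ELF 64-bit LSB executable, x86-64, ...
   $ file myapp          # after recompiling on LinuxONE
   myapp: ELF 64-bit MSB executable, IBM S/390, ...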
ISV applications
If you are running ISV applications on x86 that you are targeting for migration, you need to
determine whether the vendor provides compatible versions that support the distribution and
version of the target LinuxONE. Many ISV applications have other third-party dependencies.
Be sure to map out all ISV dependencies, including middleware. Most leading middleware
and open source products are available on LinuxONE.
Note: Open source alternatives are available for many applications and services on LinuxONE.
The POC phase should involve all of the same tasks and activities of the full migration. The
main objectives of the POC are to focus on the identified areas of risk, empirically test the
recommended approaches, and prove that the full migration can be completed successfully.
In this way, the major potential migration risks that are identified during the pre-assessment
can be addressed in a controlled environment, and the optimum solution can be selected and
proven. This service targets the areas of issue and risk, proves that the optimal resolution methods have been selected, and provides a small-scale preview of the whole migration.
Note: POC projects might require extra funding and can lengthen the project schedule, but
will likely contribute to the project’s success.
During this phase, analyze and discuss all key requirements with the stakeholders including
timing, resource needs, and business commitments such as service level agreements
(SLAs). Also, discuss any related aspects of the migration, such as new workloads,
infrastructure, and consolidation. The decision to implement the migration must be acceptable
to all stakeholders involved in such activity, especially the business owner.
Migration activities rely heavily on having ready access to the personnel responsible for the
development, deployment, and production support of the applications and infrastructure in
question. Anticipating change and ensuring the early involvement of affected teams are
efficient ways to handle change issues. For example, support staff for hardware might be
comfortable with UNIX related hardware support and know where to go for help. However,
practitioners who are expert in the previous environment might be less open to change if they
feel threatened by new ways of doing things where they do not have expertise.
The team follows the planned approach and methodology during their migration activities. If
needed, modifications are made to the application source code and build environment. The
new application binary files are generated and checked for compliance with the target version
of the operating system.
If any performance issues are encountered during this stage, the target environment can be
tuned for maximum performance.
If the success criteria are not achieved, the migration implementation must be reviewed. After
the review is complete, the testing phase must be redone to ensure that the application being
migrated meets the acceptance criteria and is ready to go into production.
VLAN and VLAN tagging are supported by both OVS and MacVTap devices.
VSWITCHes do not need a connection to an OSA card to operate. They can also provide
purely virtual networks. This feature also simplifies the setup of private interconnects between
guest systems. When creating private interconnects in an SSI with live guest relocation (LGR)
enabled, use dedicated VLANs with external interfaces. This configuration is necessary to
accomplish the private connection between guests that run on different nodes in the SSI.
The VSWITCH infrastructure provides two basic configuration options: one configures user-based access, and the other configures port-based access. The two options are functionally equivalent; only the configuration differs.
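As an illustration, the following CP commands sketch both styles for a hypothetical switch and guest; verify the exact syntax in the z/VM CP Planning and Administration documentation before use.

   DEFINE VSWITCH VSWITCH1 RDEV 1900 ETHERNET          /* user-based (default) */
   SET VSWITCH VSWITCH1 GRANT LNXGST1

   DEFINE VSWITCH VSWITCH2 RDEV 1903 ETHERNET PORTBASED
   SET VSWITCH VSWITCH2 PORTNUMBER 10 USERID LNXGST1   /* port-based */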
You can read more about VSWITCH benefits in Set up Linux on IBM System z for Production, SG24-8137, and find technical information in Advanced Networking Concepts Applied Using Linux on IBM System z, SG24-7995.
RoCE Express
The 25GbE and 10GbE RoCE Express2.1 features use Remote Direct Memory Access
(RDMA) over Converged Ethernet (RoCE) to provide fast memory-to-memory
communications between two LinuxONE servers.
These features are designed to help reduce consumption of CPU resources for applications
that use the TCP/IP stack (such as IBM WebSphere® that accesses an IBM Db2® database).
They can also help reduce network latency with memory-to-memory transfers by using
Shared Memory Communications over RDMA (SMC-R).
With SMC-R, you can transfer huge amounts of data quickly and at low latency. SMC-R is
transparent to the application and requires no code changes, which enables rapid time to
value.
Shared Memory Communications - Direct Memory Access (SMC-D) requires no extra physical resources (such as RoCE Express features, PCIe bandwidth, ports, I/O slots, network resources, or Ethernet switches). Instead, SMC-D uses system-to-system communication through HiperSockets or an OSA-Express feature for establishing the initial connection.
For more information about RoCE Express and Internal Shared Memory, see IBM z15
Technical Introduction, SG24-8850.
A VSWITCH instance operates at Layer 2 or Layer 3 of the OSI Reference Model. It is virtually attached to the same network segment where the OSA card is physically connected.
This section covers some common scenarios and how they look on LinuxONE.
In a Layer 2 VSWITCH configuration, all Linux guests have their own Media Access Control
(MAC) address. In a Layer 3 VSWITCH configuration, the Linux guests respond with the OSA
card’s MAC address to requests from outside the LinuxONE LAN segment.
Figure 5-5 Single virtualized network with multiple LPARs and failover
You can set up the same scenario on LinuxONE. If you already have a physical switch, a third-party firewall solution, and a router in your environment, you can reuse them as part of your network planning on LinuxONE. Otherwise, you can use the network facilities that are available on LinuxONE and its hypervisors.
Figure 5-7 Multiple virtualized network scenario: DMZ and secure network
Figure 5-8 Multiple virtualized network scenario with failover: DMZ and secure network
Note: Although the use of HiperSockets for this scenario is possible, it might not be the
best solution. If one of the LPARs is CPU-constrained, using HiperSockets could cause a
delay of network traffic. For more information about HiperSockets, see Set up Linux on IBM
System z for Production, SG24-8137.
At this point, the target Linux server must be assigned a host name that is different from the
source server name:
1. Migrate the applications (for more information, see 5.3, “Application analysis” on page 80)
and files from the source server to the target server.
2. Shut down the source server.
3. Change the Linux server’s host name.
4. Change the DNS registered name to the new Linux IP address.
If the application running is an IP-based application, you can change the IP address of the
target Linux server to the source IP address.
These migration models are covered in more detail in the following subsections.
In both types of data migration, potential issues must be carefully considered. Failing to do so might lead to an extended outage, unexpected downtime, data corruption, missing data, or data loss.
Always consider the content of the data that is migrated before selecting online migrations as
a solution.
To avoid such issues, online data migration must always be run during off-hours, and you
should always take a data backup just before the actual data migration activity begins.
That exported or dumped file must be transferred across the network, and the database
import procedure must be run at the target server. For more information, see 5.4, “Database
analysis” on page 88.
When migrating Linux systems from x86 to LinuxONE, the SAN Volume Controller allows you
to non-disruptively migrate data to LinuxONE. For more information about the IBM System
Storage SAN Volume Controller, see this web page.
For source applications that are on servers where storage is local or the external storage is
not compatible with Fibre Channel data storage, all data must be copied by using the network
file system from the source server to the target server (LinuxONE):
1. Create a server file system with mount points for all data files.
2. Create a temporary file system to be used in the file transfer process on the target server.
3. Configure the target server as an NFS file server, a Samba file server, or an FTPS file server to upload the files from the source server.
Consider the following points:
– If there is enough space at the source server to compact all of the data, consider using
data compression features such as zip, or tar with gzip and bzip formats. Both of
these formats are compatible with LinuxONE. The data can be transferred by using an
FTP server that is configured on the target server.
– If not enough space is available at the source server to compact the data, mount the
NFS file system or map the Samba file system at the source machine, and copy the
files across the network.
4. Verify the correct files permissions at the target directory. Adjust file permissions after the
transfers for production work.
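For example, when no shared file system is available, the copy in the preceding steps can be streamed over SSH in one pass. The paths and host name here are placeholders for this sketch.

   # Run on the source x86 server; lnx1.example.com is the LinuxONE guest
   tar czf - /srv/appdata | ssh lnx1.example.com "tar xzf - -C /"
   # Afterward, verify ownership and permissions on the target
   ssh lnx1.example.com "ls -lR /srv/appdata"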
For file storage in an external storage system that is compatible with Fibre Channel, you can migrate to a LinuxONE server that is configured with FCP adapters (managed by the zfcp driver) and connect directly to the volumes that are to be migrated to LinuxONE servers.
A best practice for LinuxONE is that only one version of a Linux OS distribution should be installed from scratch. Therefore, design the basic Linux file system so that it suits the largest possible number of server types, and then clone all the other Linux guests in the environment from this source (known as the golden image). On IBM Wave for z/VM, this golden image is called a prototype. The file system that stores the application data is created after the cloning process, depending on the needs of the application that is on the server.
For more information about creating a golden image, see The Virtualization Cookbook for IBM
z Systems Volume 1: IBM z/VM 6.3, SG24-8147.
The LVM is useful for Linux file systems because it allows you to dynamically manage file
system size and has tools to help back up and restore failing partitions.
Figure 5-11 shows five minidisk (MDisk) devices that are used by a Linux guest to create a
unique VG. It is then further organized or allocated into two LVs.
Figure 5-11 Volume group /dev/VGO built from five minidisks and divided into the logical volumes /dev/VGO/LVO and /dev/VGO/LV1
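The following commands sketch that layout, assuming that the five minidisks are visible to the guest as /dev/dasdb1 through /dev/dasdf1; the device names and sizes are illustrative only.

   pvcreate /dev/dasd[b-f]1              # initialize the minidisks as physical volumes
   vgcreate VGO /dev/dasd[b-f]1          # create the volume group from all five
   lvcreate -L 20G -n LVO VGO            # first logical volume
   lvcreate -l 100%FREE -n LV1 VGO       # second logical volume takes the remaining space
   mkfs.ext4 /dev/VGO/LVO                # create a file system on the first volume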
A small performance price must be paid when using LVM. However, the flexibility of LVM often outweighs the cost of the performance loss.
For more information about the LVM setup during the installation, see The Virtualization
Cookbook for IBM z Systems Volume 1: IBM z/VM 6.3, SG24-8147.
Following this best practice, the golden image should include the following file systems:
- root (/)
- /boot
- /var
- /tmp
- /opt
- /home
Note: These best practices are better aligned to suit generic business needs. Plan your file
system distribution according to your needs and consult your Linux distribution manual for
further recommendations.
Important: The root (/) file system should not be placed on an LVM device because, if an LVM failure occurs, you can still recover the system by using single-user mode.
Important: Like root (/), do not place the /boot file system on an LVM device. The
preferred file system type for /boot is EXT3.
The size of this file system depends on the number and type of applications that are running
and how long the log files are kept on the server. Also, consider whether the application is
designed to write files here, and their sizes and frequencies.
The service control files are also placed on the /var file system, so it can never be scaled as a shared file system and it must always be mounted read/write.
Because it is a dynamic file system, place it on an LVM device to allow the capability to be
extended or reduced as needed.
The file system size depends on the size of the software packages that will be installed in it. It
is easy to estimate the requirements for a single software package. However, upgrades,
maintenance, and additional software packages are not so easy to plan for. The /opt file
system can also be a dynamic file system and should be configured on an LVM device.
Depending on the situation, the /home file system can be a dynamic file system. If it is
dynamic, configure it on an LVM device.
Other database management systems put their data files in other directories. For example,
the MySQL database server’s default location for data files is the /var/lib/mysql directory. If
the server is a MySQL database server and you are using the Linux distribution from Red Hat
Linux or SUSE Linux, consider including a new file system at the /var/lib/mysql mount
point.
For each target database management server, make sure that you know where the binary
files and the data files will be located. Only with this information can you plan to create the
devices and file systems for the target system.
There might be file location differences depending on the distribution of Linux that you install
at your site. Make sure that you know these differences, if any, and plan for them.
In a shared disk environment, remember that file system changes that are performed by the guest machine that has read/write control become available to the other guests that share the file system only after that file system is unmounted and mounted again. As an example, think of the environment of a web cluster service where the application servers need only read access to the web pages and do not need to write to the file system where the files are allocated.
In the example that is shown in Figure 5-12, only the special file system and mount points
relevant to the solution are represented. The data file location is at mount point /srv/www/app.
This is the file system that is shared between the Linux guests. There is also the shared file
system /opt/ibm/IBMHTTP, where the web server binary files are installed. For the IBMHTTP
service, the log files are redirected to the local /var/log/httpd file system. All shared devices
are the same device type, and are managed by the z/VM operating system.
Figure 5-12 Shared file systems among three Linux guests under z/VM: /srv/www/app is mounted read/write on all guests, /opt/ibm/IBMHTTPD is mounted read-only on the subordinate guests and read/write on the master, and each guest has a local read/write /var
The benefits of using a shared file system are based on economy of resources. You can reduce application binary space allocation and code-updating effort because you have to update only one master server and then remount the file system on the subordinate servers.
Disk devices
A single Linux guest can access multiple disk devices of the same or different device types. This feature is helpful for large file systems, such as those of database servers. If necessary, you can split a single disk into partitions with the fdisk or fdasd Linux utilities, or into minidisks with z/VM.
A combination of both solutions can help you improve system performance and use storage
resources efficiently. For more information, see Linux on IBM System z: Performance
Measurement and Tuning, SG24-6926.
Software-defined storage
IBM LinuxONE supports IBM Spectrum® Scale and other software-defined storage technologies. This newer form of virtualization abstracts the rich features that are found in a single enterprise storage system so that they become available across multiple storage facilities. This feature provides tremendous benefits in the clustering technologies that are used for high availability solutions and data replication or backup.
For more information about IBM Spectrum Scale, see this web page.
For more information about Non-Volatile Memory express support on IBM LinuxONE, see
Maximizing Security with LinuxONE, REDP-5535.
An application’s infrastructure diagram helps you to understand the relationship among its
interconnected components and assess the overall migration complexity. With a diagram
available, it is possible to fully establish expectations, required efforts, and goals along with all
involved stakeholders during the migration process, which typically speeds up the migration
process.
Such situations present valid reasons for considering a migration to a more efficient platform
like IBM LinuxONE. In most cases, a migration to LinuxONE will help an organization realize
significant cost savings over three to five years. The question is, which applications can you
migrate and what risk factors are associated with the migration?
Another key element in choosing the appropriate applications for migration is whether they
are supported on LinuxONE. This consideration is normally not a problem with homegrown
applications, depending on what language they were written in. Also, LinuxONE has a long
list of supported open source applications available on the platform.
IBM software
IBM has many of its software products available for LinuxONE. The benefit to customers is
that a migration from one platform to another is in many cases effortless because many of
these products share their code base across multiple platforms.
Generally, migrating from IBM products on distributed servers to the same IBM products on
LinuxONE is a relatively straightforward process. For more information and examples, see
Chapter 6, “Hands-on migration” on page 127.
Db2
You can use Db2 for Linux, UNIX, and Windows products on LinuxONE. It works seamlessly
in the virtualized environment without any extra configuration. In addition, autonomic features,
such as self-tuning memory management and enhanced automatic storage, help the
database administrator to maintain and tune the Db2 server. For more information and a
migration example from x86, see 6.2, “Migrating Db2 and its data” on page 134.
Oracle
Because Oracle database is fully supported on LinuxONE and runs in an efficient manner on
this platform, it is a good candidate for migration to LinuxONE.
Oracle databases on LinuxONE also support Real Application Clusters (RAC), the Oracle
high availability clustering solution. The advantages for Oracle RAC on Linux are a
high-availability cluster with low latency within the LinuxONE platform that is combined with
HiperSockets for inter-LPAR communication.
Oracle WebLogic Server is also supported on LinuxONE. It allows you to run a complete Oracle Java environment and a highly available Oracle database within the same LinuxONE machine.
In many cases, Oracle supports mixed configuration mode where the database tier sits on
Linux and applications for Oracle E-Business Suite, Oracle Siebel, and Oracle Business
Intelligence run on distributed servers under Linux, Windows, or UNIX. For more information
about which Oracle products are certified for LinuxONE, contact your Oracle representative or
see this web page.
IBM also took its leading role in the open source community seriously. IBM made important
contributions to projects, such as Apache Hadoop, which enabled continuous development in
the fields of analytics and high performance computing. Clients and solution builders that
want to innovate on top of a high-performance data analytics platform can take advantage of
the flexibility, throughput, and resiliency of IBM LinuxONE Platform, and the immediate
price-performance value that is provided by LinuxONE solutions.
A LinuxONE server that is running Node.js and MongoDB can handle over 30 billion web events per day while maintaining 470 K reads and writes per second. The popular MEAN stack runs up to 2x faster than on other platforms. IBM LinuxONE allows MongoDB to scale
vertically with dynamically allocated resources instead of horizontally by sharding and
replicating the database. LinuxONE and MongoDB provide strong consistency, which ensures
that critical data remains consistent and minimizes sharding-related processor usage.
Infrastructure services
The following infrastructure services are good candidates for LinuxONE:
- Network infrastructure services, such as FTP, NFS, and DNS, are well served on LinuxONE. These workloads are generally minimal, but they are critical to the business. The main benefit of hosting these services on LinuxONE is the availability of the hardware's disaster recovery capabilities.
- LDAP security services fit well on LinuxONE, including OpenLDAP products and commercial products, such as IBM Security™ Directory Server, IBM Tivoli® Directory Integrator, and IBM Tivoli Access Manager. By using LinuxONE, customers can build a robust identity-services-oriented infrastructure.
Application development
LinuxONE provides the following benefits for application development:
Whether for Java, C/C++, or most other programming languages, a virtualized Linux
environment is an ideal platform for application development. Although developers usually
develop on a stand-alone platform, testing and modifying are generally performed in a
server environment. Developers can be given multiple virtual servers to perform interactive
testing while troubleshooting or enhancing the application.
Select an application that is reasonably self-contained and that does not rely too much on
input from multiple sources and other applications. In addition, choose an application that
does not require a major rewrite to run on LinuxONE.
The best candidates are Java-based applications because these applications are generally platform-independent. However, if you are moving to a different Java platform release or a different middleware product, some code changes might be necessary.
Applications that are written in C/C++ are also suitable if the source code is available.
However, keep in mind that these must be recompiled for the IBM LinuxONE platform.
After you select an application to migrate, clearly define your goals and expectations. The
POC’s results should achieve the same performance, usability, and functionality as the source
production environment.
Many distributed applications grew in only a few years from a single server to tens or even
hundreds of interconnected systems. These interconnected servers not only add network
burden, but complexity and built-in fragility. If such an application is being considered for
migration, make simplification part of the core of what needs to be done.
Note: The main thing to remember during migration planning is to completely map all
application interdependencies. The aim is to identify any obsolete networking technologies
and interfaces, which might in turn require another application to be migrated to a current
network technology.
Note: Starting with IBM Java 8 SR5, enable the Pause-less garbage collection feature by
using the -Xgc:concurrentScavenge argument to your JVM. For more information about
how the Pause-less garbage collection feature works, see the IBM Java SDK
documentation.
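For example, the flag is passed directly on the Java command line; the application JAR name in this sketch is hypothetical.

   java -Xgc:concurrentScavenge -jar app.jar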
Architecture-dependent code
Programs that are in directories (on non-IBM LinuxONE systems) with names such as /sysdeps or /arch typically contain architecture-dependent code. To port any of these programs to LinuxONE, you must reimplement them for the LinuxONE hardware architecture.
During the migration planning discussions, the workload of the instances and the databases
that are running at the source environment must be considered, along with the number of
concurrent users and the number of instances and databases that are running in a unique
source server.
A suitable capacity plan is the key to successfully migrate multiple databases to a single
LinuxONE LPAR. An important point to consider when planning the capacity is the
relationship between the source and target server resources. Typically, less resources are
required in IBM LinuxONE servers when compared to their same workloads running under
the x86 architecture. For more information about some of these aspects, see 5.4.4, “Technical
considerations” on page 90.
Use Table 5-1 on page 89 to map the performance of your database instances. Change the
Server name column to “Instance name” and document the appropriate information in its
respective fields.
With this information, multiple servers can be migrated into a single LinuxONE logical partition while remaining capable of serving all user requests, with improved database response times and enhanced management. This process makes it easy to define the number of virtual CPUs that each server needs and avoids CPU constraints during peak usage hours.
Tip: If possible, gather data for an entire month instead of a single day. The more data that
is available, the more accurate the performance analysis.
Databases also use buffer pages to speed up table access. That is, database servers are memory- and storage-bound and, thus, require proper capacity to operate efficiently and quickly serve their users.
CPU
The number of virtual CPUs that is allocated to a database server is important. However,
configuring the amount of virtual CPUs to the same amount of real CPUs does not guarantee
better performance.
The number of processes in a processor queue is directly influenced by all other resources
competing for CPU time. Typically, when many processes are competing against each other
for CPU access, it is said that the system is “busy” and cannot serve all requests effectively.
Memory or I/O constraints can also affect the processor queue directly.
Before deciding that the server does not have enough CPUs, analyze the CPU access times
with Linux performance tools, such as sar and top.
Memory
Databases typically require a large memory area to achieve acceptable performance.
Depending on the database and the amount of memory required, consider using large or
huge pages. Memory access-intensive applications that use large amounts of virtual memory
might obtain performance improvements by using large or huge pages.
IBM LinuxONE features impressive memory access speed times. During the migration of a database server, start by allocating less memory to the IBM LinuxONE machine than the source server used. Then, increase or decrease it as necessary.
A LinuxONE preferred practice is to use z/VM VDISK devices as swap devices. Because
swap that is configured at VDISK devices provides desirable response times, the eventual
memory paging (the process that moves memory blocks to and from real memory and to and
from swap memory) is not considered a real problem. It is also not considered a problem if the
server has no more than 50% of the swap memory allocated. However, this configuration
involves variable paging and swapping allocation, which must be monitored to avoid database
outages.
If the server often presents long periods of swapping, increase the guest’s memory and
continue monitoring the server to find its best memory size.
The Linux kernel provides a configurable kernel parameter called vm.swappiness, which
determines whether more or fewer pages of memory are to be swapped in and out to disk.
Consult your database product documentation and your database administrator for more
information about how to correctly tune this value to the wanted workload.
Example 5-1 shows how to configure the vm.swappiness parameter in the /etc/sysctl.conf file, as recommended by the Db2 product documentation.
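The example is reproduced here as a minimal sketch. The value 0 is commonly recommended for Db2 workloads, but confirm the appropriate value in your Db2 documentation.

   Example 5-1   Setting vm.swappiness in /etc/sysctl.conf
   # Db2 recommendation: avoid swapping application memory
   vm.swappiness = 0

Run sysctl -p to apply the change without a reboot.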
The correct swap size depends on your database requirements and how much memory it
uses. The swap memory is used during high peaks only; therefore, set your swap size to a
safe number to avoid an out of memory condition. As with memory sizing, frequently monitor
the overall swap consumption and increase or decrease it as necessary.
Shared memory
Linux systems use the interprocess communication (IPC) facility for efficient communication of processes without kernel intervention. IPC uses three resources to allow communication among processes: message queues, semaphores, and shared memory.
Shared memory is a memory segment that is shared by more than one process. The size of
the shared memory directly influences database performance because it can allocate more
objects in real memory, which allows the system to perform less I/O.
Several Linux Kernel parameters can be tuned to improve the memory allocation to
databases, depending on the workload requirements. Table 5-2 shows the recommended
kernel parameters for a Db2 database.
Table 5-2 Recommended kernel parameters for a Db2 database

Parameter       Description                                  Recommended value
kernel.shmmax   Defines the maximum size of one              90% of the total memory; however, if a large
                shared memory segment in bytes.              amount of storage is available, you can leave
                                                             512 MB - 1 GB for the operating system.
kernel.shmall   Defines the available memory for             Convert the shmmax value to 4 K pages
                shared memory in 4 K pages.                  (shmmax value x 1024 / 4).
kernel.shmmni   Defines the maximum number of                4096. This amount enables large segments to
                shared memory segments.                      be created, which avoids the need for
                                                             thousands of small shared memory segments.
                                                             This parameter varies depending on your
                                                             application.
Note: Consult your database product documentation for its recommended kernel
parameters and other tuning that might be necessary.
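As a sketch only, the following /etc/sysctl.conf fragment applies those recommendations to a hypothetical guest with 8 GB of memory; recompute the values for your own configuration.

   # 90% of 8 GB (8589934592 bytes) as the largest single shared memory segment
   kernel.shmmax = 7730941132
   # allow shared memory to cover the full 8 GB, counted in 4 K pages
   kernel.shmall = 2097152
   kernel.shmmni = 4096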
Storage
Data storage access on a database server is intensive and must be considered during server
migration. To take advantage of the I/O processing capabilities of LinuxONE, the first
consideration in design is to spread the I/O workload over as many paths as possible to the
storage server.
For more information about how disk device accesses are made and how an external storage
system provides its own disk page caching, see 3.8.3, “Virtualized disk” on page 42.
Four file formats are supported for import and export. The format that is chosen usually reflects the source that it comes from or the target tools to be used. Often, the file's extension, such as .ixf, .del, or .asc, reveals the content format. For example, a file that is named employee.ixf contains uneditable Db2 interchange format data. Import can traverse the hierarchy of tables in .ixf format.
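For illustration, the following Db2 command line processor calls export a table in .ixf format on the source and import it on the target. The database and table names are placeholders for this sketch.

   # On the source server
   db2 connect to SAMPLE
   db2 "export to employee.ixf of ixf select * from employee"
   # Transfer employee.ixf to the target server, then:
   db2 connect to SAMPLE
   db2 "import from employee.ixf of ixf insert into employee"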
This method can require an excessive amount of downtime if your database is large. Oracle
developed the following methods to migrate from one hardware platform to another:
Transportable table spaces: This technique was introduced in Oracle 8i to allow entire
table spaces to be copied between databases in the time it takes to copy the data files.
Data Pump export/import: These high-performance replacements are for the original
Export and Import utilities.
Recovery Manager (RMAN): An Oracle Database client that performs backup and recovery tasks on your databases and automates administration of your backup strategies.
Oracle GoldenGate: A comprehensive software package for real-time data integration and
replication in heterogeneous IT environments.
Custom procedural approaches.
If the database is not using all of the available memory, reduce the server memory until it
starts paging. A system that is constantly swapping is the first indication of insufficient
memory.
High load averages are not always an indication of CPU bottlenecks. Monitor the LPAR
performance and determine if any other process or server that is running in the same LPAR is
competing for CPU time when the problem occurs.
Database data files and log files must be in different file systems and should be striped across
the storage hardware. Have multiple paths to the data to ensure availability.
The systems administrator and the database administrator must work together during the
sizing process. Database servers typically require adjustments at the Linux and database
levels.
Backup concepts
The term backup refers to the creation of an extra copy of a data object to be used for
operational recovery. The selection of data objects to be backed up must be done carefully to
ensure that, when restored, the data is still usable.
A data object can be a file, a part of a file, a directory, or a user-defined data object, such as a
database table. Potentially, you can make several backup versions of the data, each version
at a different point in time. These versions are closely tied together and related to the original
object as a group of backups. Files are backed up by using backup operations, which typically occur daily, and a new version is created whenever the file changes. The most recently backed-up file version is designated the “active” backup. All other versions are “inactive” backups.
If the original data object is corrupted or lost on the system, restore is the process of
recovering the most current version of the backed-up data. The number and retention period
of backup versions is controlled by backup policy definitions.
Old versions are automatically deleted as new versions are created under the following
circumstances:
The number of versions stored exceeds the defined limit
After a defined period
On any system, several categories of data objects must be backed up, each with different
retention needs. A database table can be backed up frequently at regular intervals, whereas
an operating system configuration file is backed up only when it is changed.
The difference between backup and archive software is that backup software creates and controls multiple backup versions that are directly attached to the original client file, whereas archive software creates an extra stored object that is normally kept for a specific time, such as vital records.
In addition, the KVM snapshot and managed save functions can be used to save the state of
its managed Linux guests.
For more information about options for backing up data within Linux that offer better flexibility,
see 5.5.4, “Linux backup”.
Other utilities are available that customize the use of the command-line tools. For example, Amanda adds a user-friendly interface for the backup and restore procedures, which makes backup tasks easier to manage. It includes a client and a server component to facilitate a central backup solution for various remote clients, regardless of the platform. Amanda is typically included, or at least available, in most Linux distributions.
Another useful feature of Linux backups is evident in the capabilities of the file system. File
systems, such as ZFS and BTRFS, can take snapshots. These mechanisms can aid the
backup process by allowing the backup software to concern itself only with backing up the
static snapshot while allowing new changes to the data to continue unimpeded. This process
provides for much greater efficiency of the backup process.
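As an illustration, with Btrfs a read-only snapshot can serve as a stable source for the backup while the live data continues to change; the subvolume path and archive name are hypothetical.

   # Create a read-only snapshot of the data subvolume
   btrfs subvolume snapshot -r /srv/appdata /srv/.backup-snap
   # Back up from the static snapshot while applications keep writing
   tar czf /backup/appdata.tar.gz -C /srv/.backup-snap .
   # Remove the snapshot when the backup completes
   btrfs subvolume delete /srv/.backup-snap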
Several databases provide mechanisms to create backups, which ensures that memory
buffers are flushed to disk and that a consistent data set is created. This feature can also be
combined with storage facilities, such as FlashCopy, that perform instantaneous point-in-time
copies.
Finally, commercial backup utilities, such as IBM Spectrum Protect, are available for an enterprise environment. For more information about IBM Spectrum Protect, see this web page.
Changes in the hardware environment at times can lead to a change of storage technology,
which means reorganizing the media data content. Therefore, the data inventory structures
might need to be reconverted to allow efficient data retrieval.
Because the operating system and the backup and archival management tools are going to
be retained or upgraded, no incompatibility issues occur with the archived data. This fact also
means that the migration is relatively straightforward because the storage backup and
archival manager product can access the archived data.
Often, backup and archival managers include built-in migration tools that migrate the archived data from the source operating environment to the target environment. This is a useful time at which to reorganize the archives and purge unwanted data so that you efficiently reduce the storage needs of the archives.
In each domain, the concept of “principle of least privilege” is applied, which results in the
security policy. That policy is where each individual is only granted the access that they need,
no more. You need to establish individuals and their roles, and who is going to be allowed to
do what. This process is vital for overall system security because if a compromise occurs, its
exposure will only be to the affected role.
Use mandatory access controls to not only ensure that privileged access is given to only what
is needed, but to also ensure that authorization is withdrawn when privileges are revoked.
A basic premise underlying the concept of security is that you are only as strong as your
weakest point. That is why security is time-consuming, and it is difficult to predict the amount
of time that the analysis will take. If this is the first time that you are undertaking a security
analysis, do not underestimate the time or scope involved in this task.
System logs and application logs need to be immutable. Logs must be kept in such a way that
they cannot be altered by system users. If logs can be altered, overall system integrity comes
into question if a hack is suspected. Therefore, it is important that all logs be kept in a way
that makes them a permanent record of what occurred on the system.
Document the system security and all the assumptions made. Include all “what if” situations
that can reasonably be expected to occur. Also, document security plans such as change
control, audits, and procedures for break-ins in all domains.
The VM layer allows for many operating system images to run on the same hardware at the
same time. The hypervisor allows for resources to be shared between each VM. It also allows
for virtual devices to be created and consumed, like HiperSockets.
Processor Resource/System Manager (PR/SM) has been certified through the Common
Criteria at Evaluation Acceptance Level (EAL) 5+. More details about Common Criteria are
covered in 1.3, “Reasons to choose LinuxONE” on page 8.
To further ensure the isolation of one partition from another, dedicate the OSAs used to
connect to external networks by a hypervisor to the partition in question. These precautions
ensure that other guests or partitions cannot share an external-facing network. However, if
the security policy states that nothing on a virtualized environment can be connected to the
internet, you have the option of putting the web servers on x86 servers with a physical firewall
between the web servers and the hypervisor.
LinuxONE provides two options. The first is to implement a software firewall on a virtual server within the virtual Linux environment. This configuration has some challenges because the firewall software might not be in use elsewhere in the organization and, as such, would have to be certified, which might be a long and complicated process. The second option is to continue to rely on external physical firewalls, as described next.
The different security zones that are shown can be in separate partitions or in the same
partition. Customers reported that using external firewalls has minimal performance impact.
As mentioned, conforming to the security policy can simplify a migration. However, the reality
is that for applications within the LinuxONE footprint, there might be no requirement for
firewalls if all incoming communications to LinuxONE are processed by external firewalls.
Control of hypervisor
Who will own the hypervisor, and what is the protocol for requesting changes or actions?
Regardless of whether the KVM or z/VM hypervisor is used, if you control the hypervisor, you
need to fully understand it because it is the basis for all the virtual machines on that partition.
It must be secure and its access should be highly controlled. Also, document a change
request protocol and publish it to all stakeholders.
You also need to plan for hypervisor maintenance, which might require that some or all of the
virtual machines be quiesced. Therefore, ensure that a change window is set aside to allow
for maintenance, and put a plan in place and set a schedule to allow for security and
hypervisor updates and maintenance.
During migration, you might be given an already hardened Linux image. In that case, you
simply need to know what is allowed and not allowed with the image. However, if a hardened
Linux image does not exist, create and maintain one.
You will need your migration analysis to determine what needs to be reenabled. If any
applications are to be installed and services enabled, you need to provide credible business
cases for each, individually or as a set. Completing the security analysis can provide just such
business cases. Make sure that the documentation includes all applications and services as a
delta from the base hardened Linux image.
Important: RHEL includes the SELinux security method, SUSE Linux Enterprise Server
includes AppArmor for its enhanced security method, and Ubuntu uses AppArmor by
default (although SELinux is available). Determine whether those environments are in use
or required, and plan accordingly.
Those mechanisms are complex, so invest the time that you need to identify code and
applications that have not been ported to work in these environments.
If you know that there is a security issue with an application, do not use it. You need to
address all security issues before the system is placed in production. If more secure ways to
configure an application are available, invest the time to make those changes during
migration. For example, you might place a database on a different virtual machine than the
application that uses it. Remember, the more separation, the more straightforward security
will be. Systems with easy-to-understand security tend to be easier to defend and maintain.
Code dependencies
Almost all code uses APIs and other libraries to carry out the tasks that it was designed for.
Therefore, you need to review these dependencies before migrating. If you discover that a
dependency exists on an item that has a known security issue, you must find and implement
a suitable replacement.
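A simple starting point for this review is to list the shared libraries that a binary links against; the binary path and the abbreviated output are placeholders for this sketch.

   $ ldd /usr/local/bin/myapp
           libssl.so.1.1 => /usr/lib64/libssl.so.1.1 ...
           libc.so.6 => /lib64/libc.so.6 ...
   # Check each library version against your distribution's security advisories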
Application dependencies
Generate and review a list of all application dependencies for known security issues. Only
fixed or known secure versions should be used. You might be tempted to migrate the
application over to the new Linux guest and test to prove that the migration is achievable.
However, such testing is invalid if any application or its dependency is on code that has known
security issues.
Use exceptions to help ensure that input always conforms to the format that is expected, and,
if the unexpected occurs, that it can be gracefully handled.
Networking
If the code implements TCP sockets, make sure that its design and function are reviewed with
the networking team that represents the firewall. That team will probably need to know the
following information:
What ports are used by the code, and for what purpose?
What type of protocol is used: TCP, UDP, ICMP, and so on?
Are special settings used on the port, as in TCP keepalive?
How long can a connection tolerate a lack of response?
How long is a connection allowed to idle?
Escalations of authority
Apply the “principle of least privilege”, which means that programs only operate with the
authority needed to accomplish a goal. If the code accesses a database, it should access it
only as a user with the access needed, and not as an administrator.
Migrating code
Analyze your code to determine any escalations of authority. Also, ensure that it accounts for
exceptions, so that a de-escalation of authority exists. In other words, make sure that if the
code is broken, it does not allow the user to operate at a different access level than is allowed.
Migrating applications
Programs that run as root, the super user, must be carefully examined and assurances given that they are operating as designed. Generally, do not allow any code or program to run with such authority if you can avoid it. Make sure that server applications are run at the suggested
secure settings during all phases of the migration. You do not want to run applications as the
administrator during development, only to discover during testing that certain functions do not
work.
Implementing executable system security requires an audit trail, without exceptions. All access to the system must be logged in a secure fashion to ensure that, if an authorized user commits an indiscretion, it cannot be covered up.
Availability analysis
Sometimes attackers do not break into a system, but instead bring down a service by
overwhelming it with requests. Thus system or services availability needs to be understood
and service level agreements maintained.
Communicating availability
Establish a standard for communicating system availability that explains how to report issues
and outages to ensure that they are communicated to the appropriate staff. An unexpected interruption in availability can be the first sign of a security issue, and that potential security threat needs to be addressed.
Accountability analysis
As previously mentioned, all system logs and application logs must be immutable. If attackers
gain access, they generally erase evidence of their presence to avoid detection. Also, if users
attempt to perform unauthorized acts, they might try to cover their activities by erasing log
files or incriminating evidence.
Another approach to securing system logs is to use a remote log server, as supported by
syslog-ng. See an example of this approach in 8.6, “Deploying central log server” on
page 188. The logs on the remote log server are not necessarily immutable, but they are not
directly writable from a system that has been compromised.
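A minimal client-side sketch of this approach follows. It assumes syslog-ng 3.x, a log server at 10.1.1.10, and an existing source that is named s_src; the default source name differs among distributions.

   # /etc/syslog-ng/conf.d/remote.conf (on each client)
   destination d_remote {
       network("10.1.1.10" transport("tcp") port(514));
   };
   log { source(s_src); destination(d_remote); };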
Authentication
Ensure that communication end-points are who they say they are. Attackers often “spoof” or
pretend to be a system or user that they are not. To protect against such attacks, use
“authentication” conversations:
Users must be assured that they are connecting to the server that they think they are.
Servers need to be assured that users are who they say they are.
This authentication must be kept private so that eavesdropping cannot occur.
Disabling Telnet access and using Secure Shell (SSH) accomplishes this authentication.
Using Secure Sockets Layer (SSL) with web servers also accomplishes this level of security,
and is preferred over the default of no SSL.
Confidentiality analysis
Confidentiality must first be communicated and then enforced. Thus, before users can access
a system, they need to be told what the confidentiality of a system is and how any data or
information is to be used or shared. In addition, a system needs to be in place to enforce the
policy. This enforcement is normally done by auditing access logs. If a violation is detected, it
needs to be communicated to the affected parties.
Tip: Use ANSI art or special characters to make the login window attractive. It is useful to
display system information such as the Linux distribution with its version and release
information, along with a greeting.
On web pages, create a link from the main page so that the system policy can be easily
accessed. If you are allowing VNC login, display the policy by updating
/etc/gdm/custom.conf as shown in Example 5-3.
After the system is moved from test to production mode, it remains that way. Outages are
expensive for companies, but failing to plan change windows and downtime also causes
security problems. In the rare case that a VM needs to be restarted, you need the ability to
allow for these types of changes.
Record how long it takes to make changes and test worst-case scenarios. After testing the
change on the clone is complete, report to production stakeholders how long the change will
take and how long the worst case will take.
You can also configure Samba to use LDAP as its user repository. Thus, you can have one
security domain across MS Windows, IBM AIX®, and Linux. For more information about this
topic, see Open Your Windows with Samba on Linux, REDP-3780.
OpenSSL
An open source implementation of Secure Sockets Layer, OpenSSL can use the libica shared
library for hardware encryption.
Using this approach offloads the cycles and allows for more concurrent access to a web
server that is using SSL or applications that use one of the supported APIs. To learn about
how to configure your system so that your Linux guest takes advantage of the installed
hardware, see The Virtualization Cookbook for IBM z Systems Volume 1: IBM z/VM 6.3,
SG24-8147.
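To verify that the hardware offload is being used, tools from the libica and openssl-ibmca packages can help; this sketch assumes that both packages are installed.

   icastats                 # counters of hardware versus software crypto operations
   openssl engine -c ibmca  # confirm that the ibmca engine is loaded and list its ciphers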
Data is at the core of every business decision, becoming increasingly complex with time. This
value must be protected with a strong perimeter using encryption. IBM LinuxONE is capable
of strong encryption with efficiencies, enabling enterprises to benefit from IBM Pervasive
Encryption for data at-rest, along with network and application cryptographic accelerations.
IBM Pervasive Encryption is a capability that offers extensive encryption for data that is
in-flight and at-rest to substantially simplify encryption while reducing costs that are
associated with protecting data and achieving compliance mandates.
Working with the new Crypto Express feature, the key materials that are used to create this fortified data perimeter are protected by using the unique protected-key CPACF. The keys that are used in the encryption process are not visible to the applications and operating system in clear text form.
Organizations also realize that a move from selective encryption (protecting specific types of
data only) to pervasive encryption (encrypting all data) is needed. Likewise, many barriers
that are encountered today with current enterprise data protection policy and strategy can be
removed with pervasive encryption, such as the following examples:
Decoupling encryption from data classification
This process allows organizations to implement their encryption strategy independent of
any challenges they might face while identifying and classifying sensitive data. It also
reduces the risk of unidentified or mis-classified data.
Using encryption without interrupting business applications or affecting service level
agreements (SLAs)
Changes to the application are not required if data is encrypted after it leaves the
application and decrypted before it reaches the application.
Reducing high costs that are associated with processor overhead
The cost of encryption is minimized by encrypting data in bulk and by using hardware
encryption accelerators with high performance and low latency.
Similarly, the further down the stack that encryption is performed, the less effort must be
spent analyzing, classifying, encrypting, and maintaining the encryption environment.
Ultimately, you gain a broader scope of encryption at the cost of granularity the further down
the software or hardware stack you go.
The following components for data at-rest encryption are in the Linux user space:
cryptsetup: A utility that is used to create secure keys and to manage disk encryption. (It
interfaces with dm-crypt and zkey.)
By using cryptsetup commands, you can perform the following actions (a usage sketch
follows this list):
– Open: Creates a mapping device
– Close: Removes the mapping device
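A minimal usage sketch of these actions (the device path and mapping name are illustrative
and not taken from this book's lab environment; on LinuxONE, a protected-key setup typically
pairs cryptsetup with secure keys that are generated by the zkey tool):
# Initialize the volume with LUKS2 encryption
cryptsetup luksFormat --type luks2 /dev/mapper/mpatha
# Open: create the device mapping (named secvol here)
cryptsetup open /dev/mapper/mpatha secvol
# Create a file system and mount the encrypted volume
mkfs.xfs /dev/mapper/secvol
mount /dev/mapper/secvol /mnt
# Close: remove the mapping when finished
umount /mnt
cryptsetup close secvol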
For more information about setting up encryption for data at-rest by using IBM LinuxONE
cryptographic capabilities, see Getting Started with Linux on Z Encryption for Data At-Rest,
SG24-8436.
The secure boot feature is part of the Unified Extensible Firmware Interface (UEFI), which is a
central interface to the firmware, operating system, and individual components of the server. It
protects against root-level attacks and malware that target boot-time vulnerabilities. The
system checks images at boot time for a vendor-signed cryptographic signature to verify that
the image is from an official provider, and that the image was not tampered with or replaced
by malicious third parties.
The system firmware first confirms that the system boot loader is signed with a verified
cryptographic key. The system then confirms that the key is authorized by a database that is
contained in the firmware; only recognized keys allow the system to boot.
For more information about Linux Secure Boot under LinuxONE, see Maximizing Security
with LinuxONE, REDP-5535.
This section describes some of the operational issues which, if present in the source
application, must be addressed in the target application. A careful and detailed analysis about
how the source application is supported by operations staff is required for a successful
migration effort.
An analysis of the operational functions can highlight characteristics of the application that
were not clear from the analysis of other application interfaces or from the code itself. The
application code might be successfully ported, but it is just as important that the application’s
operational support structures be migrated successfully as well.
For better understanding, the following terms and definitions are used when discussing
disaster recovery, high availability, and related concepts:
Disaster recovery (DR)
Planning for and using redundant hardware, software, networks, facilities, and so on, to
recover the IT systems of a data center or the major components of an IT facility if they
become unavailable for some reason.
High availability (HA)
The ability to provide service during defined periods, at acceptable or agreed-upon levels,
and to mask unplanned outages from users. High availability employs fault tolerance,
automated failure detection, recovery, bypass reconfiguration, testing, and problem and
change management.
The goal for mission-critical systems should be continuous availability. Otherwise, the
systems should not be defined as mission-critical.
The challenge with DR is to achieve a balance between the impact of an unavailable system
on the health of the business versus the cost of creating a resilient environment for the
application. This planning should include the likely scenarios that might impact an
application’s availability, and unrelated events that might impact the ability of a business to
function.
The usual IT issues such as server failure, network failure, power outage, disk failure,
application failure, and operator error, can be planned for through duplication of resources
and sites. Unrelated factors are rare and not directly related to IT, but they can have a huge
impact on the ability of a business to function. These events include fire, natural disasters
such as earthquake, severe weather, and flood, and civil disturbances. These events can
have a major impact on the ability of people to work.
Although this chapter focuses on the IT-related issues, you should also create a plan to deal
with the other, non-IT related events.
Table 5-3 lists the components of an IBM LinuxONE virtualized environment running an
application as a Linux guest and the relative costs of rectifying a single point of failure.
Table 5-3 Potential single points of failure that can impact availability
Single point of failure | Probability of failure | Cost to rectify
LinuxONE hardware       | Very low               | High
Apart from hardware and software failures, the following types of planned outages can impact
an application’s availability:
Hardware upgrades that require a power-on reset
Configuration changes that require a reboot of the partition
KVM or z/VM maintenance
Linux kernel maintenance that requires a reboot
Application maintenance
Additionally, many services enhancements have been introduced to avoid planned outages:
Concurrent firmware fixes
Concurrent driver upgrades
Concurrent parts replacement
Concurrent hardware upgrades
On/Off Capacity on Demand provides extra capacity in two hour increments that is available
to be turned on to satisfy peak demand in workloads.
All scenarios assume that the IBM LinuxONE is configured with redundant LPARs, redundant
paths to disk (FICON and FCP), redundant Open System Adapters connected to the
organization’s network, redundant system consoles, and redundant Hardware Management
Consoles. This is the normal setup for an IBM LinuxONE System.
The application design needs to include redundant software servers. The storage
infrastructure should also include redundant Fibre Channel switches, mirrored disks, and
data.
Design the communications network around redundancy with redundant network routers,
switches, hubs, and wireless access points.
For mission-critical systems, provide an uninterrupted power supply and a second site far
enough away from the primary site to avoid being affected by natural disasters.
Another important factor in the availability of applications is security and access controls. For
more information, see 5.6, “Security analysis” on page 98.
If a Linux virtual machine that runs the WebSphere Application Server workload fails, the
other node in the cluster takes over if you are running WebSphere Application Server Network
Deployment. This failover is achieved because an application deployed to a cluster runs on all
members concurrently. Additional availability is provided through the nondisruptive addition of
new virtual machines to the cluster.
Figure 5-15 LinuxONE with a single partition running WebSphere Application Server cluster
In this case, the production workload and WebSphere Application Server cluster are split
across two LPARs, which gives high availability to WebSphere Application Server because a
partition or hypervisor failure does not impact its availability.
Development and test workloads run in their own LPAR, so any errant servers have no impact
on the production workloads. As in the first scenario, a failure of a LinuxONE processor is
fixed automatically without any impact to the running application.
Figure 5-16 LinuxONE with multiple partitions running WebSphere Application Server cluster
At regular intervals, the clustering software verifies that the physical partitions, the virtual
machines, and the server applications (a web server in this example) are all responsive. If any
component is unresponsive, the service IP address is moved to the network of the passive
system, as shown in Figure 5-18. The cluster can be configured to determine which action is
performed at this point, from notification to resource restarts of the failing system.
For more information about Tivoli System Automation for Multiplatforms, see this web page.
[Diagram: active/active WebSphere Application Server cluster. A router feeds primary and backup load balancers, which route traffic through firewalls to HTTP servers and WebSphere Application Server (WAS) nodes, with the WAS deployment manager (Dmgr) running in z/VM LPAR 2.]
[Diagram: the same topology extended with Db2. The WAS servers connect over JDBC to primary and standby Db2 servers that are kept synchronized with HADR.]
Figure 5-20 Active/active WebSphere Application Server cluster and Db2 HADR
During a partition-wide outage of the primary Db2 system, the standby Db2 system takes
over in seconds, which provides high availability. Communication between the primary and
standby systems is through TCP/IP, which in this case is done by using the high-speed virtual
network feature HiperSockets (available on LinuxONE).
The standby Db2 system can also be at a remote site to provide enhanced availability during
a site failure.
IBM Tivoli SA MP running in both Db2 servers is designed to automatically detect a failure of
the primary, and then issue commands on the standby for its Db2 to become the primary.
Other cluster management software can be used. However, SA MP and sample automation
scripts are included with Db2 to manage the HA requirements of your Db2 database system.
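As a hedged illustration of the Db2 commands that such automation drives (the database
name SAMPLE comes from our lab environment; in a real setup, SA MP issues the takeover
automatically):
# On the standby server: start HADR in the standby role
db2 start hadr on db SAMPLE as standby
# On the primary server: start HADR in the primary role
db2 start hadr on db SAMPLE as primary
# On the standby server, during a primary outage: take over as the new primary
db2 takeover hadr on db SAMPLE by force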
[Diagram: active/active WebSphere Application Server cluster with Oracle RAC. Primary and backup load balancers route traffic through firewalls to HTTP and WAS servers, which access two Oracle servers with RAC and shared disk in z/VM LPAR 2.]
In a LinuxONE environment, communication between the database nodes uses a virtual LAN
in the same LPAR or HiperSockets to other LPARs. Both methods are at memory-to-memory
speeds with low latency.
Distances greater than 100 km (62 miles) are also possible, although this configuration
requires an asynchronous copy on the remote site, so it is not synchronized with the primary
copy. For information about the GDPS Virtual Appliance, see IBM GDPS Family: An
Introduction to Concepts and Capabilities, SG24-6374.
A combination of different solutions is required, each focusing on one or few aspects of the
application computing environment, as shown in the following examples:
Storage: Distributed, highly available storage is the objective of a few projects:
– GFS2: Created by Red Hat
– OCFS2: Created by Oracle and named the Oracle Cluster File System, now in its second
version
– GlusterFS: Now maintained by Red Hat
Application availability:
– Application service clustering, which is often managed by the application server, such as
WebSphere (see “Active/active application server cluster” on page 118)
– pacemaker: Cluster resource manager, evolved from the Linux-HA project
– corosync: Group communication system, evolved from OpenAIS
– heartbeat: Simple clustering solution, a main focus of the Linux-HA project
Note: It is out of the scope of this book to discuss every available solution. For more
information about implementation and options, see your distribution reference.
Ubuntu options
For more information about Ubuntu high availability, see this web page.
From an availability viewpoint, an SLA for an “in-house” business application should focus on
the first two items: what service is being delivered and how it is being measured:
Application availability hours, for example:
– 24 hours/day x 7 days a week.
– 6:00 AM to 6:00 PM, weekdays.
– 9:00 AM to 5:00 PM, weekdays.
– Definition of how availability is measured and who will do the measurement. For
example, system availability, application availability, database availability, and network
availability.
Minimum system response time
A defined target number, and a definition of where and how response time is measured.
Cost of availability
As shown from the examples in this chapter, there is a great degree of difference in cost and
complexity of the various availability options discussed. Providing CA and a DR plan is not an
insignificant expense, but with the degree of reliance on IT systems by most businesses
today, it is a cost that cannot be ignored.
If you have a web-facing revenue-generating application, you can calculate the cost of
downtime by monitoring the average revenue that is generated in a specific amount of time.
This amount provides an idea of the revenue that might be lost during an outage and how
much you should spend to make the application more resilient. Other businesses have
different ways of calculating the cost of downtime.
Keep in mind that for any HA configuration to be successful in a real DR situation, there
needs to be a fully documented DR plan in place that is fully tested at least once every year.
This section covers some of the technical aspects that need to be considered before
migrating a distributed virtualized environment onto IBM LinuxONE cloud. This process is
usually straightforward if the workloads that are targeted for LinuxONE cloud migration are
already virtualized. You cannot move the individual virtual machines directly onto a LinuxONE
cloud, but you can create similar capabilities of the virtualized environment on the targeted
LinuxONE cloud platform.
Before migration, create a chart of the following information for planning purposes:
Performance and availability of a range of services and the linkages between them.
Record performance metrics for each of the individual hypervisors.
Application architectures and interdependencies.
Network connectivity diagram for individual virtual machines and their applications.
Security and isolation of networks among applications.
Storage allocations, and their performance requirements.
Many methods of migrating your data from your source x86 servers to IBM LinuxONE are
available, and many different ways of configuring your new LinuxONE environment exist. A
typical migration plan often involves a deep architectural understanding of the source
application, as described in Chapter 5, “Migration analysis” on page 59.
After the main migration tasks are completed, applications can be tuned to use many of the
platform features, such as Pause-less Garbage Collection for Java workloads, which
leverages LinuxONE’s Guarded Storage Facility for improved throughput and response
times, and Pervasive Encryption for encryption of data in-flight and at-rest.
This chapter describes a hands-on migration scenario that is performed in our lab in which we
migrated a full working application, which was composed of a WebSphere Application Server
ND cluster, Db2, IBM WebSphere MQ, and a simple Node.js application from several x86
servers to IBM LinuxONE.
Our application represents many of the workloads that are used by organizations. That said,
simpler or more complex scenarios might exist, which can require different migration
approaches in accordance with each application’s and business’s requirements.
Table 6-1 Software products and tools checklist for the x86 environment
Software products and tools checklist for the x86 environment
Network connection
Disk resources
OS file systems (mount point : size (in GB) : type):
  /opt  : 3  : Ext4 (logical volume)
  /home : 10 : XFS (logical volume)
  /var  : 5  : Ext4 (logical volume)
  /tmp  : 1  : Ext4 (logical volume)
DATA file systems (source x86 -> target LinuxONE):
  /opt/WebSphere : 10 : Ext3 -> XFS (vg_websphere, logical volume)
  /web           : 5  : Ext3 -> XFS (vg_data, logical volume)
CUSTOM file system:
  /tempspace : 10 : XFS (vg_websphere, logical volume)
Because the zfcp module is handled by the Linux kernel, the failover configuration is handled
by neither PR/SM nor z/VM, and must be done from within the Linux guest. Therefore, as
described in 3.8.3, “Virtualized disk” on page 42, it is recommended to attach at least two
N_Port ID Virtualization (NPIV) adapters to the guest system. Each adapter must be
connected over a different Fibre Channel fabric for redundancy.
For more information, see How to use FC-attached SCSI devices with Linux on z Systems,
SC33-8413.
With the FCP devices attached to the Linux system, bring each one online by running the
chzdev command along with its bus ID, as shown in Example 6-1.
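The command takes the following form (a sketch that follows the pattern of Example 6-2
below; the bus ID 0.0.f100 is taken from that example):
chzdev -e zfcp-host 0.0.f100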
Running this command automatically configures the FCP device for both the active and
persistent system configurations. To bring an FCP device online for the active configuration
only, so that it does not persist across restarts, pass the -a parameter to the chzdev
command, as shown in Example 6-2.
Example 6-2 Bring online an FCP device just for the active system configuration
chzdev -e zfcp-host 0.0.f100 -a
Most modern Linux distributions automatically discover the available ports from their targets
and perform automatic LUN scanning from these ports. Ideally, turning the FCP device online
should be the only activity that is required for the SCSI devices to be presented to the Linux
guest. However, depending on your distribution or application requirements, you might want
to override these settings. Table 6-3 shows the zfcp module parameters that are responsible
for handling these settings.
Note: Many distributions implement their own tools for configuring and scanning by way of
FCP. IBM also frequently releases documentation covering several administrative tasks
that are applicable to the major LinuxONE supported distributions at IBM
developerWorks®.
Most Linux distributions include the multipath tool that is installed by default. Example 6-4
shows how to install and enable the multipathd daemon under a Red Hat Enterprise Linux 8.2
system.
Example 6-4 Installing and enabling the device-mapper multipath under an RHEL 8.2 system
dnf install device-mapper-multipath
mpathconf --enable
systemctl enable --now multipathd
The following device stanza in /etc/multipath.conf sets the path grouping policy for IBM
storage devices:
devices {
	device {
		vendor "IBM"
		product ""
		path_grouping_policy group_by_prio
		prio alua
	}
}
Migrating databases typically involves downtime, especially when handling production data.
Always ensure that the source Db2 instances are stopped and that the applications that rely
on them are routed to the new server right after the migration is complete. Failure to do so
can result in the target system running out of sync with its source server in such a way that
the migration steps must be repeated from scratch.
Note: Always perform a full database backup before the migration occurs. Taking this
precaution prevents common human errors during the process and also allows for a
point-in-time database restore (if necessary).
This section describes the migration of Db2 data from a source x86 system (lnx-x86-db2) to
a LinuxONE guest (lnxone-db2). We perform the migration of the same database by using
two different methods: the db2move/db2look and the LOAD FROM CURSOR utilities.
Example 6-6 Perform a full database backup before any other activity occurs
lnx-x86-db2 $ db2 list applications
SQL1611W No data was returned by Database System Monitor.
lnx-x86-db2 $ db2 deactivate db SAMPLE
DB20000I The DEACTIVATE DATABASE command completed successfully.
lnx-x86-db2 $ db2 BACKUP DATABASE SAMPLE TO /db2_backup COMPRESS
Still under the source x86 systems, retrieve the list of authorization IDs that are required for
your application to function, as shown in Example 6-7 on page 135. In our application, we are
required to have a group that is named appdb created. The members of this group are granted
privileges to our migrated databases. See the Db2 product documentation and consult your
stakeholders for more information about requirements, depending on your environment.
AUTHID AUTHIDTYPE
------ ----------
APPDB G
PUBLIC G
SYSDEBUG R
SYSDEBUGPRIVATE R
SYSTS_ADM R
SYSTS_MGR R
DB2INST1 U
7 record(s) selected.
From the target LinuxONE system, create the Db2 instance ID that houses your databases. It
is a good practice to also create a dedicated common group for all instances you are going to
use. Also, create the authorization IDs and groups that are required for your application to
function. Example 6-8 shows the creation of our required application Db2 IDs and groups.
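A minimal sketch of this kind of ID and group creation (the instance group name db2iadm1 is
an assumption; appdb, db2inst1, and db2fenc1 come from the surrounding text):
lnxone-db2 # groupadd appdb
lnxone-db2 # groupadd db2iadm1     # assumed instance owner group
lnxone-db2 # useradd -m -g db2iadm1 db2inst1
lnxone-db2 # useradd -m -g db2iadm1 db2fenc1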
After the Db2 product installation under the lnxone-db2 server, we created our application
required file systems and adjusted the required kernel parameters. Then, we created the Db2
instance. Example 6-9 shows the creation of the db2inst1 Enterprise Server Edition (ese)
instance that listens on TCP port 50000 and uses db2fenc1 as its fenced ID.
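A sketch of an equivalent instance creation (the Db2 installation path is an assumption based
on a typical Db2 11.5 installation; the instance name, port, and fenced ID come from the text
above):
lnxone-db2 # /opt/ibm/db2/V11.5/instance/db2icrt -s ese -p 50000 -u db2fenc1 db2inst1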
Example 6-11 shows how to use the db2look command to generate the required DDL
statements for replicating our source x86 database objects to our target LinuxONE system.
For more information about the db2look tool, see the IBM Db2 product documentation that is
available at IBM Knowledge Center.
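A hedged sketch of such a db2look invocation (the output file name is illustrative):
lnx-x86-db2 $ db2look -d SAMPLE -e -l -x -o sample_ddl.sql
Here, -e extracts the DDL statements, -l includes table spaces and buffer pools, -x generates
authorization (GRANT) statements, and -o names the output file.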
The file that results from the Db2 BACKUP command cannot be used to move data between
x86 and IBM LinuxONE operating systems because the two platforms differ in byte order
(endianness). Use the db2move utility to export the source x86 tables data, as shown in
Example 6-12. By default, the db2move utility generates its exported files in the current
working directory. Ensure that you switch to the wanted destination directory before running
this command because several files are created for every table in the database.
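A minimal sketch of the export step (the destination directory name is illustrative):
lnx-x86-db2 $ mkdir /db2_export && cd /db2_export
lnx-x86-db2 $ db2move SAMPLE EXPORT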
After all of the required Db2 data is exported, copy the results from the db2move and db2look
utilities from the source x86 system to the target LinuxONE server. We used the rsync tool to
complete this task.
Connected to our target lnxone-db2 system, first create the database and its objects by using
the DDL file that was generated by the db2look utility, as shown in Example 6-13.
Note: Review the output of the command that is shown in Example 6-13. Minor
adjustments to the generated DDL file might be necessary, especially when migrating
between different Db2 releases. In that case, DROP the database and retry the operation
after reviewing the SQL statements. Refer to the Db2 product documentation for more
information.
After the Db2 objects are migrated, load the data into the database by using the db2move
utility. Finally, SET INTEGRITY on any tables that might require it, as shown in Example 6-14.
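A sketch of these load and integrity steps, assuming the export directory from the sketch
earlier (the table name in the SET INTEGRITY statement is illustrative):
lnxone-db2 $ cd /db2_export
lnxone-db2 $ db2move SAMPLE LOAD
lnxone-db2 $ db2 "SET INTEGRITY FOR DB2INST1.DEPARTMENT IMMEDIATE CHECKED"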
We can now connect to the target database and query its data.
From the lnxone-db2 server, we created and migrated the SAMPLE database objects by
using the same file that was generated by using the db2look utility. Then, we ran the Db2
CATALOG command to define a connection to our source x86 database, as shown in
Example 6-15.
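A sketch of equivalent CATALOG commands (the node name X86NODE is an assumption;
the alias X86SAMP matches the cursor definition that follows):
lnxone-db2 $ db2 catalog tcpip node X86NODE remote lnx-x86-db2 server 50000
lnxone-db2 $ db2 catalog database SAMPLE as X86SAMP at node X86NODE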
We then recycled our Db2 instance by using the db2stop and db2start commands.
Example 6-16 shows the SQL statements that we created to load the required tables data
from our source database to our target LinuxONE server.
DECLARE cur CURSOR DATABASE X86SAMP USER db2inst1 USING password FOR SELECT * FROM
DB2INST1.DEPARTMENT WITH UR;
LOAD FROM cur OF CURSOR REPLACE INTO DB2INST1.DEPARTMENT NONRECOVERABLE;
We saved our SQL statements to a file named CURSOR.sql. Then, we issued a database
CONNECT to the target database. Finally, we ran our CURSOR.sql statements, as shown in
Example 6-17.
When the LOAD FROM CURSOR operation finishes, the database is ready for use.
In the source system, save the information of the queue manager that you want to move and
its authorities, as shown in Example 6-18.
Example 6-18 Dump source queue manager configuration and save its authorities
lnx-x86-mq $ dmpmqcfg -m QM1 -a > /tmp/QM1.mqsc
lnx-x86-mq $ amqoamd -m QM1 -s > /tmp/QM1AUT.mqsc
After dumping its attributes, quiesce the source queue manager and stop it, as shown in
Example 6-19. Do not proceed until the queue manager is fully down. You might need to
manually stop applications that might be connected to the queue manager.
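A minimal sketch of quiescing and verifying the queue manager:
lnx-x86-mq $ endmqm -w QM1
lnx-x86-mq $ dspmq -m QM1
The -w flag waits until the queue manager has fully ended; dspmq confirms that its status
shows as ended.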
Copy the generated files to the target LinuxONE system. Then, after completing the product
installation and its preliminary set-up for your environment, create and start the target queue
manager, as shown in Example 6-20.
Example 6-20 Create and start the target queue manager under IBM LinuxONE
lnxone-mq $ crtmqm -q QM1
IBM MQ queue manager created.
Directory '/mnt/mqm/data/qmgrs/QM1' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'QM1'.
Default objects statistics : 83 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
lnxone-mq $ strmqm QM1
Before restoring our queue manager object definitions, we created the necessary user IDs
and groups that are required for our application to function correctly. Review the QM1AUT.mqsc
file and determine which privileges might be required for your application to function. When in
doubt, consult with your application representatives.
Example 6-21 shows how we retrieved the list of required IDs and groups from our
QM1AUT.mqsc file. We then proceeded with the creation of the required accounts.
Example 6-21 Retrieve and create the necessary queue manager groups and IDs
lnxone-mq $ awk -F'-p' '{ print $2 }' QM1AUT.mqsc | awk '{ print "User: ",$1 }' |
sort -V | uniq
User: mqm
User: appuser
User: wassrvr
lnxone-mq $ awk -F'-g' '{ print $2 }' QM1AUT.mqsc | awk '{ print "Group: ",$1 }' |
sort -V | uniq
Group: appdb
Group: mqm
lnxone-mq $ groupadd appdb
lnxone-mq $ useradd -c 'WebSphere process ID' -g appdb wassrvr
lnxone-mq $ useradd -c 'Nodejs appuser' -g appdb appuser
Finally, we imported the source queue manager configuration and its authorities, as shown in
Example 6-22.
Example 6-22 Restore the queue manager objects and its authorities
lnxone-mq $ runmqsc QM1 < /mq_data/QM1.mqsc > /tmp/QM1.log
lnxone-mq $ tail -4 /tmp/QM1.log
:
*******************************************************************************
231 MQSC commands read.
No commands have a syntax error.
All valid MQSC commands were processed.
1 : refresh security(*)
AMQ8560I: IBM MQ security cache refreshed.
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
6.3.1 Moving the IBM MQ Web Server and the RESTful API component
IBM MQ introduced support for RESTful APIs in version 9.0.4. It allows users and
applications to send POST and DELETE HTTP requests to specific URLs to put messages to
and get messages from IBM MQ queues, respectively. REST APIs allow for much faster
integration of IBM MQ with other technologies, such as cloud and container-based solutions.
IBM frequently improves and updates the REST API functions in IBM MQ. For more
information about the features that were introduced since v9.0.4, see the IBM MQ product
documentation.
Our application depends on the REST API functions of IBM MQ to work correctly. To enable
the required functions, ensure that the MQSeriesWeb component is installed along with the
IBM MQ product, as shown in Example 6-24.
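On an RPM-based system, the check and installation can be sketched as follows (the
package file name and version are illustrative):
lnxone-mq # rpm -qa | grep MQSeriesWeb
lnxone-mq # rpm -ivh MQSeriesWeb-9.2.0-0.s390x.rpm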
From the source x86 system, copy the mqwebuser.xml file to the destination LinuxONE server.
This file is typically in the following folder:
<QM_PREFIX>/web/installations/installationName/servers/mqweb
If your solution requires receiving RestAPI calls from systems other than localhost, enable
remote connections to the mqweb server. Example 6-25 shows how to bind the mqweb
server to all available network interfaces.
Example 6-25 Enable the mqweb server on all available network interfaces
lnxone-mq $ setmqweb properties -k httpHost -v "*"
MQWB1100I: The 'setmqweb' command completed successfully.
Carry over all of the modified settings to the IBM LinuxONE server. Finally, start the mqweb
server component, as shown in Example 6-27.
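The start itself is a single documented command:
lnxone-mq $ strmqweb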
Analyze your web server console.log file for possible errors that might occur. Typical
problems include missing keystores and truststores for TLS connectivity. Example 6-28
shows how to create a self-signed certificate by using the runmqckm tool. After these
certificates are created, update the mqwebuser.xml file to point to the created keystores. See
the IBM MQ product documentation for more information about how to set up TLS
connectivity, such as the use of CA signed certificates.
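The keystore half of this setup can be sketched as follows, mirroring the runmqckm pattern of
the truststore commands below (the certificate label and distinguished name are illustrative):
# Keystore
runmqckm -keydb -create -db user.p12 -pw password -type pkcs12 -stash
runmqckm -cert -create -db user.p12 -pw password -label mqwebcert -dn "CN=lnxone-mq" -size 2048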
# Truststore
runmqckm -keydb -create -db trust.jks -pw password -type jks -stash
runmqckm -cert -import -db user.p12 -pw password -target trust.jks -target_pw
password -target_type jks
Figure 6-3 on page 143 shows our IBM MQ web server running after logging in with our
administrative account. We navigated to Manage → Queues → DEV.QUEUE.1 → Create to
PUT a message on that queue.
Figure 6-4 shows our queue’s current depth. Our message was successfully put as the mqm
user ID. Do not delete that message yet; it is used after we migrate our Node.js application,
which is described next.
Figure 6-4 DEV.QUEUE.1 current depth via the IBM MQ web server console
Red Hat Enterprise Linux 8 introduced the concept of modules, which group several
packages by function and compatibility. The use of modules is an elegant way to allow
different versions of software to be installed, depending on the user requirements. The
Node.js release that is enabled by default is v10.19. As described in 6.1.2, “Software products
and tools checklist” on page 129, our application requires Node.js v12.16.1. Example 6-29
shows how to enable and install the Node.js module at the release we require.
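A sketch of the module commands, assuming that the nodejs:12 stream provides the
required release:
lnxone-node # dnf module list nodejs
lnxone-node # dnf module enable nodejs:12
lnxone-node # dnf module install nodejs:12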
We then copied the necessary application files from our source x86 server to IBM LinuxONE.
Next, we created the user account that runs our program. For security reasons and unless
strictly necessary, always avoid running processes as the root user. Finally, we installed our
program dependencies and ran it, as shown in Example 6-30.
lnxone-node $ node -v
v12.16.1
Our application successfully used the message that we manually put as described in 6.3.1,
“Moving the IBM MQ Web Server and the RESTful API component” on page 141. We then
configured a cron job so that it can now run on its own, as shown in Example 6-31.
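A hedged sketch of such a cron entry (the script path and five-minute schedule are
illustrative):
lnxone-node $ crontab -l
*/5 * * * * /usr/bin/node /home/appuser/app.js >> /home/appuser/app.log 2>&1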
Various methods are available to migrate a WebSphere Application Server ND cluster to IBM
LinuxONE. As described in Chapter 4, “Migration process” on page 47, the migration strategy
that is used often depends on the application’s architecture complexity and the preference of
the key business stakeholders.
A common migration strategy is to install and configure manually (or with the help of
automated wsadmin’s JACL or Jython scripts) all aspects of a cluster topology. In this
scenario, the WebSphere Application Server cluster is re-created from scratch and all
required components, such as JDBC data sources and JMS resources, are reconfigured.
Finally, the application that uses these resources is redeployed in the new cluster and tested.
This approach is typically done for simple applications that do not require much effort to be
redeployed.
Another migration strategy involves the migration of the entire cell configuration directly from
the source x86 servers to the new IBM LinuxONE systems. After the cell configuration is
imported, the application is ready for use with minimal manual adjustments necessary. In this
section, we migrate our WebSphere Application Server Network Deployment cell
configuration by using this migration technique.
Our source WebSphere Application Server cell is composed of two servers in a horizontal
topology, as described in 6.1, “Environment setup” on page 128. The deployment manager
node is in xrhrbres1, and this server also holds one node part of our application’s cluster. The
second node member of our cluster is in xrhrbres2.
Example 6-32 Backup all WebSphere Application Server profiles before proceeding
xrhrbres1 $ <DMGR_ROOT>/bin/backupConfig.sh /tmp/DmgrBackupBefore.zip -nostop
ADMU0116I: Tool information is being logged in file
/opt/WebSphere/AppServer/profiles/Dmgr/logs/backupConfig.log
ADMU0128I: Starting tool with the Dmgr profile
ADMU5001I: Backing up config directory
/opt/WebSphere/AppServer/profiles/Dmgr/config to file
/tmp/DmgrBackupBefore.zip
(...) output suppressed (...)
ADMU5002I: 1,700 files successfully backed up
Because we are migrating our profiles to a different target architecture, we must ensure that
the WebSphere Application Server release is at the same level on the source and target
systems. Otherwise, we first must install the level that we are migrating to on the source x86
system. Because we are performing an “as-is” migration, the WebSphere Application Server
product is installed on our target LinuxONE servers at the same level as our source x86
systems.
Note: For migrating between different WebSphere Application Server releases, see the
WebSphere Application Server Network Deployment traditional documentation that is
available at IBM Knowledge Center.
The next step is to transfer the profile’s configuration from our source x86 systems to our
target LinuxONE server. Because we are keeping the same topology, we transferred the
deployment manager profile and the application server profile from xrhrbres1 to
lnxone-was-1. We also copied the application server profile from xrhrbres2 to lnxone-was-2.
The next step is to create the profiles that compose our cluster topology into our LinuxONE
servers. It is important to keep the same node and cell names as we had on the source x86
systems. Example 6-33 shows the Deployment Manager profile creation at lnxone-was-1.
Example 6-33 Create the target Deployment manager profile using the same cell and node names
lnxone-was-1 $ <WAS_ROOT>/bin/manageprofiles.sh -create -profileName Dmgr
-profilePath /opt/WebSphere/AppServer/profiles/Dmgr -templatePath
/opt/WebSphere/AppServer/profileTemplates/management -serverType
DEPLOYMENT_MANAGER -nodeName AppCellNode -cellName AppCell -hostName lnxone-was-1
-isDefault=false -enableAdminSecurity=false -disableWASDesktopIntegration
The restored configuration still contains old references from the x86 servers. It is important
that we update the internal deployment manager files to point to our new IBM LinuxONE
servers. Example 6-35 shows how we updated the serverindex.xml file to point to our IBM
LinuxONE topology. Three files must be updated: two for lnxone-was-1 and one for
lnxone-was-2.
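A hedged sketch of the kind of substitution involved for one of the three files (the profile path
and node name follow the conventions used earlier; repeat for the application server nodes,
mapping xrhrbres2 to lnxone-was-2):
lnxone-was-1 $ sed -i 's/xrhrbres1/lnxone-was-1/g' \
/opt/WebSphere/AppServer/profiles/Dmgr/config/cells/AppCell/nodes/AppCellNode/serverindex.xml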
The final modification that we must perform is to update the x86 architecture references to
the one that LinuxONE uses. Example 6-36 provides a one-liner that does the job for us and
shows the correct values to use.
Note: From this point on, it is recommended that you shut down your x86 cluster to avoid
problems. Do not proceed if you failed to perform any of the previous steps.
It is now time to start the deployment manager for the cell, as shown in Example 6-37.
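Starting the deployment manager is a single script invocation (a sketch, using the same
placeholder convention as the examples above):
lnxone-was-1 $ <DMGR_ROOT>/bin/startManager.sh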
After the deployment manager process is started, create the application server profiles,
restore their respective configurations, and synchronize them with the new deployment
manager. Example 6-38 shows how this process was done under lnxone-was-2. Remember
to use the same cell and node names as the source x86 system for the profiles that you
create.
Example 6-38 Application server configuration restore and synchronization with the new cell
lnxone-was-2 $ <WAS_INSTALL_ROOT>/bin/manageprofiles.sh -create -profileName
AppSrvr -profilePath /opt/WebSphere/AppServer/profiles/AppSrvr -templatePath
/opt/WebSphere/AppServer/profileTemplates/managed -nodeName AppNode2 -cellName
AppCell -hostName lnxone-was-2 -enableAdminSecurity=false -federateLater=true
-disableWASDesktopIntegration
INSTCONFSUCCESS: Success: Profile AppSrvr now exists. Please consult
/opt/WebSphere/AppServer/profiles/AppSrvr/logs/AboutThisProfile.txt for more
information about this profile.
/opt/WebSphere/AppServer/profiles/AppSrvr/config/cells/AppCell/nodes/AppNode2/serv
ers
ADMU2010I: Stopping all server processes for node AppNode2
ADMU5502I: The directory /opt/WebSphere/AppServer/profiles/AppSrvr/config
exists; renaming to
/opt/WebSphere/AppServer/profiles/AppSrvr/config.old
ADMU5504I: Restore location successfully renamed
You can now start your application server nodes, as shown in Example 6-39.
lnxone-was-2 $ <APP_PROFILE_ROOT>/bin/startNode.sh
(...) output suppressed (...)
The migration of WebSphere Application Server is now complete. You now can access the
administrative console and check the status of your cluster. The user name and password
credentials to authenticate typically are the same as in the source x86 system. Figure 6-5 on
page 150 shows our WebSphere Application Server topology that is fully synchronized with
our target IBM LinuxONE servers.
Before starting our application, we updated our data sources to reflect the new Db2 location.
Then, we tested the connection to ensure that everything was correctly set up. Figure 6-6
shows that the connectivity to our database running under IBM LinuxONE is successful.
We then released our application for testing. Figure 6-8 shows the application up and running
under IBM LinuxONE pulling data from the Db2 database that we migrated in 6.2, “Migrating
Db2 and its data” on page 134.
Example 6-40 One thousand simultaneous accesses to our WebSphere Application Server cluster
lnxone-test $ for i in {1..1000}; do curl https://was-lb-10/MyWebApp/Run
&>/dev/null & done
Then, we checked our IBM MQ queue depth by way of its web server console, as shown in
Figure 6-9.
Figure 6-9 IBM MQ queue depth shows that our cluster can hold over 1000 simultaneous requests
Finally, we ran our Node.js app and measured the time it took to process our messages,
as shown in Example 6-41.
real 0m25.216s
user 0m1.182s
sys 0m0.142s
Our entire workload was successfully migrated to and integrated with IBM LinuxONE. As
with other platforms, infrastructure components can be easily moved to LinuxONE, with the
benefit of much better throughput, availability, security, and response times.
Note: For more information about how to scale up your infrastructure with IBM LinuxONE,
see Scale up for Linux on LinuxONE, REDP-5540.
Every migration poses a large challenge for IT organizations because each stakeholder has
different expectations and requirements from the project. Most of the topics after migration
center around performance and functionality. IT organizations face the following difficult
questions:
What exactly has been done?
Is there anything missing?
Is everything working?
Is the performance as expected?
Is the process completed?
Did we get approvals?
To answer these questions, you must take some important steps before and after the
migration implementation phase.
Acceptance requires an understanding of the big picture before and after migration:
Before the implementation phase starts, complete these tasks:
– Decide and document test scope.
– Decide and document test case (including test scenario).
– Create post migration checklists for all components.
– Collect performance data about the system.
– Get acceptance from the stakeholders for testing.
After the implementation is done, complete these tasks:
– Use the post-migration checklists and check whether the implementation is complete.
– Test the system by using documented test cases (complete and document all test
scenarios).
– Measure performance and compare it with the previous performance data.
– If necessary, perform performance tuning.
Based on project scope and context, items that are used for acceptance testing can differ, but
the following list is the most common acceptance tests that are performed before gaining
stakeholder acceptance:
Application testing
In some cases, usability testing might be required.
Functional testing
Performance testing
Security testing
User acceptance testing
This section also covers monitoring commands and tools that can assist you in identifying and
resolving performance inhibitors.
The initial performance of a new system is often not as expected, especially when changing
hardware platforms. Therefore, tuning must be undertaken to improve the performance of the
target system. Without having proper metrics, it is impossible to validate the performance of
the new platform relative to the former platform. For this reason, the migration project team
first needs to agree on what performance metrics from the source platform will be used in the
migration project plan to measure the performance of the target platform.
Response time
Response time is the measure of the time it takes for something to happen in a computer
system. Generally, the response time of a unit of work called a transaction is measured. This
transaction can entail something as simple as checking an account balance, or something as
complex as the time taken to issue a new insurance policy or open a new bank account.
The point to remember with computer systems is that the response time of a single
transaction is the sum of a number of response times. Figure 7-1 shows the various
components that make up user response time.
Figure 7-1 shows that there are two points where response time can be measured: system
response time and user response time. When you are trying to understand the relative
performance improvement from a new system, the only practical point to measure is system
response time: the time from when a system receives the request until it provides a response
to that request.
To compare the source and target systems directly, measure system response time on the
source system, and assuming that the application has not changed greatly, measure the
system response time on the target platform.
Transaction throughput
The transaction throughput performance metric might provide a more meaningful measure of
system performance because it measures the number of transactions processed over a
period of time. This period is typically one second, but can be any time period that you prefer.
In both cases, you have baseline performance metrics for the source system to properly
compare the old and new systems.
Regardless of which tools you choose, the best methodology for analyzing the performance
of a system is to start from the outside and work down to the small tuning details in the
system. Gather data about overall health of the system hardware and processes. The
following list is a sampling of the types of questions to answer about both your source and
target systems:
How busy is the processor during the peak periods of each day?
What happens to I/O response times during those peaks?
Do peaks remain fairly consistent, or do they elongate?
Does the system get memory constrained every day, causing page waits?
Can current system resources provide user response times that meet service level
agreements?
It is important to know what tuning tools are available and what type of information they
provide. Equally important is knowing when to use those tools and what to look for. How can
you know what is normal for your environment and what is problematic unless you check the
system activity and resource utilization regularly? Conducting regular health checks on a
system also provides utilization and performance information that you can use for capacity
planning.
Tuning is not a one-size-fits-all approach. A system that is tuned for one type of workload
often performs poorly with another type of workload. This consideration means that you must
understand the workload that you want to run and be prepared to review your tuning efforts
when the workload changes.
A multi-step tuning process requires the skills of a systems performance detective. A systems
performance analyst identifies IT problems by using a detection process similar to that of
solving a crime. In IT systems performance, the crime is a performance bottleneck or sudden
degrading response time. The performance analyst asks questions, searches for clues,
researches sources and documents, formulates a hypothesis, tests that hypothesis by tuning
or other means, and eventually solves the mystery. This process results in improved system
performance. Bottleneck analysis and problem determination are facilitated by sophisticated
tools such as IBM Tivoli OMEGAMON® XE on z/VM and Linux. These tools detect
performance problems and alert a system administrator before degraded response time
becomes evident.
Change management, although not strictly related to performance tuning, is probably the
single most important factor for successful performance tuning. The following considerations
highlight this point:
Implement a proper change management process before tuning any system.
Never start tweaking settings on a production system.
Never change more than one variable at a time during the tuning process.
Retest parameters that supposedly improve performance; sometimes an apparent
improvement is only statistical variation.
Document successful parameters and share them with the community no matter how
trivial you think they are. System performance can benefit greatly from any results
obtained in various production environments.
Part 3 Deployment
This part of the book describes deploying workloads and various applications to assist you
during your deployment.
As described in 5.3, “Application analysis” on page 80, many workloads are a “good fit” on
LinuxONE. Not all can be demonstrated in this book. The migration of some practical
applications, such as IBM Db2, are shown as a hands-on exercise in Chapter 6, “Hands-on
migration” on page 127.
Mission critical applications, ERP, CRM, business intelligence, and more, are good to run on
LinuxONE, but only generic examples can be included in a guide such as this. Your specific
migration does not necessarily distill into a demonstration. Following the guides, the
checklists, and the information in this book, and using this chapter of examples, will lead you
to success.
Standard infrastructure applications are also well suited to the IBM LinuxONE, and these are
just as critical. In this chapter, the deployment of some standard services is demonstrated.
Such an illustration of deploying standard services should likewise represent a pattern that
can be followed.
Although the underlying hardware is ready for a highly scalable environment, advantages and
disadvantages exist that are specific to having the solution on a container or virtual machine
(VM). Containers can allow you to have many more applications in a single physical server
than a VM can. However, a business might need application deployments that are based on
VMs. All aspects of the enterprise application must be considered before deciding whether to
run it under containers or in a single VM.
The following are the deciding factors for determining whether the solution should be on
containers or VMs:
Application packaging: If you want to run multiple copies of a single app, say MongoDB,
use containers. However, if you want the flexibility of running multiple applications
(MongoDB with a Java based Homegrown Application), use a VM.
Dependencies: Usually, containers tend to lock in to a particular version of an operating
system and its subsystems and libraries. This feature can be an advantage for an
administrator, because with containers you can create a portable, consistent operating
environment including programs and libraries for development, testing, and deployment.
From a VM perspective, no matter what hypervisor you use, you can deploy any operating
environment. This feature is especially useful with in-house applications with specific
dependencies.
Resources: From a resource perspective, containers share an operating system, kernel
instance, network connection, and base file system. Each instance of the application runs
within a separate user space. This configuration significantly cuts back on the CPU usage
that is associated with running multiple operating systems because a new kernel is not
needed for each user session. This is one of the major reasons why containers are often
used for running specific applications.
Automation: Concerning speed to production, with the advent of the cloud and DevOps
mode of application development, containers have an advantage because each container
provides a microservice and can be part of a larger solution. This feature provides
containers with the advantage of scale over the VM.
Security: Without any alterations to the container, a VM is more secure than a container.
VMs have the advantage of featuring hardware isolation, whereas containers share kernel
resources and application libraries. This feature means that if a VM breaks down, it is less
likely to affect other VMs in the same operating environment. For now, regular containers
do not have hardware isolation. If your organization has high security requirements, stick
with VMs.
Most organizations run a mix of both containers and VMs in their clouds and data centers.
The economics of containers at scale make too much financial sense for anyone to ignore. At
the same time, VMs still have their virtues and use cases. LinuxONE provides best-in-class
features for running containers and VMs.
IBM provides a secure platform for containers on IBM LinuxONE III hardware, called IBM
Secure Service Containers. This technology allows mission-critical applications to be
securely built, deployed, and managed in hybrid multicloud environments. For more
information about IBM SSC, see IBM Knowledge Center.
Alongside the change, Red Hat developed tools, such as podman, skopeo, and buildah,
which all assist in configuring and maintaining a container workflow with minimum overhead.
These tools provide the following functions:
podman: Client tool to manage containers. Replaces most features that are provided by the
docker command, which focuses on individual containers or images.
skopeo: Tool to manage images by copying them to and from registries.
runc: Runtime client for running and working with Open Container Initiative (OCI) format.
buildah: Tool to manage OCI-compliant images.
For more information about OpenShift on IBM LinuxONE platform, see Red Hat OpenShift on
IBM Z Installation Guide, REDP-5605.
By running that command, Docker and its related tools are installed.
SUSE
The Docker packages on SUSE are available by way of the Container module. For more
information about how to set up that module, see your distribution manual. The installation
step is also simple, as shown in Example 8-2.
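A sketch of the installation step:
# zypper install docker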
Log off and log in again for the user privilege to take effect. You can then verify the Docker
commands, as shown in Example 8-5.
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
With its performance and virtualization capabilities, LinuxONE makes an ideal platform for
scaling out and scaling up MongoDB-based NoSQL workloads. This section looks at the
steps for deploying MongoDB (as a Docker container) onto LinuxONE.
Important: The Docker installation package available in the official Ubuntu 20.04
repository might not be the latest version. To get the latest version, install Docker from the
official Docker repository.
After Docker is configured, enable its service and run it on the host operating system.
Example 8-7 on page 168 shows verifying the Docker configuration.
Server:
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.13.8
Git commit: afacb8b7f0
Built: Thu Jun 18 08:26:54 2020
OS/Arch: linux/s390x
Experimental: false
containerd:
Version: 1.3.3-0ubuntu2
GitCommit:
runc:
Version: spec: 1.0.1-dev
GitCommit:
docker-init:
Version: 0.18.0
GitCommit:
lnxadmin@rdbk86ub:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
lnxadmin@rdbk86ub:~$
IBM has been working on containerizing important open source products and tools for its
various platforms, and on making them available on the Docker Hub public registry for download.
Docker Hub is a cloud-based registry service that allows you to link to code repositories, build
your images and test them, and store manually pushed images and links to Docker Cloud so
you can deploy images to your hosts. It provides a centralized resource for container image
discovery, distribution, and change management.
Run the docker search command to search for repositories specific to a platform in Docker
Hub, as shown in Example 8-8. The command returns the pre-built Docker images for
LinuxONE from the public registry.
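The search itself is a single command; a sketch for MongoDB, which is used in the following
sections:
lnxadmin@rdbk86ub:~$ docker search mongo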
Verify that the image was correctly registered with the local Docker registry and allocated a
local image ID, as shown in Example 8-10.
When the MongoDB container was built, the directories /data/configdb and /data/db were
used as mount points for external storage, and it exposes ports 27017 and 28017. This
technique allows connections from outside the container to access the mongodb container.
Example 8-11 shows the configuration.
The response to the successful instantiation would be a return of a hash that is the full ID of
the new running container.
For more information about the container, inspect the container by running the docker
inspect <container id> command.
This example uses the latter option. Therefore, start a Docker interactive shell into the Mongo
container and start a Mongo shell for creating a sample database (see Example 8-15).
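A hedged sketch of these interactive steps (the database and collection names are
illustrative):
lnxadmin@rdbk86ub:~$ docker exec -it mongodb mongo
> use sampledb
> db.samples.insertOne({ name: "test", value: 1 })
> show dbs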
This method provides a quick way to have a highly scalable environment to work on for any
solution that involves MongoDB containers that are deployed on LinuxONE.
Attention: This method might not be suited for all installations and environments,
depending on database size or other constraints. Consult the MongoDB manuals for more
options about migrating data.
Example 8-17 Compressing and transferring the dump to the target system
root@xrhrbres2:/data/db/backup# tar -jcf 20-10-23.tar.bz2 20-10-23
root@xrhrbres2:/data/db/backup# rsync -v 20-10-23.tar.bz2
lnxadmin@rdbk86ub.pbm.ihost.com:/data/
root@xrhrbres2:/data/db/backup#
We can now import the data into MongoDB. In this case, we must open a shell to the active
container and then run the mongorestore command to import the data, as in Example 8-19.
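A hedged sketch, assuming that the dump is extracted under the /data/db bind mount that is
shown earlier:
lnxadmin@rdbk86ub:~$ cd /data && tar -jxf 20-10-23.tar.bz2 -C db/
lnxadmin@rdbk86ub:~$ docker exec mongodb mongorestore /data/db/20-10-23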
In this example, the database that was created on a x86 server installation was moved to a
LinuxONE server that is running MongoDB as a container without any issues or special
processes.
The Linux environment on x86 is largely the same as it is on LinuxONE, with a few notable
exceptions. Configuration files on the x86 are in the same place on your Linux images on
LinuxONE, unless you deliberately choose to keep them in a different place. Hence, the
MariaDB configuration files, for example, typically only need to be copied from the x86 server
to the LinuxONE server and placed in the same location in the file system (/etc/my.cnf).
Migrating to LinuxONE should be tested first in a test environment before performing the
migration to the production environment.
For this example scenario, the Linux image is set up and a minimal Linux operating system
installed. The Linux guest is called rdbk86sl and is running SUSE Linux Enterprise Server 15
SP2, with one virtual CPU and 1 GB of virtual memory. It is assumed that an adequate
package management (RPM) repository for installation source is set up and available for the
installation of the application software.
Example 8-21 shows the helpful information that is displayed about LAMP by running the
following command:
zypper info -t pattern lamp_server
Note: The lamp_server pattern includes the Apache and MariaDB components, but is
missing the PHP component. That is because the “P” in “LAMP” here stands for “Perl,”
which is often used as the server-side dynamic web page engine.
Install the packages under that pattern by running the following command:
zypper install -t pattern lamp_server
The zypper command reports which packages are expected to be installed, then prompts for
confirmation to continue. Press y and Enter to install the packages.
Attention: The PHP packages are under SLES Web and Scripting Module. For more
information about activating this support module, see the distribution’s documentation.
The Apache and MariaDB configurations in this example scenario are simple, whereas your
configuration might be more complex. Migrating the Apache and MariaDB configurations can
be a more complex process. This example presumes that MediaWiki is the only application
configured for Apache and that no other data exists in the MariaDB database than what is
used by MediaWiki.
Confirm that the version of Apache is what is expected. A common method of displaying the
version is by running the apachectl -v command, which is available by default on the
distribution.
Example 8-22 shows the version of apache2 as displayed by running the apachectl -v
command in SUSE Linux Enterprise Server.
Historically, it was common for installed services to be started automatically when the
package was installed. Today, it is more common for the installer to ensure that software is
not started automatically. Therefore, it is necessary to start Apache manually, and to set it to
start automatically each time the system is started.
Apache services
Set the apache2 service to automatically start each time that the server is started and start the
service by running the commands that are shown in Example 8-23.
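On a systemd-based system, such as SUSE Linux Enterprise Server 15, the commands are
similar to the following sketch:
# Enable the service at boot, then start it now
rdbk86sl:~ # systemctl enable apache2
rdbk86sl:~ # systemctl start apache2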
After confirming the DocumentRoot of the Apache server, a one-line PHP script is created
that prints the standard PHP installation information. Using vi or another suitable text
editor, create a script file called phpinfo.php, as shown in Example 8-25, and place the script
file in the appropriate DocumentRoot directory.
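The script itself is the classic phpinfo() one-liner. A sketch, assuming the SUSE default
DocumentRoot of /srv/www/htdocs:
# /srv/www/htdocs is an assumption; use the DocumentRoot that you confirmed earlier
rdbk86sl:~ # echo '<?php phpinfo(); ?>' > /srv/www/htdocs/phpinfo.php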
The necessary modules to run the PHP script were installed as described in “Installing LAMP
on SUSE Linux Enterprise Server” on page 174 but are not enabled by default. To enable the
PHP 7 module under Apache2, run the commands as shown in Example 8-26.
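A sketch of those commands, assuming that the a2enmod helper and the PHP 7 Apache
module package are present:
# a2enmod flags the module for loading; the restart makes it effective
rdbk86sl:~ # a2enmod php7
rdbk86sl:~ # systemctl restart apache2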
With the PHP script file in the DocumentRoot directory and the module enabled, the PHP
script can be run by using a web browser. Connect to your web server, using the following
URL as an example:
http://9.12.7.90/phpinfo.php
With the admin password set for the root user, all future interactions with the MariaDB
database require providing a password. General administrative functions require the root
password, whereas commands that involve MediaWiki use a different password.
Do not use quotation marks unless you are certain that they are necessary. Copying a
string from somewhere and pasting it as the password can give unexpected results, and
might make the password difficult to reproduce later.
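For reference, the root password is typically set with the mysqladmin command; a sketch
with a hypothetical password:
# MyS3cret is a placeholder; choose your own memorable value
rdbk86sl:~ # mysqladmin -u root password MyS3cret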
Example 8-29 Output from the “show tables” mysql command after providing password
rdbk86sl:~ # mysql -u root -p -e "show tables" mysql
Enter password:
+---------------------------+
| Tables_in_mysql |
+---------------------------+
| column_stats |
| columns_priv |
| db |
| event |
| func |
| general_log |
| global_priv |
| gtid_slave_pos |
| help_category |
| help_keyword |
| help_relation |
| help_topic |
| index_stats |
| innodb_index_stats |
| innodb_table_stats |
| plugin |
| proc |
| procs_priv |
| proxies_priv |
| roles_mapping |
| servers |
| slow_log |
| table_stats |
| tables_priv |
| time_zone |
| time_zone_leap_second |
| time_zone_name |
| time_zone_transition |
| time_zone_transition_type |
| transaction_registry |
| user |
+---------------------------+
rdbk86sl:~ #
To correct this problem, run the mysqladmin command again as shown in Example 8-28 on
page 177, taking extra care to set the password to a value that you remember. If the original
password cannot be remembered or is otherwise lost, you must reinstall MariaDB.
With the preliminary Apache, MariaDB, and PHP configurations functioning properly on the
new LinuxONE server, the application can now be migrated from the x86 server.
A .sql file is created that includes the database's contents. This file must now be transferred
to the target system by using rsync.
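A sketch of the dump and transfer, with hypothetical host and database names (the
MediaWiki database is assumed to be named mediawiki):
# x86source stands in for the source server's host name
x86source:~ # mysqldump -u root -p mediawiki > mediawiki.sql
x86source:~ # rsync -v mediawiki.sql lnxadmin@rdbk86sl:/tmp/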
After the copy, verify the configuration files for Apache and MariaDB as needed.
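The import itself is typically a redirect of the dump file into the mysql client; a sketch under
the same assumptions:
# Create the empty database first, then load the dump into it
rdbk86sl:~ # mysql -u root -p -e "CREATE DATABASE mediawiki"
rdbk86sl:~ # mysql -u root -p mediawiki < /tmp/mediawiki.sql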
No output means that the command completed successfully. Now, we can run some SQL
commands to verify that the data was inserted into the database (see Example 8-34).
Note: Make sure that the application includes the correct password that is configured on
the source and target system (in case they differ).
Figure 8-3 MediaWiki running on the target LinuxONE server with migrated data
LDAP is widely used throughout the industry for directory services as an open standard
running over an IP network. Although several commercial LDAP products are available,
OpenLDAP is the implementation that is most commonly used in Linux. OpenLDAP is a fully
featured, open source suite of tools and applications, and it is readily available as a workload
on LinuxONE from all available distributions. LDAP is a perfect workload for LinuxONE
because of the central position of LinuxONE among many other systems and services, its
fast I/O, and its low CPU and memory usage. Migrating OpenLDAP to LinuxONE is
straightforward.
In 8.4, “Deploying MediaWiki and MariaDB” on page 173, we installed a LAMP server with
MediaWiki and used iSCSI external storage to facilitate the migration. In this example, the
LDAP database on an x86 server is exported, the database is transferred to a Linux guest
running on LinuxONE, and the data is imported into the LDAP service.
This example assumes that the Linux guest is set up and a minimal Linux operating system is
installed. The Linux guest is called rdbk86sl and is running SUSE Linux Enterprise Server 15
SP2, with four virtual CPUs and 8 GB of virtual memory. An OpenLDAP server typically does
not require a large amount of CPU or RAM when running on LinuxONE. It is presumed that
an adequate RPM repository installation source is already set up and available for the
installation of the application software.
The x86 server is called zs4p01-r1 and is running RHEL 7. For this example, this is the
current OpenLDAP server that provides directory services for the hypothetical organization.
This server has a rudimentary (small) LDAP directory already configured.
Although there is much to consider when setting up an enterprise directory service, a simple
OpenLDAP scenario is covered here. More extensive documentation is available at this
website.
If you are going to install OpenLDAP on SUSE Linux Enterprise Server, run the following
command to install the package:
zypper install openldap2
Note: OpenLDAP maintains its configuration using one of two different configuration
methods. The “old” method involves maintaining the primary configuration in
/etc/openldap/slapd.conf. This method is simple, but does not have as many features.
The “new” way (called the cn=config format) uses several configuration files below
/etc/openldap/slapd.d/. The default behavior with OpenLDAP 2.4 is to use the cn=config
method.
From a command prompt, start YaST, calling specifically the ldap-server module:
yast2 ldap-server
When all fields are completed, we can proceed and YaST installs and activates the
service. A confirmation message is displayed at the end. The use of a suitable SSL
certificate is recommended, but not necessary for this demonstration.
Note: Using SSL for LDAP (also known as Lightweight Directory Access Protocol over
Secure Sockets Layer [LDAPS]) is essential. Without LDAPS, passwords and other sensitive
data are exchanged with the LDAP server in plaintext, which makes the system vulnerable
and is not recommended for production.
In a production environment, the correct distinguished name (DN) data must be entered, but it
is adequate for this demonstration to use the sample values that are supplied by YaST. What
is most important here is providing an administrator password. This password should not be
the same as the system's root password, and all other preferred practices for creating an
administrative password should likewise be followed.
With all the configuration information sufficiently gathered, the YaST configuration steps can
be completed. The configuration files are written, and the slapd daemon is started. The
running daemon process is shown in Example 8-35.
Example 8-35 slapd daemon shown running using the cn=config method
rdbk86sl:~ # ps -ef | grep slapd
dirsrv 41259 1 2 13:36 ? 00:00:01 /usr/sbin/ns-slapd -D
/etc/dirsrv/slapd-itsorh -i /run/dirsrv/slapd-itsorh.pid
root 41871 36821 0 13:36 pts/0 00:00:00 grep --color=auto slapd
rdbk86sl:~ #
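The steps that precede step 4 (the export on the source server and the import on the target)
are typically performed with the slapcat and slapadd utilities while slapd is stopped; a sketch
with a hypothetical file name:
# directory-export.ldif is an illustrative name
zs4p01-r1:~ # slapcat -l directory-export.ldif
rdbk86sl:~ # slapadd -l directory-export.ldif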
4. After the database is successfully imported, OpenLDAP can be started again and is ready
to receive queries:
service slapd start
Example 8-38 Output from ldapsearch, showing user fred exists in the directory
rdbk86sl:~ # ldapsearch -xLLL -H ldapi:/// -b "dc=itso,dc=ibm,dc=com" uid=fred sn
givenName cn
dn: uid=fred,ou=employees,dc=itso,dc=ibm,dc=com
sn: frandsen
cn: fred
Example 8-39 Output from ldapsearch, querying the LDAP directory over the network
zs4p01-s1:~ # ldapsearch -xLLL -H ldap://9.12.7.90 \
> -b "dc=itso,dc=ibm,dc=com" \
> uid=fred sn givenName cn
dn: uid=fred,ou=employees,dc=itso,dc=ibm,dc=com
sn: frandsen
cn: fred
To create a centralized application log server, use the default log daemon from SUSE,
rsyslog (version 8.39). It is also the default logging server on RHEL 8 and Ubuntu 20.04.
Note: After systemd was introduced, system and service messages are logged by
systemd-journald and stored in its own format. To forward system-specific logs, see your
distribution manual for more information about configuring journald.
Global directives
You can specify several global options (called directives) in the statements of your rsyslog
configuration file. You can define how the host name of the client appears in the log files,
enable or disable DNS cache, use the ownership of the files, enable and configure modules,
and some other features that you use depending on the size or specific requirements of your
environment.
Selectors
A selector is a combination of a facility and a priority, separated by a period “.”. They can be
found on the man page of syslog(3). They can be represented by an asterisk “*”, meaning
all items under that category. Example 8-40 shows some of the default facilities and priorities.
Example 8-40 Sample facilities and priorities as seen on rsyslog.conf under SUSE 15
*.info;mail.none;authpriv.none;cron.none
authpriv.*
mail.*
Configuration modularity
The configuration model also supports modular configuration files. A statement in the main
rsyslog.conf includes all files under /etc/rsyslog.d/*.conf and reads them at service
startup. This eases the maintenance of custom configuration files throughout the
infrastructure.
For example, a configuration file can be created to forward all the server's logs to a central
logging server, and then that configuration file can be deployed on all systems. If an update
is needed, only that file must be updated, without touching the main configuration file.
Actions
A rule’s action field defines what is the destination of the messages. This destination is
usually a log file, but it can also be used to forward the message to a logging server by way of
the network.
Rules
A rule integrates a Selector and an Action, which defines how each facility is treated. How a
default configuration handles cron related messages is shown in the following example:
cron.* /var/log/cron
In this example, all cron message levels are forwarded to the /var/log/cron log file.
To enable the listener, define remote listen statements, as shown in Example 8-41.
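In rsyslog 8 syntax, a simple UDP listener is defined with statements similar to the following
sketch (the legacy equivalents are $ModLoad imudp and $UDPServerRun 514):
# Load the UDP input module and listen on the standard syslog port
module(load="imudp")
input(type="imudp" port="514")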
Restart the syslog service to load the rsyslog configuration file, as shown in Example 8-42.
To test the listener, run the lsof command, as shown in Example 8-43.
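On a systemd-based system, the restart and the verification are similar to the following
sketch:
# lsof -i :514 lists the process that holds the syslog port open
rdbk86sl:~ # systemctl restart rsyslog
rdbk86sl:~ # lsof -i :514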
You can also use TCP, but only the udp statement is needed to start a simple server, as shown
in Example 8-41 on page 190.
In this example, all logs from the client are also sent to 9.12.7.88 (rdbk86sl) through UDP
using port 514. This configuration is simple, but you can set up filters, new destinations, and
sources, depending on the requirement of your environment.
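The client-side forwarding rule is typically a single line in the rsyslog configuration; a sketch
(a single @ selects UDP, and @@ would select TCP):
# Forward everything to the central log server over UDP port 514
*.* @9.12.7.88:514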
Example 8-47 shows the results from the log server (rdbk86sl).
For alternative setups, see the man pages for the product.
If you are migrating an rsyslog server or client, check whether rsyslog is installed on the
target server. Also, ensure that the former configuration file is compatible with the version
that is available on the target distribution. To migrate the old data, you can use an LVM
snapshot to transfer the logical volume to the new server. Other commands, such as tar and
rsync, can also be used to transfer the old log files. For a practical example of LVM
snapshots, tar, and rsync, see Set up Linux on IBM System z for Production, SG24-8137.
Before you deploy Samba, ensure that appropriate analysis and planning are performed.
The checklists that are provided in this book help identify the many areas to consider to help
prevent problems during migration.
This example assumes that the z/VM guest is set up and a minimal Linux operating system is
installed. The Linux guest is named rdbk86sl, and includes SUSE Linux Enterprise Server 15
SP2 installed with one virtual CPU and 1 GB of virtual memory.
Like LDAP, a Samba server typically does not require a large amount of CPU or RAM to run
on LinuxONE. It is presumed that an adequate RPM repository installation source is set up
and available for the installation of the application software.
This example is a stand-alone server with a local, non-replicated directory service. Migrating
a Samba installation on x86 to LinuxONE should be straightforward.
After the installation is complete, see your distribution’s manual to create any firewall rules
necessary to serve a Samba instance.
After the initial installation step is completed, restart and enable the service, as shown in
Example 8-49.
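On SUSE Linux Enterprise Server, the Samba daemons are provided by the smb and nmb
services; a sketch of the systemd commands:
# Enable both services at boot, then (re)start them
rdbk86sl:~ # systemctl enable smb nmb
rdbk86sl:~ # systemctl restart smb nmb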
Example 8-51 Listing the transferred database archive on the target system
root@rdbk86ub:/localmigration# ls -l
total 102400
-rwxr-xr-x 1 root root 104857600 Oct 20 13:48 database_export.tar.gz
root@rdbk86ub:/localmigration#
Note: If you intend to use basic Linux authentication, that is, using the passwd file, you must
change the Samba user password by running the smbpasswd -a <userid> command.
For more information about how to set up LDAP on Samba, see this website.
Configuration files
You can manually set up configuration files for Samba. The main configuration file on SUSE
Linux Enterprise Server is stored in /etc/samba/smb.conf, and has two sections:
[ global ] for general settings
[ share ] to specify specific settings about sharing files and printers
For more information about Samba configuration, see your distribution’s documentation.
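A minimal sketch of such a file, with a hypothetical share at /srv/samba/share:
# Both the share name and the path are illustrative assumptions
[global]
        workgroup = WORKGROUP
        security = user
[share]
        path = /srv/samba/share
        read only = no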
Note: RHEL and Ubuntu use similar structures to SUSE Linux Enterprise Server 15 for
the Samba main configuration files.
Part 4 Appendix
This section contains Appendix A, “Additional use case scenarios” on page 197, which
provides more use cases.
This appendix provides more use case scenarios in which a telecommunications company, a
healthcare company, and an energy and utilities company all want to migrate from x86 to
LinuxONE. It describes the challenges that are inherent to each industry and their respective
migration scenarios.
Fictional Telco Company T1 uses the following build monitoring and system management
tools:
IBM Tivoli OMEGAMON on z/VM and Linux: Provides information about your Linux instances
running as z/VM guests, and reveals how the Linux workloads are performing and
affecting z/VM and each other:
– Compare Linux operations side by side with detailed performance metrics.
– Data collection from the Performance Toolkit for VM (PTK is a prerequisite)
complements data collection by the IBM Tivoli Monitoring for Linux for LinuxONE
agent.
– With new Dynamic Workspace Linking, you can easily navigate between Tivoli
Enterprise Portal workspaces.
– View and monitor workloads for virtual machines, groups, response times, and LPAR
reporting, as well as view reports about z/VM and Linux usage of resources such as
CPU utilization, storage, mini-disks, and TCP/IP.
– High-level views help executives understand how systems performance influences
business and the bottom line.
– With granular views, IT staffs can more easily track complex problems that span
multiple systems and platforms, and share related information.
IBM Wave for z/VM: IBM Wave is a virtualization management product for z/VM and Linux
virtual servers that uses visualization to dramatically automate and simplify administrative
and management tasks:
– Automate, simplify management, and monitor virtual servers and resources, all from a
single dashboard.
– Perform complex virtualization tasks in a fraction of the time compared to manual
execution.
– Provision virtual resources (servers, network, storage) to accelerate the transformation
to cloud infrastructure.
Figure A-1 shows the solution architecture overview for a cloud solution that uses LinuxONE.
The solution also reduces multi-platform development costs. Linux distributions on LinuxONE
provide a standards-based platform and allow Hospital H1 to use third-party libraries and
frameworks as easily as they did on x86.
Figure A-2 Access from a mobile device to a back-end transactional core system
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topic in this
document. Note that some publications that are referenced in this list might be available in
softcopy only:
Consolidation Planning Workbook: Practical Migration from x86 to IBM LinuxONE,
REDP-5433
Advanced Networking Concepts Applied Using Linux on IBM System z, SG24-7995
DB2 10 for Linux on System z Using z/VM v6.2, Single System Image Clusters and Live
Guest Relocation, SG24-8036
Experiences with Oracle 11gR2 on Linux on System z, SG24-8104
Experiences with Oracle Solutions on Linux for IBM System z, SG24-7634
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
IBM z Systems Connectivity Handbook, SG24-5444
IBM Wave for z/VM Installation, Implementation, and Exploitation, SG24-8192
IBM zEnterprise EC12 Technical Guide, SG24-8049
Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V7.6, SG24-7933
Introduction to Storage Area Networks, SG24-5470
Introduction to the New Mainframe: z/VM Basics, SG24-7316
An Introduction to z/VM Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8006
Linux on IBM eServer zSeries and S/390: Application Development, SG24-6807
Linux on IBM System z: Performance Measurement and Tuning, SG24-6926
Security for Linux on System z, SG24-7728
Security on z/VM, SG24-7471
Set up Linux on IBM System z for Production, SG24-8137
Using z/VM v 6.2 Single System Image (SSI) and Live Guest Relocation (LGR),
SG24-8039
The Virtualization Cookbook for IBM z Systems Volume 1: IBM z/VM 6.3, SG24-8147
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft, and additional materials at the following website:
ibm.com/redbooks
Back cover
SG24-8377-01
ISBN 0738459305
Printed in U.S.A.
ibm.com/redbooks