Migration To SysPlex
Frank Kyne
Jeff Belot
Grant Bigham
Alberto Camara Jr.
Michael Ferguson
Gavin Foster
Roger Lowe
Mirian Minomizaki Sato
Graeme Simpson
Valeria Sokal
Feroni Suhood
ibm.com/redbooks
SG24-6818-00
Take Note! Before using this information and the product it supports, be sure to read the general
information in Notices on page xi.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Why this book was produced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Starting and ending points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 BronzePlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 GoldPlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 PlatinumPlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Structure of each chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Are multiple subplexes supported or desirable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 Considerations checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Implementation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.4 Tools and documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Terminology and assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Plexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Duplicate data set names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.2 Duplicate volsers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.3 TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.4 System Logger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.5 Sysplex HFS sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.6 Legal implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Updates to this document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Chapter 2. Sysplex infrastructure considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 One or more XCF subplexes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 CDS considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.1 MAXSYSTEM CDS parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.2 General CDS guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Cross-System Coupling Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3.1 Considerations for merging XCFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Coupling Facility Resource Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.1 Considerations for merging CFRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 Sysplex Failure Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.1 Considerations for merging SFM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6 Automatic Restart Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6.1 Considerations for merging ARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7 Tools and documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Eserver
Redbooks (logo)
ibm.com
z/Architecture
z/OS
zSeries
Advanced Peer-to-Peer Networking
CICS
CICSPlex
DB2
DFSMSdfp
DFSMSdss
DFSMShsm
DFSMSrmm
DFSORT
Enterprise Storage Server
ESCON
FICON
Hiperbatch
IBM
IMS
IMS/ESA
Language Environment
MQSeries
MVS
MVS/ESA
MVS/SP
NetView
OS/2
OS/390
Parallel Sysplex
PR/SM
Redbooks
Requisite
RACF
RAMAC
RMF
S/390
Sysplex Timer
SOM
SOMobjects
Tivoli
VTAM
WebSphere
Preface
This IBM Redbook provides information to help Systems Programmers plan for merging
systems into a sysplex. zSeries systems are highly flexible systems capable of processing
many workloads. As a result, there are many things to consider when merging independent
systems into the more closely integrated environment of a sysplex. This book will help you
identify these issues in advance and thereby ensure a successful project.
Sue Hamner
IBM USA
Johnathan Harter
IBM USA
Evan Haruta
IBM USA
Axel Hirschfeld
IBM Germany
Gayle Huntling
IBM USA
Michael P Kasper
IBM USA
John Kinn
IBM USA
Paul M. Koniar
Metavante Corporation, USA
Matti Laakso
IBM Finland
Tony Langan
IBM Canada
Jim McCoy
IBM USA
Jeff Miller
IBM USA
Bruce Minton
CSC USA
Marcy Nechemias
IBM USA
Mark Noonan
IBM Australia
Bill Richardson
IBM USA
Alvaro Salla
Maffei Informtica, Brazil
Sim Schindel
IBM Turkey
Norbert Schlumberger
IBM Germany
William Schoen
IBM USA
Gregory Silliman
IBM USA
Nat Stephenson III
IBM USA
Dave Sudlik
IBM USA
Kenneth Trowell
IBM Australia
Susan Van Berkel
IBM Canada
Geert Van de Putte
IBM Belgium
Tom Wasik
IBM USA
Bob Watling
IBM UK
Gail Whistance
IBM USA
Mike Wood
IBM UK
Bob Wright
IBM USA
Dave Yackel
IBM USA
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online "Contact us" review redbook form found at:
ibm.com/redbooks
Chapter 1.
Introduction
This chapter discusses the reason for producing this book and introduces the structure used
in most of the subsequent chapters. It also describes some terms used throughout the book,
and some assumptions that were used in its production. Summary information is provided
about which system components must be merged as a set, and the sequence of those
merges.
Do you plan on implementing data sharing and workload balancing across all the systems
in the target sysplex, either now or at some time in the future? If so, you would need to
have all DASD volumes accessible to all systems.
Are there duplicate high level qualifiers on the user volumes? The easiest way to check
this is to do an IDCAMS LISTCAT ALIAS against the Master Catalogs of the incoming
system and the target sysplex, and check for duplicate HLQs that are not referring to the
same files. This should be obvious if the same alias in the two Master Catalogs points to
two different user catalogs. Obviously you will have some duplicates for system files, but
you must check for user data. If there are duplicates, this will make it more complex to
share all volumes, especially if there are many duplicates. This is discussed in more detail
in 10.4, Sharing Master Catalogs on page 150. (A sample LISTCAT job is shown following this list.)
If you are going to share all the volumes between all systems, realize that this means that
you are probably also going to need a single security environment, single JESplex, single
HSM, and single SMS. A single Master Catalog is desirable, but not a necessity.
Having access to all DASD from all systems gives you the capability to potentially restart
any work on any system. If there is a planned outage, the applications from the affected
system can be moved and run on one of the other systems. If you are not exploiting data
sharing, this is an attractive way of minimizing the impact of planned (or unplanned)
outages. If you want this capability, all DASD must be accessible from all systems.
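To check for duplicate high level qualifiers as suggested above, you can list the aliases in each Master Catalog and compare the two listings. The following is a minimal sketch of such a job; the catalog names shown are assumptions and must be replaced with the actual Master Catalog names of the incoming system and of the target sysplex:

//LISTALI  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* ALIASES IN THE INCOMING SYSTEM MASTER CATALOG (ASSUMED NAME) */
  LISTCAT ALIAS CATALOG(CATALOG.MASTER.INCOMING)
  /* ALIASES IN THE TARGET SYSPLEX MASTER CATALOG (ASSUMED NAME)  */
  LISTCAT ALIAS CATALOG(CATALOG.MASTER.TARGET)
/*

Any alias that appears in both listings but relates to a different user catalog is a candidate duplicate HLQ and warrants further investigation.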
An additional important consideration that we have to point out is that the original concept of a
sysplex is a group of systems with similar characteristics, similar service level objectives, and
similar workloads, sharing data between the systems and doing dynamic workload balancing
across all systems. While this is the ideal Parallel Sysplex implementation which will allow a
customer to derive maximum benefit from the technology, we realize that many customers
may not, for business or technical reasons, be able to implement it. Therefore, asymmetric
configurations where some of the workloads on each system are not shared are certainly
supported. However, if you find that you have a Parallel Sysplex in which the majority of
systems and/or workloads are completely disjoint (that is, they have nothing in common), then you should give additional consideration as to whether those systems should
really reside in the same sysplex.
Specifically, sysplex was not designed to support subplexes, that is, subsets of the sysplex
that have nothing in common with the other members of the sysplex except that they share
the sysplex couple data sets and Sysplex Timer. For example, the design point is not to
have development and production systems in the same sysplex. While some products do
support the idea of subplexes (DFSMShsm, for example), others do not (TCP/IP, for
example).
While Parallel Sysplex provides the capability to deliver higher application availability than any
other solution, the closer relationship between the systems in a sysplex means that it is
possible for a problem on one system to have an impact on other members of the sysplex.
Such problems, while rare, are more likely to arise if the sysplex consists of widely disparate
systems (for example, test and production). For the highest levels of availability, therefore, we
recommend against mixing very different types of systems that are completely unrelated in
the same sysplex.
On the other hand, the pricing mechanisms implemented through Parallel Sysplex License
Charging (PSLC) and Workload License Charging (WLC) do unfortunately provide a financial
incentive to place as many systems as possible in the same sysplex.
At the end of the day, if you are in the position of deciding whether to merge a completely
unrelated system into a sysplex, you need to balance the financial and technical (if any)
benefits of the merge against the possible impact such a move could have on the availability
of all the systems in the sysplex.
Similarly, if the merge would result in a very large number of systems in the sysplex (and
those systems are not all related to each other), you need to consider the impact to your
business if a sysplex outage were to take out all those systems. While the software savings
could be significant (and are easily calculated), the cost of the loss of all systems could also
be significant (although more difficult to calculate, unfortunately).
Once you have decided how the user DASD are going to be handled, and have satisfied
yourself with the availability and financial aspects, you are in a position to start planning for
the merge. This document is designed to help you through the process of merging each of the
affected components. However, because there are so many potential end points (ranging
from sharing nothing to sharing everything), we had to make some assumptions about the
objective for doing the merge, and how things will be configured when the exercise is
complete. We describe these assumptions in the following sections.
1.2.1 BronzePlex
Some customers will want to move systems that are completely unrelated into a sysplex
simply to get the benefits of PSLC or WLC charging. In this case, there will be practically no
sharing between the incoming system and the other systems in the target sysplex. This is a
typical outsourcing configuration, where the sysplex consists of systems from different
customers, and there is no sharing of anything (except the minimal sharing required to be part
of a sysplex) between the systems. We have used the term BronzePlex to describe this type
of sysplex.
1.2.2 GoldPlex
Other customers may wish to move the incoming system into the sysplex, and do some fairly
basic sharing, such as sharing of sysres volumes, for example. In this case, the final
configuration might consist of more than one JES2 MAS, and two logical DASD pools, each of
which is only accessed by a subset of the systems in the sysplex. We have included in this
configuration all of the components that can easily be merged, and do not require a number of
other components to be merged at the same time. This configuration provides more benefits,
in terms of improved system management, than the BronzePlex, so we have called this
configuration a GoldPlex.
1.2.3 PlatinumPlex
The third configuration we have considered is where the objective is to maximize the benefits
of sysplex. In this case, after the merge is complete, the incoming system will be sharing
everything with all the other members of the sysplex. So there will be a single shared sysres
between all the systems, a single JES MAS, a single security environment, a single
automation focal point, and basically just one of everything in the sysplex. This configuration
provides the maximum in systems management benefits and efficiency, so we have called it
the PlatinumPlex.
Depending on which type of plex you want to achieve and which products you use, some or all
of the chapters in this book will apply to you. Table 1-1 contains a list of the topics addressed
in the chapters and indicates which ones apply to each plex type. However, we recommend
that even if your objective is a BronzePlex you should still read all the chapters that address
products in your configuration, to be sure you have not overlooked anything that could impact
your particular environment.
(Table 1-1. Columns: BronzePlex, GoldPlex, PlatinumPlex. Topics: System Logger, WLM, GRS, Language Environment, SMF, JES2, Shared HFS, Parmlib/Proclib, VTAM, TCP, RACF, SMS, HSM, RMM, OPC, Automated operations, Physical considerations, Operations, and Maintenance.)
In addition, in each chapter there is a table of considerations specific to that topic. One of the columns in that table is headed TYPE, and the meanings of the symbols in that column are as follows: B indicates the consideration applies to a BronzePlex, G that it applies to a GoldPlex, and P that it applies to a PlatinumPlex. Some considerations apply across all target environments, and some only apply to a subset of environments.
We also thought it would be helpful to identify up front a suggested sequence for merging the
various components. For example, for some components, most of the merge work can and
should be done in advance of the day the system is actually moved into the sysplex. The
merge of other components must happen immediately prior to the system joining the sysplex.
And still others can be merged at some time after the system joins the sysplex. In Table 1-2
on page 6, we list each of the components addressed in this book, and indicate when the
merge work can take place. This table assumes that your target environment is a
PlatinumPlex. If your target is one of the other environments, refer to the specific chapter for
more information.
Table 1-2 Timing of merge activities by component
(Columns: Component, Sequence. Components: System Logger, WLM, GRS, Language Environment, SMF, JES2, Shared HFS, VTAM, TCP, RACF, SMS, HSM, RMM, OPC, Automated operations, Physical considerations, Software Licensing, Operations, and Maintenance. Each component's Sequence value is Before, Immediately preceding, or After, with Before being by far the most common.)
In Table 1-2 on page 6, the entries in the Sequence column have the following meanings:
Before: The bulk of the work required to merge these components can and should be carried out in the weeks and months preceding the day the incoming system is brought up as a member of the target sysplex.
Immediately preceding: At least some of the work to merge this component must be done in the short period between when the incoming system is shut down and when it is IPLed as a member of the target sysplex.
After: The work required to complete the merge for this component can actually be postponed until after the incoming system is IPLed into the target sysplex.
1.4.1 Plexes
In this document we frequently talk about the set of systems that share a particular resource.
In order to avoid repetitive use of long-winded terms like all the systems in the sysplex that
share a single Master Catalog, we have coined terms like CATALOGplex to describe this set
of systems. The following are some of these terms we use later in this document:
CATALOGplex
DASDplex
The set of systems in a sysplex that all share the same DASD
volumes.
ECSplex
The set of systems that are using Enhanced Catalog Sharing (ECS) to
improve performance for a set of shared catalogs. All the systems
sharing a given catalog with ECS must be in the same GRSplex.
GRSplex
The set of systems that are in a single GRS complex and serialize a
set of shared resources using either a GRS Ring or using the GRS
Star structure in the Coupling Facility (CF).
JESplex
The set of systems, either JES2 or JES3, that share a single spool. In
JES2 terms, this would be a single MAS. In JES3 terms, this would be
a Global/Local complex.
HFSplex
HMCplex
HSMplex
OAMplex
The set of OAM instances that are all in the same XCF group. The
scope of the OAMplex must be the same as the DB2 data sharing
group that contains the information about the OAM objects used by
those instances.
OPCplex
The set of systems whose batch work is managed from a single OPC
Controller. The OPCplex may consist of systems in more than one
sysplex, and may even include non-MVS systems.
RACFplex
RMFplex
The set of systems in a sysplex that are running RMF and are using
the RMF sysplex data server. All such RMF address spaces in a
sysplex will connect to the same XCF group and therefore be in the
same RMFplex.
RMMplex
SMSplex
TCPplex
VTAMplex
WLMplex
Table 1-3 summarizes how many of each of these plexes are possible in a single sysplex.
Note that there are often relationships between various plexes - for example, if you have a
single SMSplex, you would normally also have a single RACFplex. These relationships are
discussed in more detail in the corresponding chapters of this book.
Table 1-3 Number of subplexes per sysplex
(For each plex, the table indicates whether only one is possible per sysplex or whether more than one can coexist. The plexes listed are: CATALOGplex, DASDplex, ECSplex, GRSplex, JESplex, HFSplex (if using sysplex HFS sharing), HMCplex, HSMplex, OAMplex, OPCplex, RACFplex, RMFplex, RMMplex, SMSplex, TCPplex, VTAMplex, and WLMplex.)
There are also other plexes, which are not covered in this book, such as BatchPipesPlex,
CICSplex, DB2plex, IMSplex, MQplex, TapePlex, VSAM/RLSplex, and so on. However, we
felt that data sharing has been adequately covered in other books, so we did not include
CICS, DB2, IMS, MQSeries, or VSAM/RLS in this book.
1.5 Gotchas
While adding completely new systems to an existing sysplex is a relatively trivial task, adding
an existing system (complete with all its workloads and customization) to an existing sysplex
can be quite complex. And you do not have to be aiming for a PlatinumPlex for this to be the
case. In some ways, trying to establish a BronzePlex can be even more complex, depending
on how your systems are set up prior to the merge and how stringent the requirements are to
keep them apart.
In this section, we review some situations that can make the merge to a BronzePlex or
GoldPlex configuration difficult or maybe even impossible. We felt it was important to highlight
these situations up front, so that if any of them apply to you, you can investigate these
particular aspects in more detail before you make a final decision about how to proceed.
(Figure: GRS false contention example. A single GRSplex contains two subplexes: SUBPLEXA with systems SYSA and SYSB, and SUBPLEXB with system FRED. Each subplex has its own data set named BIG.PAY.LOAD, and a job containing COMPRESS EXEC PGM=IEBCOPY with INOUT DD DSN=BIG.PAY.LOAD runs against one of them.)
In this case, systems SYSA and SYSB are in one subplex, SUBPLEXA (these might be the
original target sysplex systems). System FRED is in another subplex, SUBPLEXB (this might
be what was the incoming system). There are two data sets called BIG.PAY.LOAD, one that is
used by system FRED, and another that is shared between systems SYSA and SYSB.
If the SYSA version of the data set is being updated by a job on SYSA, and someone on
system FRED tries to update the FRED version of the data set, GRS will make the FRED
update wait because it thinks someone on SYSA is already updating the same data set.
This is called false contention, and while it can potentially happen on any resource that is
serialized by GRS, it is far more likely to happen if you have many duplicate data set names,
or even if you have a small number of duplicate names, but those data sets are used very
frequently.
To identify if this is likely to cause a problem in your installation, the first thing to do is to size
the magnitude of the problem. A good place to start is by looking for duplicate aliases in the
Master Catalogs. If you do not have a clean Master Catalog (that is, the Master Catalog
contains many data sets other than those contained on the sysres volumes), then you should
also check for duplicate data set entries in the Master Catalogs. If these checks indicate that
duplicates may exist, further investigation is required. You might try running DCOLLECT
against all the volumes in each subplex and all the DFSMShsm Migration Control Data Sets
in each subplex, then use something like ICETOOL to display any duplicate records. While
this is not ideal (it only reflects the situation at the instant the job is run, and it does not read
information from the catalogs, such as for tape data sets), at least it will give you a starting
place. For more information about the use of DCOLLECT, refer to z/OS DFSMS Access
Methods Services for Catalogs, SC26-7394.
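As a hedged starting point for the DCOLLECT approach described above, the following sketch collects data set and DFSMShsm migration records for one subplex; the output data set name and attributes, the volsers, and the space values are illustrative assumptions. A similar job would be run for the other subplex, and the two output files compared (for example, with ICETOOL) for duplicate data set names:

//DCOL     EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DCOUT    DD DSN=MERGE.DCOLLECT.SUBPLEXA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50)),
//            DCB=(RECFM=VB,LRECL=1020,BLKSIZE=27998)
//SYSIN    DD *
  /* COLLECT ACTIVE DATA SET AND HSM MIGRATION RECORDS */
  DCOLLECT OFILE(DCOUT) -
           VOLUMES(VOL001 VOL002 VOL003) -
           MIGRATEDATA
/*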
1.5.3 TCP/IP
There have been many enhancements to TCP/IP in an OS/390 and z/OS environment to
improve both performance and availability. Many of these enhancements depend on a TCP/IP
capability known as Dynamic XCF.
In addition to enabling these new enhancements, however, Dynamic XCF also automatically
connects all TCP/IP stacks that indicate that they wish to use this feature. If you do not wish
the TCP/IP stack on the incoming system to connect to the TCP/IP stacks on the target
sysplex systems, then you cannot use Dynamic XCF on the incoming system. If the incoming
system is actually just a single system, then this may not be a significant problem, however if
the incoming system actually consists of more than one system, and you would like to use
the TCP/IP sysplex enhancements between those systems, then this may be an issue.
If this is a potential concern to you, refer to Chapter 12, TCP/IP considerations on page 175
before proceeding.
The reason for avoiding multiple subplexes being connected to a single Logger structure is
that it is possible for any system that is connected to a Logger structure to do offload
processing for a logstream in that structure. Therefore, the DASD volumes that will contain
the offload data sets for those logstreams must also be shared by all systems in the sysplex.
And because the offload data sets must be cataloged, the user catalog or catalogs that will
contain the data set entries must also be shared.
An additional consideration is how you manage the offload data sets. You cannot use DFSMShsm to migrate them unless you have a single HSMplex, and you would normally only
have a single HSMplex in a PlatinumPlex. The reason for this is that if DFSMShsm in
SUBPLEXA migrates the data set, the catalog will be updated to change the volser of the
migrated data set to MIGRAT. If a system in SUBPLEXB needs to access data in the migrated
offload data set, it will see the catalog entry indicating that the data set has been migrated,
call DFSMShsm on that system to recall it, and the recall will fail because the data set was
migrated by a DFSMShsm in a different HSMplex. If you wish to use DFSMShsm to manage
the Logger offload data sets, all systems that are connected to a Logger structure must be in
the same HSMplex.
Furthermore, if you currently place your offload data sets on SMS-managed volumes (as
many customers do), you will have to discontinue this practice if all the systems in the new
enlarged sysplex will not be in the same SMSplex.
Therefore, if you are considering a BronzePlex or a GoldPlex configuration, you must give
some thought to how your Logger structures will be used (especially for OPERLOG and
LOGREC), and how the offloaded Logger data will be managed.
Chapter 2.
Sysplex infrastructure
considerations
This chapter discusses the following aspects of moving a system into an existing sysplex:
Is it necessary to merge sysplex infrastructure (that is, XCF) environments and definitions,
or is it possible to have more than one XCFplex in a single sysplex?
A checklist of things to consider when merging a system into a sysplex.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Use logical volume backup processing so that the backups are done on a data set
level, rather than on a track-image level. For more information, see z/OS
DFSMSdss Storage Administration Guide, SC35-0423.
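As a minimal sketch of such a logical backup (the volser, output data set name, and TOL(ENQF) option are assumptions that need to be adapted to your installation's standards), a DFSMSdss job along the following lines dumps the volume at the data set level rather than as a track image:

//DSSBKUP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DASD1    DD UNIT=3390,VOL=SER=CDSVL1,DISP=SHR
//TAPE     DD DSN=BACKUP.CDSVL1.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(**)) LOGINDDNAME(DASD1) -
       OUTDDNAME(TAPE) TOL(ENQF)
/*

Because DATASET is specified rather than a full-volume dump, DFSMSdss processes the volume logically, which is what allows individual CDSs to be restored at the data set level.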
Security considerations
It is the responsibility of the installation to provide the security environment for the CDSs.
Consider setting up a security profile specifically for the CDSs, and do not give any TSO
users access to the profile. This protects against accidental deletion of CDSs.
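A minimal sketch of such a profile, assuming generic DATASET profiles are active and that the CDSs follow a SYS1.XCF.** naming convention (both assumptions that must be adapted to your own naming and security standards):

  ADDSD 'SYS1.XCF.**' UACC(NONE)
  SETROPTS GENERIC(DATASET) REFRESH

Access would then be PERMITted only to the small number of IDs that genuinely need to manage the CDSs, and not to general TSO users.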
Authorized programs can join XCF groups and use XCF facilities to communicate with other
group members. In addition to providing the communication facilities, XCF also informs
members of the group when a member joins or leaves the group. XCF provides a relatively
simple and high performance mechanism for programs to communicate with other programs
resident in the same sysplex. As a result, there are many users of XCF, both IBM and
non-IBM products.
A program on any system in the sysplex can communicate with another program on any other
member of the sysplex as long as they are both connected to the same XCF group. Some
products provide the ability to specify the name of the XCF group they will use, or provide the
ability to control whether they connect to the group or not. Other products do not provide this
level of control. This is an important consideration when determining whether your target
sysplex will be a BronzePlex, a GoldPlex, or a PlatinumPlex: if you wish to maintain
maximum separation between the incoming system and the other systems in the target
sysplex, how products use XCF may have a bearing on that decision.
If your target environment is a BronzePlex, the incoming system must share the sysplex
CDSs and take part in XCF signalling with the systems in the target sysplex. In fact, even if
you share absolutely nothing else, to be a member of the sysplex, the incoming system must
share the sysplex CDSs.
If your target environment is a GoldPlex, the incoming system will share the sysplex CDSs
and take part in XCF signalling with the systems in the target sysplex.
Finally, if your target environment is a PlatinumPlex, the incoming system will again share the
sysplex CDSs and take part in XCF signalling with the systems in the target sysplex.
(Table 2-1. Columns: Consideration, Note, Type, Done. The Type for every entry in this table is B, G, P, meaning each consideration applies to a BronzePlex, a GoldPlex, and a PlatinumPlex target.)
The Type specified in Table 2-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 2-1 are described below:
1. If you need to increase the MAXSYSTEM value in the sysplex CDS, you will need to
allocate a new CDS using the IXCL1DSU program. This is discussed in 2.2, CDS
considerations on page 16.
In addition to the MAXSYSTEM value, there are other relevant attributes of the sysplex
CDSs that need to be considered:
ITEM NAME(GROUP) NUMBER( ) Specifies that the sysplex CDS supports group
data. The NUMBER value includes not only the number of XCF groups needed for
multisystem applications, but also the number of XCF groups that z/OS system
components need. To determine the number of z/OS system groups currently in use,
issue the DISPLAY XCF,G command on both the incoming system and the target
sysplex. It is advisable to add a contingent number of groups for future growth.
(Default=50, Minimum=10, Maximum=2045).
ITEM NAME(MEMBER) NUMBER( ) Specifies that the sysplex CDS supports member
data. The NUMBER value specifies the largest number of members allowed in a single
group. Determine which group has the largest number of members and specify a value
somewhat larger than this amount, allowing for the fact that bringing the incoming
system into the target sysplex will probably increase the number of members in most, if
not all, XCF groups. To verify the number of members in your groups at the moment,
issue the D XCF,G command on both the incoming system and the target sysplex, as
shown in Figure 2-2 on page 20. The number contained in brackets after the group
name is the number of members in that group. In this example, the largest number of
members in a group is 8, for the SYSMCS and SYSMCS2 groups. In a normal
production environment, the groups with the largest numbers of members are usually
those associated with CICS, JES3, and CONSOLE. (Minimum=8, Maximum=1023). To
find out who is connected to a given group, use the D XCF,GROUP,groupname,ALL
command.
D XCF,G
IXC331I 23.35.26
GROUPS(SIZE):
COFVLFNO(3)
D#$#(3)
EZBTCPCS(3)
INGXSGA0(2)
ISTXCF(3)
IXCLO00D(3)
SYSBPX(3)
SYSGRS(3)
SYSIGW00(4)
SYSIGW03(3)
SYSJES(3)
SYSRMF(3)
XCFJES2A(3)
CSQGMQ$G(3)
DR$#IRLM(3)
IDAVQUI0(3)
IRRXCF00(3)
ITSOIMS(1)
IXCLO001(3)
SYSDAE(7)
SYSGRS2(1)
SYSIGW01(4)
SYSIKJBC(3)
SYSMCS(8)
SYSTTRC(3)
XCFJES2K(1)
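The GROUP and MEMBER values described above are specified when a new sysplex CDS is allocated with the IXCL1DSU format utility mentioned in note 1. The following is a minimal sketch of such a job; the sysplex name, data set name, volser, MAXSYSTEM, and NUMBER values are illustrative assumptions and must be replaced with values appropriate to your target sysplex:

//FMTCDS   EXEC PGM=IXCL1DSU
//STEPLIB  DD DSN=SYS1.MIGLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINEDS SYSPLEX(PLEX1)
    DSN(SYS1.XCF.CDS03) VOLSER(XCFVL1)
    MAXSYSTEM(8)
    CATALOG
    DATA TYPE(SYSPLEX)
      ITEM NAME(GROUP) NUMBER(100)
      ITEM NAME(MEMBER) NUMBER(120)
/*

The newly formatted CDS is then introduced as an alternate with the SETXCF COUPLE,ACOUPLE command and made primary with SETXCF COUPLE,PSWITCH.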
2. If your target environment is a BronzePlex, you will not be sharing the Master Catalog, so
you must ensure that the CDSs are in the Master Catalog of each system. Remember that
when the incoming system joins the target sysplex, it should use the same COUPLExx
Parmlib member as the systems in the target sysplex, or at least use a member that is
identical to the COUPLExx member used by those systems. If your target environment is a
GoldPlex or a PlatinumPlex, you will be sharing SYS1.PARMLIB, so all systems should
share the same COUPLExx member.
3. Prior to the merge, you should review your XCF performance in the target sysplex to
ensure there are no hidden problems that could be exacerbated by adding another system
to the sysplex. You should use WSC Flash 10011, Parallel Sysplex Performance: XCF
Performance Considerations, to help identify the fields in RMF reports to check. If any
problems are identified, they should be addressed prior to the merge.
After the merge, you should again use the XCF Activity Report to check the XCF
performance in the target sysplex. Once again, if any problems are identified, they should
be addressed now. In addition to the WSC Flash, you can use Chapter 6, Tuning a
Sysplex, in z/OS MVS Setting Up a Sysplex, SA22-7625 to help with tuning XCF.
4. A transport class is z/OS's way of enabling you to associate XCF messages (based on
similar signaling requirements) and then assign them signaling resources (signaling paths
and message buffers). A transport class allows you to segregate message traffic
according to the XCF group a message is related to, or according to the length of the
message, or both. In general, we recommend using transport classes for one of the
following reasons:
To assign signalling resources to groups of messages based on the message sizes;
this ensures the most efficient use of the signalling resources.
To get information about the number of messages and the size of the messages
associated with a given XCF group. We recommend that once you have obtained the
required information, you remove the transport class and group the associated
messages in with all other messages based solely on their size.
For instance, you can define a transport class that optimizes signaling resources for XCF
messages of a certain size. Messages larger than this size can still be processed,
however, there may be an overhead in processing those messages. While it is possible to
assign messages to a transport class based on their XCF group, we recommend using the
message size rather than the XCF group as the mechanism to assign messages to a
given transport class. The following are examples of how you would define two transport
classes and assign all messages to one or the other based purely on message size
(specifying GROUP(UNDESIG) indicates that messages from all XCF groups are
candidates for selection for this message class):
CLASSDEF CLASS(DEFSMALL) CLASSLEN(956) GROUP(UNDESIG)
CLASSDEF CLASS(DEFAULT) CLASSLEN(16316) GROUP(UNDESIG)
Transport classes are defined independently for each system in the sysplex using the
CLASS keyword of the CLASSDEF statement or, after IPL, using the SETXCF
START,CLASSDEF command.
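As an illustration, the DEFSMALL class shown above could also be defined dynamically on a running system (using the same illustrative values) with:

  SETXCF START,CLASSDEF,CLASS=DEFSMALL,CLASSLEN=956,GROUP=UNDESIG

Remember that the command affects only the system on which it is issued, and the definition is lost at the next IPL unless the COUPLExx member is updated as well.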
Example 2-2 Syntax for CLASSDEF Statement of COUPLExx Parmlib Member
CLASSDEF CLASS(class-name)
         [CLASSLEN(class-length)]
         [GROUP(group-name[,group-name]...)]
         [MAXMSG(max-messages)]
On a system, each transport class must have a unique name. The class name is used in
system commands and shown in display output and reports.
By explicitly assigning an XCF group to a transport class, you give the group priority
access to the signaling resources (signaling paths and message buffer space) of the
transport class. All groups assigned to a transport class have equal access to the
signaling resources of that class.
You should check to see if any transport classes are defined in either the incoming system
or in the target sysplex. If there are, first check to see if they are still required; as a
general rule, we recommend against having many transport classes unless absolutely
necessary. If you determine that the transport classes are still necessary, you should then
assess the impact of the incoming system, and make sure you add the appropriate
definitions to the COUPLExx member that will be used by that system after it joins the
sysplex.
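To see what is currently defined, you can issue the following command on both the incoming system and one of the target sysplex systems and compare the output:

  D XCF,CLASSDEF

The display shows each transport class together with values such as its class length, message buffer space, and the XCF groups assigned to it.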
5. Table 2-2 contains a list of XCF groups with the owner of each. The corresponding notes
indicate whether the group name is fixed or not, and if not, where the group name is
specified. If your target environment is a BronzePlex and you want to maintain strict
segregation of the workloads, you should pay special attention to any exploiters that have
fixed group names.
Table 2-2 XCF groups in the sysplex
Note
APPC, ASCH
SYSATBxx
CICS MRO
DFHIR000
CICS VR
DWWCVRCM
Console services
SYSMCS, SYSMCS2
DAE
SYSDAE
DB2
SYSIGW00, SYSIGW01
ENF
SYSENF
SYSGRS, SYSGRS2
HSM
ARCxxxxxx
IMS
ESCM
IOS
SYSIOSxx
IRLM
JES3 complex
MQSeries
CSQGxxxx
xxxxxxxx
OMVS
SYSBPX
RACF
IRRXCF00
RMF
SYSRMF
RRS
ATRRRS
INGXSGxx
Tape Sharing
SYSIEFTS
TCP/IP
EZBTCPCS
aa
Trace
SYSTTRC
ab
TSO Broadcast
SYSIKJBC
ac
Note
VLF
COFVLFNO
ad
VSAM/RLS
IDAVQUI0, IGWXSGIS,
SYSIGW01, SYSIGW02,
SYSIGW03
ae
VTAM, TCP/IP
ISTXCF, ISTCFS01
af
WLM
SYSWLM
ag
XES
IXCLOxxx
ah
j. When GRS is operating in Ring mode, the SYSGRS group is used to communicate the
RSA, acknowledgement signals, GRS join requests, and ECA requests. In this mode,
there is no SYSGRS2 group.
When GRS is operating in Star mode, the SYSGRS group is used to communicate
GQSCAN requests and ECA requests. In this mode, the SYSGRS2 group only
contains a list of the members of the GRSplex; it is not used for signalling.
k. HSM Secondary Host Promotion uses an XCF group to notify the members of the HSM
XCF group should one of the members fail. The group name is ARCxxxxx, with the
xxxxx value being specified via the HSM SETSYS PLEXNAME command. The default
group name is ARCPLEX0. For more information, refer to DFSMShsm Implementation
and Customization Guide, SC35-0418.
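For example, using an assumed suffix, coding the following SETSYS command in the ARCCMDxx member:

  SETSYS PLEXNAME(PLEXA)

would result in an HSMplex, and an associated XCF group name, of ARCPLEXA.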
l. IMS OTMA uses XCF to communicate between IMS and the OTMA clients. The name of the XCF group used for this communication is installation-dependent, and is specified on the GRNAME parameter in the IMS startup JCL.
IMS also uses an XCF group to allow the Fast Database Recovery (FDBR) region to
monitor the IMS regions it is responsible for. The group name is installation-dependent,
and is specified on the GROUPNAME parameter in the IMS startup JCL. If not
specified, it defaults to FDRimsid, where imsid is the IMSID of the IMS region that is
being tracked.
If using IMS Shared Message Queues, IMS also has an XCF group that is used by the
IMS regions in the Shared Queues group. The name of the group is CSQxxxxx, where
xxxxx is a one-to-five character value that is specified on the SQGROUP parameter in
the DFSSQxxx Proclib member.
m. The I/O Operations component of System Automation for OS/390 (previously
ESCON Manager) attaches to a group with a fixed name of ESCM. All commands
passed between I/O Ops on different systems are actually passed via VTAM (because
I/O Ops supports systems spread over multiple sysplexes), however within a sysplex,
I/O Ops uses the ESCM group to determine the VTAM name of the I/O Ops component
on each system.
n. The I/O Supervisor component of MVS, via the IOSAS address space, connects to an
XCF group with a name of SYSIOSxx. The xx suffix is determined dynamically when
the system is IPLed. There is one SYSIOSxx group per LPAR cluster, so if you had
three LPAR clusters in the sysplex, there would be three SYSIOSxx groups. These
groups are used by DCM to coordinate configuration changes across the systems in
the LPAR cluster.
o. IRLM uses an XCF group to communicate between the IRLM subsystems in the data
sharing group. The name of the IRLM XCF group is specified on the IRLMGRP
parameter in the IRLM started task JCL. This can be any 8-character name you wish.
p. JES2 uses two XCF groups.
One group is used for communicating between the members of the JES2 MAS. The
default name for this group is the local node name defined on the NAME parameter of
the local NODE(nnnn) initialization statement. We recommend that the default name
be used, unless it conflicts with an existing XCF group name. Alternately, the XCF
group name used by JES can be specified on the XCFGRPNM parameter of the
MASDEF macro.
The other group, with a fixed name of SYSJES, is used for collecting and sharing
diagnostic information about JES/XCF interactions. It is also used in conjunction with
TSO Generic Resource support.
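If you do need to override the default, a minimal sketch of the initialization statement (the group name shown is an assumption) is:

  MASDEF XCFGRPNM=JESXCF01

The name must be a valid XCF group name and must not clash with any group name already in use in the target sysplex.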
more than one System Automation for OS/390 instance in a single MVS system, each
instance must connect to a unique XCF group.
y. The automatic tape sharing feature of z/OS (in z/OS 1.2 and later) uses an XCF group
with a fixed name of SYSIEFTS to pass information about tape drive allocations
between all the members of the sysplex. You cannot control the name of the group, nor
can you stop any system from connecting to the group.
z. All the TCP stacks in the sysplex join an XCF group called EZBTCPCS. This group is
used by the stacks to exchange all the information they need to share for all the sysplex
functions like Dynamic XCF, Dynamic VIPAs, Sysplex Distributor, and the like. The
name of this group is fixed, so all the TCPs in the sysplex will potentially be able to
communicate with each other using this group. This is discussed further in Chapter 12,
TCP/IP considerations on page 175.
aa.The Tivoli Workload Scheduler for z/OS connects to one, or optionally, two XCF
groups. One of the groups is used to communicate between the controller and trackers,
and can be used to submit workload and control information to the trackers connected
to the same group. The trackers use XCF services to transmit events back to the
controller. The same group is used to give you the ability to define a hot-standby
controller that takes over when XCF notifies it that the previous controller has failed.
In addition, if you are using the data store feature, you need a separate XCF group.
The XCF Group names are specified on the XCFOPTS and DSTGROUP initialization
statements. More information about these features can be found in Tivoli Workload
Scheduler for z/OS V8R1 Installation Guide, SH19-4543.
ab.The MVS transaction trace facility uses an XCF group with a fixed name of SYSTTRC.
Transaction trace provides a consolidated trace of key events for the execution path of
application- or transaction-type work units running in a multi-system application
environment. TRACE TT commands issued on any system in the sysplex affect all
systems. The same filter sets are made active throughout the sysplex as a
consequence of TRACE TT commands, and the systems using those filter sets are
displayed in the output from the DISPLAY TRACE,TT command. The TRACE address
space in every system in the sysplex automatically connects to the SYSTTRC XCF
group, and the XCF group is used to communicate filter set information to all the
systems in the sysplex. However, no trace data is sent over the XCF group, so it is
unlikely that there are any security concerns due to the fact that all systems are
connected to the same group.
ac. The SYSIKJBC XCF group, which has a fixed name, is used to send information to all
the systems in the sysplex about whether the SYS1.BRODCAST data set is shared or
not, and also to notify systems whenever there is an update to that data set. No actual
data, such as the text of messages, for example, is passed through the group.
ad.The MVS Virtual Lookaside Facility (VLF) is used, together with LLA, to improve the
performance of loading frequently used load modules and PDF members. During
initialization, VLF on each system connects to an XCF group called COFVLFNO. This
group is used by VLF to notify VLF on other systems when a member in a
VLF-managed PDS is changed, thus ensuring that the other systems do not use
in-storage, but downlevel versions of the member.
ae.VSAM/RLS uses a number of XCF groups, all of which have names that are
determined by the system and cannot be controlled by the user. The IDAVQUI0 group
is used for RLS Quiesce Functions. The SYSIGW02 group is used for RLS Lock
Manager Functions. The SYSIGW03 and IGWXSGIS groups are used for RLS Sharing
Control Functions, and the SYSIGW01 group is for a common lock manager service
which is used by both RLS and PDSE. These XCF groups are used to send messages
around the sysplex in order to allow all of the SMSVSAM address spaces to
communicate with each other. The groups will be connected to by any system where
VSAM/RLS is started. Note that you can only have one VSAM/RLS data sharing group
per sysplex, because the lock structure and the XCF group names are all fixed.
af. VTAM joins an XCF group called ISTXCF to communicate with other VTAMs in the
sysplex. There is no way to change the name of the group that VTAM joins, so if you
have two independent groups of VTAMs in a single sysplex (for example, a production
network and a test network), they will automatically communicate. The ISTXCF group
is also used for TCP/IP session traffic between stacks in the same sysplex.
VTAM also joins a second XCF group called ISTCFS01. ISTCFS01 is used to
determine when another VTAM's address space terminates so the VTAM CF structures
can be cleaned up. This structure is also used by MultiNode Persistent Session
support to send messages for planned takeovers. Even if XCFINIT=NO is specified,
VTAM will still join the ISTCFS01 group.
More information about the interaction between VTAM and XCF is provided in
Chapter 11, VTAM considerations on page 167.
ag.Every WLM in the sysplex joins an XCF group called SYSWLM. This group is used by
all the WLMs to exchange information about service classes, service class periods,
goal attainment, and so on. The name is fixed, and every WLM automatically connects
to it, even when WLM is running in compatibility mode. This means that every WLM in
the sysplex has information about what is happening on every other system in the
sysplex. This is discussed further in Chapter 4, WLM considerations on page 51.
ah.XES creates one XCF group for each serialized list or lock structure that it connects to.
The name of each group is IXCLOxxx, where xxx is a printable hex number. There is
one member per structure connection.
6. It is possible to use both CF structures and CTCs for XCF signalling. If using CTCs, you
must keep in mind that the formula for the number of the CTC definitions is n x (n-1),
where n is the number of systems in the sysplex. For example, a sysplex composed of 12
systems has to define 132 CTCs (12 x 11).
Another issue that should be considered is performance. Generally speaking, the
performance of CF structures for XCF signalling is equivalent to that of CTCs, especially
for XCF message rates below 1000 messages per second. If most XCF messages are
large, CF structures may provide a performance benefit over CTCs. On the other hand, if
most XCF messages are small, CTCs may provide better performance.
Given that the performance for most environments is roughly equivalent, the use of CF
structures for XCF signalling has one significant advantage: CF structures are far easier
to manage than CTCs. To add another image to the sysplex, all you have to do is use the
existing COUPLExx member on the new member, and possibly increase the size of the
XCF CF structures. This will take a couple of minutes, compared to hours of work
arranging the hardware cabling, updating HCD definitions, and setting up and updating
COUPLExx members to add the new CTC definitions. If you decide to use CF structures
instead of CTCs, just remember that you should have two CFs, and you should never
place all your XCF structures in the same CF.
The most complex scenario, from the point of view of merging the incoming system, is if the
incoming system is attached to a CF and has some structures in that CF. In reality, it is
unlikely that a standalone system would be attached to a CF, but we will cater for that situation
in this section.
If your target environment is a BronzePlex, the incoming system's use of the CF is likely to be
limited to sharing the XCF structures, IEFAUTOS for tape sharing (for systems prior to z/OS
1.2), and the GRS Star structure. You may also wish to use the OPERLOG and
SYSTEM_LOGREC facilities, in which case you should refer to 1.5.4, System Logger on
page 12. In addition, if the incoming system is using any CF structures prior to the merge, you
will probably set up those in the CF, with only the incoming system using those structures.
If your target environment is a GoldPlex, the incoming system will probably share at least the
XCF and GRS structures. It may also share other structures like the Enhanced Catalog
Sharing structure, OPERLOG and LOGREC, JES2 checkpoint, and so on. Once again, if you
are considering using OPERLOG and/or LOGREC, you should refer to 1.5.4, System
Logger on page 12. In addition, if the incoming system is using any CF structures prior to the
merge, you will probably set up those in the CFin this case, it may or may not share those
structures with the other systems in the target sysplex.
If your target environment is a PlatinumPlex, the incoming system will probably share all the
structures in the CF with the other systems in the target sysplex. In addition, if the incoming
system is using any CF structures prior to the merge, you will probably set up those in the
CF; in this case, it may or may not share those structures with the other systems in the target
sysplex.
(Table 2-3. Columns: Consideration, Note, Type, Done. The Type for every entry is B, G, P. The considerations include:
Ensure all systems in the sysplex have at least two links to all attached CFs.
Check that the CFs have enough storage, and enough white space, to handle the additional structures as well as existing structures that will increase in size.
Check that the performance of the CFs is acceptable and that the CFs have sufficient spare capacity to handle the increased load.
Check the size of all structures that will be used by the incoming system.)
The Type specified in Table 2-3 on page 28 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 2-3 on page 28 are described below:
1. If the incoming system resides in an LPAR on a CPC that already contains other members
of the target sysplex, it is likely that you will use MIF (Multiple Image Facility, formerly
known as EMIF) to share the existing CF links between the incoming system and those
other members. In this case, you should check the utilization of those links in advance of
the merge. You use the PTH BUSY field in the RMF CF Subchannel Activity Report as an
indicator of CF link utilization; the percentage of PTH BUSYs should not exceed 10%.
After the merge, you should review the RMF reports again to check contention for CF
paths. If the percentage of PTH BUSYs exceeds 10%, you should consider dedicating the
CF links to each image or adding additional CF links.
2. Prior to the merge, check the performance of all the CFs in the target sysplex. We
recommend that average utilization of CFs should not exceed 50%. This caters for the
situation where you have to failover all your structures into a single CF, and you need to
maintain acceptable performance in that situation. You want to ensure that not only is the
current performance acceptable, but also that there is spare capacity sufficient to handle
the estimated increase in utilization after the merge. For more information, refer to OS/390
Parallel Sysplex Configuration Volume 2: Cookbook, SG24-5638.
After the merge, you should once again check the performance of all CFs to ensure the
performance is still acceptable.
3. If the incoming system is using a CF prior to the merge, you must ensure that the CFs in
the target sysplex have at least the same CF Level as those in use by the incoming system
today. The CF Level dictates the functions available in the CF; for example, MQSeries
shared queues require CF Level 9. More information about CF Levels, including which
levels are supported on various CPC types is available at:
http://www.ibm.com/servers/eserver/zseries/pso/cftable.html
4. If you need to increase the MAXSYSTEM value in the CFRM CDS, you will need to
allocate a new CDS using the IXCL1DSU program. This is discussed in 2.2, CDS
considerations on page 16.
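If you do need to format a new CFRM CDS with a larger MAXSYSTEM value, a job along the
following lines could be used. This is only a sketch; the sysplex name, data set name,
volume, and item counts shown are examples and must be replaced with values appropriate
to your installation:
   //FMTCFRM  EXEC PGM=IXCL1DSU
   //SYSPRINT DD SYSOUT=*
   //* Format a new CFRM CDS specifying the increased MAXSYSTEM value
   //SYSIN    DD *
     DEFINEDS SYSPLEX(PLEX1)
        DSN(SYS1.XCF.CFRM03) VOLSER(CDS001)
        MAXSYSTEM(8)
        CATALOG
        DATA TYPE(CFRM)
           ITEM NAME(POLICY) NUMBER(5)
           ITEM NAME(CF) NUMBER(8)
           ITEM NAME(STR) NUMBER(200)
           ITEM NAME(CONNECT) NUMBER(32)
           ITEM NAME(SMREBLD) NUMBER(1)
           ITEM NAME(SMDUPLEX) NUMBER(1)
   /*
Remember to format an alternate (and ideally a spare) CDS in the same way, using a
different data set name.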
5. In order to provide optimal performance and availability, it is important that all CF
structures have appropriate sizes. Some structures are sensitive to the number of
connected systems, or the MAXSYSTEM value as specified in the sysplex CDS.
Therefore, you should review the structure sizes in any of the following situations:
A change in the number of connected systems
A change in the MAXSYSTEM value in the sysplex CDS
A significant change in workload
A change in the CF Level: as additional functions are delivered, structure sizes tend to
increase, so any time you change the CF Level you should review the structure sizes.
The only way to get an accurate size for a structure is to use the CFSizer tool available at:
http://www.ibm.com/servers/eserver/zseries/cfsizer/
Note: The PR/SM Planning Guide can no longer be used to calculate structure sizes.
The formulas in that document have not been updated since CF Level 7, and will not give
correct values for any CF Level higher than that. Similarly, the recommended structure
values in the IBM Redbook OS/390 Parallel Sysplex Configuration Volume 2: Cookbook,
SG24-5638, should be ignored, as they are based on old CF Levels.
6. If the incoming system is already using a CF before the merge, make sure the functions
being used in that environment are also available in the target sysplex. The functions are
usually specified in the CFRM CDS, and may have co-requisite CF Level requirements.
You should list the options specified in the CFRM CDS in both the incoming system and the target
sysplex. Compare the options that have been specified (System Managed Rebuild,
System Managed Duplexing, and so on) and make sure the target sysplex will provide the
required level of functionality. The sample job in Example 2-3 shows how the options are
listed, and shows the available options.
Example 2-3 List of CFRM Couple Data Set
//ITSO01A  JOB (ITSO01),'ITSO-LSK051-R',
//         CLASS=A,NOTIFY=&SYSUID
//LOGDEFN  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
/*
The report includes the options that the CFRM CDS was formatted with, for example:
DATA TYPE(CFRM)
  ITEM NAME(POLICY) NUMBER(5)
  ITEM NAME(STR) NUMBER(200)
  ITEM NAME(CF) NUMBER(8)
  ITEM NAME(CONNECT) NUMBER(32)
  ITEM NAME(SMREBLD) NUMBER(1)
  ITEM NAME(SMDUPLEX) NUMBER(1)
...
7. If the incoming system is using structures that are not being used in the target sysplex,
those structures (for example, the structure for a DB2 in the incoming system) must be
added to the CFRM policy in the target sysplex.
Other structures that may be in use in the incoming system, like the XCF structures, will
not be moved over to the target sysplex because they will be replaced with shared ones
that are already defined in the target sysplex. In that case, you must update any
references to those structure names in the incoming system.
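To illustrate, adding a structure that only the incoming system uses today (a hypothetical
DB2 lock structure in this sketch) to the target sysplex CFRM policy might look like the
following. The policy name, structure name, sizes, and CF names are examples only, and the
existing CF and STRUCTURE definitions of the target sysplex must be repeated, because the
DEFINE POLICY statement replaces the entire policy:
   //CFRMPOL  EXEC PGM=IXCMIAPU
   //SYSPRINT DD SYSOUT=*
   //* Add the incoming system's DB2 lock structure to the existing
   //* CFRM policy definitions (not shown) for the target sysplex
   //SYSIN    DD *
     DATA TYPE(CFRM) REPORT(YES)
     DEFINE POLICY NAME(CFRMPOL1) REPLACE(YES)
       STRUCTURE NAME(DSNDB1P_LOCK1)
         INITSIZE(16384)
         SIZE(32768)
         PREFLIST(CF01,CF02)
         REBUILDPERCENT(1)
   /*
The updated policy is then activated with the SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL1
command.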
8. For structures that existed in both the incoming system and the target sysplex, like the
XCF structures, make sure that the definition of those structures in the target sysplex is
compatible with the definitions in the incoming system. For example, if the incoming
system uses Auto Alter for a given structure and the target sysplex does not, you need to
investigate and make a conscious decision about which option you will use.
9. Some structures have fixed names that you cannot control. Therefore, every system in the
sysplex that wants to use that function must connect to the same structure instance. At the
time of writing, the structures with fixed names are:
OPERLOG
LOGREC
GRS Lock structure, ISGLOCK
If your target environment is a BronzePlex, you need to decide how you want to use SFM. In a
sysplex environment, it is vital that dead systems are partitioned out of the sysplex as quickly
as possible. On the other hand, if a subset of the systems in the sysplex are running work that
is completely independent of the other systems in the sysplex, you may prefer to have more
control over what happens in the case of a failure. You may decide to specify CONNFAIL NO
in this situation, which will cause the operator to be prompted to decide which system should
be removed from the sysplex.
Figure 2-3 shows a typical configuration, where systems SYSA and SYSB were in the original
target sysplex, and SYSC is the incoming system which is now a member of the sysplex. If
the link between systems SYSB and SYSC fails, the sysplex can continue processing with
SYSA and SYSB, or SYSA and SYSC. If the applications in SYSA and SYSB support data
sharing and workload balancing, you may decide to remove SYSB as the applications from
that system can continue to run on SYSA. On the other hand, if SYSA and SYSB contain
production work, and SYSC only contains development work, you may decide to remove
SYSC.
Figure 2-3  Sysplex comprising systems SYSA, SYSB, and SYSC
You can see that the decision about how to handle this scenario depends on your specific
configuration and availability requirements. For each system, you then need to carefully
evaluate the relative weight of that system within the sysplex, and whether you wish SFM to
automatically partition it out of the sysplex (by specifying ISOLATETIME), or you wish to
control that process manually (by specifying PROMPT). If you can assign a set of weights that
accurately reflect the importance of the work in the sysplex, we recommend that you use
ISOLATETIME.
If your target environment is a GoldPlex, your SFM decision will be based on the level of
workload sharing across the systems in the sysplex. Once again, you want to partition dead
systems out of the sysplex as quickly as possible. If the work is spread across all the systems
in the sysplex, we recommend that you specify CONNFAIL YES, to quickly remove any
system that has lost XCF signalling connectivity, and allow the other systems to continue
processing. For each system, you then need to carefully assign an appropriate value for
WEIGHT and specify ISOLATETIME to automatically partition that system out of the sysplex
in a timely manner.
Finally, if your target environment is a PlatinumPlex, which implies full workload balancing,
your decision is actually easier. You should specify CONNFAIL YES, assign appropriate
WEIGHT values to each LPAR, and specify ISOLATETIME.
Table 2-4 summarizes the SFM considerations for the merge; each applies to a BronzePlex, a
GoldPlex, and a PlatinumPlex.
The Type specified in Table 2-4 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 2-4 are described below:
1. In the SFM policy, you can have a default definition (by specifying SYSTEM NAME(*)), and
the values specified on that definition will be applied to any system that is not explicitly
defined. However, we recommend that there is an explicit definition for each system in the
sysplex. Depending on your target environment, you should update the SFM policy with an
entry for the incoming system, specifying the WEIGHT and ISOLATETIME values. For
more information on using and setting up SFM, refer to z/OS MVS Setting Up a Sysplex,
SA22-7625.
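As an illustration, an SFM policy that provides a sysplex-wide default and an explicit entry
for the incoming system might look like the following sketch; the policy name, system name
(SYSC), and the WEIGHT and ISOLATETIME values are examples only and must reflect your own
environment and availability requirements:
   //SFMPOL   EXEC PGM=IXCMIAPU
   //SYSPRINT DD SYSOUT=*
   //* Default entry for all systems, plus an explicit entry for the
   //* incoming system SYSC
   //SYSIN    DD *
     DATA TYPE(SFM) REPORT(YES)
     DEFINE POLICY NAME(SFMPOL01) CONNFAIL(YES) REPLACE(YES)
       SYSTEM NAME(*)
         ISOLATETIME(0)
       SYSTEM NAME(SYSC)
         WEIGHT(25)
         ISOLATETIME(0)
   /*
The policy is then activated with the SETXCF START,POLICY,TYPE=SFM,POLNAME=SFMPOL01 command.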
2. Depending on your target environment (BronzePlex, GoldPlex, or PlatinumPlex), you may
wish to change the setting of your CONNFAIL parameter.
3. SFM supports the movement of processor storage from one named LPAR to another
named LPAR in case of a failure. Depending on your target environment (BronzePlex,
GoldPlex, or PlatinumPlex), you may wish to change the action that SFM takes when a
system fails.
In the ARM policy, you identify, among other things, the ARM element name of the job or STC
that you want ARM to restart, and which systems it can be restarted on.
If your target environment is a BronzePlex, we assume that the incoming system will not be in
the same MAS as the other systems in the sysplex. In this case, the work from any of the
existing systems in the target sysplex cannot be restarted on the incoming system, and the
jobs from the incoming system cannot be restarted on the other systems in the target
sysplex.
When building your ARM policy, you can simply merge the two existing ARM policies, because ARM will
only attempt to restart an element on a system in the same JES2 MAS/JES3 Complex as the
failed system. You can continue to specify RESTART_GROUP(*) to specify defaults for all
restart groups in the sysplex or TARGET_SYSTEM(*) for the restart group associated with
the element(s) from the incoming system as long as these statements name systems that are
in the same JES2 MAS/JES3 Complex as the failing system. If you use this statement and a
system fails, and the ARM policy indicates that the element should be restarted, ARM will
attempt to restart it on any other system in the same MAS/Complex. Also, remember that the
ARM policy works off the element name of jobs and started tasks, which must be unique
within the sysplex, so even if you have a started task with the same name in more than one
system, you can still define them in the ARM policy as long as the application generates a
unique element name.
If your target environment is a GoldPlex, we assume once again that you are not sharing the
JES spool between the incoming system and the other systems in the target sysplex.
Therefore, the same considerations apply for a GoldPlex as for a BronzePlex.
Finally, if your target environment is a PlatinumPlex, which implies full workload balancing, you
can potentially restart any piece of work on any system in the sysplex. In this case, you need
to review your existing ARM definitions to decide if the incoming system should be added to
the TARGET_SYSTEM list for the existing ARM elements. In addition, you should define the
work in the incoming system that you want ARM to manage, and decide which of the other
systems in the sysplex are suitable candidates to restart that work following a failure.
Table 2-5 summarizes the ARM considerations for the merge; each applies to a BronzePlex, a
GoldPlex, and a PlatinumPlex.
The Type specified in Table 2-5 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 2-5 are described below:
1. Bearing in mind that ARM will only restart work on systems in the same JES MAS, for
each element (batch job or started task) of ARM from the incoming system, you need to:
Determine which batch jobs and started tasks will be using ARM for recovery
purposes. For the IBM products that use ARM, read their documentation for any policy
considerations.
Check for interdependencies between work; that is, elements that need to run on the
same system (RESTART_GROUP).
Note that any elements that are not explicitly assigned to a restart group become part
of the restart group named DEFAULT. Thus, if these elements are restarted, they are
restarted on the same system.
Determine if there is an order in which MVS should restart these elements if any
elements in the restart group are dependent upon other elements being restarted and
ready first (RESTART_ORDER).
Determine if the elements in a restart group need to be restarted at specific intervals
(RESTART_PACING).
Determine if the element should be restarted when only the element fails, or when
either the element or the system fails (TERMTYPE).
Determine whether specific JCL or command text is required to restart an element
(RESTART_METHOD).
Determine if a minimum amount of CSA/ECSA is needed on the system where the
elements in a restart group are to be restarted (FREE_CSA).
Determine if you want the elements in the restart group to be restarted on a specific
system (TARGET_SYSTEM). Consider requirements like:
The workloads of the systems in the sysplex, and how they might be affected if
these jobs were restarted.
DASD requirements.
Which systems have the class of initiator required by batch jobs that might need to
be restarted on another system.
Determine if an element should be restarted and, if so, how many times it should be
restarted within a given interval (RESTART_ATTEMPTS). If an element should not be
restarted, set RESTART_ATTEMPTS to 0.
Determine how long MVS should wait for an element to re-register once it has been
restarted (RESTART_TIMEOUT).
Determine how long MVS should wait for an element to indicate it is ready to work after
it has been restarted (READY_TIMEOUT).
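Pulling these parameters together, the following is a sketch of how an element from the
incoming system might be defined; the policy, restart group, element, and system names are
hypothetical, and the values must be derived from the analysis described above:
   //ARMPOL   EXEC PGM=IXCMIAPU
   //SYSPRINT DD SYSOUT=*
   //* Define a restart group for a CICS region from the incoming
   //* system; it can be restarted on SYSC or SYSA
   //SYSIN    DD *
     DATA TYPE(ARM) REPORT(YES)
     DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
       RESTART_GROUP(CICSC)
         TARGET_SYSTEM(SYSC,SYSA)
         FREE_CSA(1,10)
         ELEMENT(SYSCICS_CICSAOR1)
           TERMTYPE(ALLTERM)
           RESTART_ATTEMPTS(3)
           RESTART_TIMEOUT(300)
           READY_TIMEOUT(600)
   /*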
2. Most automation products provide the ability to specify whether a given piece of work will
be managed by ARM or by the automation product. If you will be adding products to ARM
as a part of the merge process, you must update any affected automation products.
The SYS1.SAMPLIB library contains a number of sample jobs to allocate the various
CDSs and create the policy for those CDSs. The member names all start with IXC, and the
ones that create the policies end with a P.
The following documentation may be helpful as you merge the entities discussed in this
chapter:
OS/390 MVS Parallel Sysplex Capacity Planning, SG24-4680
OS/390 Parallel Sysplex Configuration Volume 2: Cookbook, SG24-5638
WSC Flash 10011, entitled XCF Performance Considerations
WSC Flash 98029, entitled Parallel Sysplex Configuration Planning for Availability
z/OS MVS Setting Up a Sysplex, SA22-7625
Chapter 3.
System Logger considerations
Another point about Logger that you must understand is how the offload process works for log
streams in the CF. In the LOGR policy, when you define each log stream, you specify which
CF structure that log stream will reside in. You can have many log streams per structure.
When a log stream reaches the high threshold specified in the LOGR policy, a notification is
sent to all the MVS images connected to that log stream. The connected images will then
decide between themselves which one will do the offload; you cannot control which system
does the offload.
Furthermore, if a system fails, and that system was the last one connected to a given log
stream, any system that is connected to the structure that the log stream resided in becomes
a candidate to offload that log stream to DASD; this process is called Peer Recovery and is
intended to ensure that you are not left with just a single copy of the log stream data in a
volatile medium. For example, if system MVSA fails and it was the only system connected to a
particular log stream, any of the systems that are connected to the structure that log stream
resided in may be the one that will do the offload. This design is important as it affects how
you will design your Logger environment, depending on whether your target environment is to
be a BronzePlex, a GoldPlex, or a PlatinumPlex.
Note: The concern about which system does the offload for a log stream only relates to log
streams in CF structures. Log streams on DASD (DASDONLY) are only offloaded by the
system they are connected to, so the situation where the offload data sets are not
accessible from one of the connectors does not arise.
The final thing we want to discuss is the correlation between the LOGR CDS and the staging
and offload data sets. The LOGR CDS contains information about all the staging and offload
data sets for each log stream. This information is maintained by Logger itself as it manages
(allocates, extends, and deletes) the data sets, and there is no way to alter this information
manually. So, for example, if you delete an offload data set manually, Logger will not be aware
of this until it tries to use that data set and encounters an error. Equally, if you have an existing
offload data set that is not known to the LOGR CDS (for example, if you have a brand new
LOGR CDS), there is no way to import information about that data set into Logger.
So, consider what will happen when you merge the incoming system into the target sysplex. If
you did not delete the staging and offload data sets before the merge, and if you are using the
same user catalog, high level qualifier, and DASD volume pool after the merge, Logger will
attempt to allocate new staging and offload data sets. However, it will encounter errors,
because the data set name that it is trying to use is already defined in the user catalog and on
the DASD volumes. The best way to address this situation varies depending on the target
environment, and is discussed in the notes attached to Table 3-1 on page 41.
If your target environment is a BronzePlex, the incoming system will not be sharing user
DASD or user catalogs with the other systems in the target sysplex. Now consider a log
stream that is connected to by both the incoming system and a system in the target sysplex:
if the incoming system is the one that does the offload, it will allocate an offload data
set on one of its DASD and offload the data. Now, what happens
if one of the systems in the target sysplex needs to retrieve data from that log stream, and the
required data has been offloaded by the incoming system? The offload data set will not be
accessible to the target sysplex system and the request will fail. Therefore, in a BronzePlex,
you must ensure that none of the log streams are connected to by systems in both subplexes.
Furthermore, consider what happens during Peer Recovery. In this case, you can end up with
log stream data on an inaccessible device because the structure was shared between the
incoming system and one or more of the systems in the target sysplex. So, in a BronzePlex,
not only should you not share log streams between the incoming system and the target
sysplex systems, you should not share any Logger structures in this manner either.
Similarly, in a GoldPlex, you would not normally share user catalogs or user DASD volumes.
So, once again, you must ensure that the Logger structures are only connected to by systems
in one of the subplexes. However, what if you decide to share the user catalog containing the
Logger offload data sets and the DASD volumes containing those data sets? Does this
remove the requirement? It depends on how you manage the offload data sets. If you use
DFSMShsm to migrate the offload data sets, then you have to be sure that any system in the
LOGRplex can recall an offload data set that has been migrated; and this means that all the
systems in the LOGRplex are in the same HSMplex, which effectively means that you have a
PlatinumPlex (which we will discuss next). However, if you do not have a single HSMplex,
then you cannot use DFSMShsm to manage the Logger offload data sets.
Finally, we have the PlatinumPlex. In a PlatinumPlex, everything is shared: all the user
DASD, DFSMShsm, user catalogs, and so on. In this case, you still have the problem of
ensuring that none of the data that is currently in the log streams of the incoming system will
be required after the merge; however, you do not have any restrictions about which systems
can connect to any of the Logger structures or log streams.
Consideration                                                         Note  Type     Done
Which Logger structure will each of the log streams be placed in?    1     B, G, P
Do the staging and offload data set names match the requirements
of your target environment?                                           2     B, G
How to handle staging and offload data sets from the incoming
system?                                                               3     B, G, P
The Type specified in Table 3-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 3-1 are described below:
1. When you move to a single LOGR policy, you need to decide which Logger structure each
log stream will be placed in after the merge. The simplest thing, of course, would be to
leave all the log streams in the same structures they are in prior to the merge, and this is a
viable strategy. However, you must ensure that there are no duplicate log stream or Logger
structure names between the incoming system and the target sysplex. Any duplicate log
stream names will need to be addressed (this will be discussed in subsequent notes).
Duplicate Logger structure names will also need to be addressed. Changing the Logger
structure names consists of:
Adding the new structure names to the CFRM policy.
Updating the LOGR policy to define the new structure names (on the DEFINE
STRUCTURE statements) and updating the log stream definitions to assign a new
structure to the affected log streams.
Note: This consideration does not apply to DASDONLY log streams.
2. There is a general consideration that applies to all of the applications that use log streams
(either CF log streams or DASDONLY), and it is related to the offload data sets. Any
Logger subsystem that has a connection to a Logger structure can potentially offload a log
stream from that structure to an offload data set (allocating a new offload data set in the
process, if necessary). If the target environment is a PlatinumPlex, this does not represent
a problem, as the SMS configuration, DFSMShsm environment, user DASD, and all the
catalogs will be shared by all the systems in the sysplex. However, if the target
environment is a BronzePlex or a GoldPlex, some planning is required when defining the
log stream.
To avoid problems with potential duplicate data set names or duplicate Master Catalog
aliases, you should use different high level qualifiers for the offload data sets associated
with each subplex. The default HLQ for offload data sets is IXGLOGR; however, you can
override this on the log stream definition using the HLQ (or, in z/OS 1.3 and later, the
EHLQ) keyword. This keyword also controls the HLQ of the staging data sets (if any)
associated with each log stream. Using different HLQs for each subplex means that you
can use different catalogs for the Logger data sets, have different (non-clashing) RACF
profiles, and different SMS constructs in each subplex. Avoiding duplication now means
that should you decide to move to a PlatinumPlex in the future, that move will be a little bit
easier.
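As an illustration, a CICS DFHLOG log stream for a region on the incoming system, defined
with a subplex-specific HLQ, might look like the following sketch; the structure name, log
stream name, HLQ, and size values are hypothetical and must be chosen to match your naming
standards and workload:
   //LOGRPOL  EXEC PGM=IXCMIAPU
   //SYSPRINT DD SYSOUT=*
   //* Logger structure and log stream for the incoming system,
   //* using an HLQ that is unique to that subplex
   //SYSIN    DD *
     DATA TYPE(LOGR) REPORT(NO)
     DEFINE STRUCTURE NAME(CIC_DFHLOG_SYSC)
        LOGSNUM(10)
        MAXBUFSIZE(64000)
        AVGBUFSIZE(500)
     DEFINE LOGSTREAM NAME(CICSC.CICSAOR1.DFHLOG)
        STRUCTNAME(CIC_DFHLOG_SYSC)
        HLQ(LOGRSYSC)
        HIGHOFFLOAD(80)
        LOWOFFLOAD(20)
        LS_SIZE(1024)
        STG_DUPLEX(NO)
   /*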
3. Earlier in this chapter we discussed the relationship between the LOGR CDS, the user
catalog(s) associated with the staging and offload data sets, and the volumes containing
those data sets and the problems that can arise should these get out of sync with each
other. The options for addressing this situation depend on your target environment.
If your target environment is a BronzePlex or a GoldPlex, the incoming system will
probably be using the same user catalog and same DASD volumes after the merge as it
did prior to the merge. This means that the old staging and offload data sets will still exist
on the DASD and be cataloged in the user catalog. As soon as Logger on the incoming
system tries to allocate new versions of these data sets after the merge, you will encounter
an error. There are two ways to address this:
a. You can delete all the old offload and staging data sets on the incoming system after
you have stopped all the Logger applications and before you shut down that system
just prior to the merge. You do this by deleting the log stream definitions from the
Logger policy on that system. This will ensure that you do not encounter duplicate data
set name problems when you start up again after the merge. However, it means that
the log streams will need to be redefined if you have to fall back.
b. You can change the HLQ associated with the log streams from the incoming system
when defining those log streams in the target sysplex LOGR CDS. In this case, you do
not have to delete the log stream definitions from the incoming system (making fallback
easier, should that be necessary) as the new offload and staging data sets will have a
different high level qualifier than the existing data sets.
If your target environment is a PlatinumPlex, the incoming system will probably switch to
using the user catalog that is currently used for the offload and staging data sets on the
target sysplex systems. This means that you would not have the problem of duplicate user
catalog entries; however, you would need to decide how to handle the existing staging and
offload data sets. Once again, you have two options:
a. You can delete all the old offload and staging data sets on the incoming system after
you have stopped all the Logger applications and before you shut down that system
just prior to the merge. You do this by deleting the log stream definitions from the
Logger policy on that system. You would do this if the DASD volumes containing the
offload and staging data sets will be accessible in the new environment.
b. You can leave the old offload and staging data sets allocated, but make the volume
containing those data sets inaccessible to the new environment. This means that those
data sets are still available should you need to fall back. However, it may be complex to
arrange if the offload and staging data sets are on the same volumes as other data
sets that you wish to carry forward.
4. The operations log (OPERLOG) is a log stream with a fixed name (SYSPLEX.OPERLOG)
that uses the System Logger to record and merge communications (messages) about
programs and system functions from each system in a sysplex. Only the systems in a
sysplex that have specified and activated the operations log will have their records sent to
OPERLOG. For example, if a sysplex has three systems, SYSA, SYSB, and SYSC, but
only SYSA and SYSB activate the operations log, then only SYSA and SYSB will have
their information recorded in the OPERLOG log stream. Because the OPERLOG log
stream has a fixed name, there can only be one OPERLOG within a sysplex; that is, you
cannot have SYSA and SYSB share one OPERLOG, and have SYSC with its own,
different, OPERLOG.
The operations log is operationally independent of the system log. An installation can
choose to run with either or both of the logs. If you choose to use the operations log as a
replacement for SYSLOG, you can prevent the future use of SYSLOG: once the
operations log is active, enter the WRITELOG CLOSE command to stop writing to
SYSLOG.
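For reference, the commands involved are shown below; both have system scope, so they must
be entered on each system whose hardcopy medium you want to change:
   V OPERLOG,HARDCPY        Activate the OPERLOG log stream on this system
   WRITELOG CLOSE           Stop SYSLOG once OPERLOG is active
   V SYSLOG,HARDCPY         Reactivate SYSLOG if you need to fall back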
Although the operations log is sysplex in scope, the commands that control its status and
the initialization parameter that activates it have a system scope, meaning that a failure in
operations log processing on one system will not have any direct effect on the other
systems in the sysplex. You can set up the operations log to receive records from an entire
sysplex or from only a subset of the systems, depending on the needs of the installation.
Depending on the type of the target environment, some configuration considerations
apply: in a BronzePlex or a GoldPlex where there is a need for separation of data between
the two subplexes, OPERLOG should only be active for the images that belong to one
subplex, with the other images continuing to operate through SYSLOG.
On the other hand, in a BronzePlex/GoldPlex where there is an operational need to have
all the messages accumulated into a single OPERLOG, additional planning is required to
determine how to handle the offload of this log stream; since offload can take place on any
image connected to that structure, the catalog that the offload data sets are cataloged in
must be shared, the pool of volumes that the offload data sets will reside on must be
shared, the SMS constructs in each subplex should be identical in their handling of Logger
offload and staging data sets, and it is not possible to use DFSMShsm to migrate them. In
a PlatinumPlex, OPERLOG should collect data from all the images, and there should be no
additional considerations for the offload data sets, since the entire storage environment
is shared.
Because the OPERLOG data, like SYSLOG records, is not used for backup, recovery, or
repair by any applications or programs, the retention or transportation of OPERLOG data
is not as critical as it would be for an application that depends on its log stream data
for recovery, such as CICS.
The OPERLOG messages in the log stream are initially in message data block (MDB) format.
Typically, the IEAMDBLG sample program (or a modified version thereof) is run to move the
OPERLOG messages out of the log stream into flat files, where they can be archived in
SYSLOG format. Because of this capability, the data in the OPERLOG log stream of the
incoming system does not need to be lost as part of the merge; however, that data will
then need to be accessed from outside Logger.
5. Before discussing the LOGREC log stream, we wish to point out that IBM recommends
that you IPL with a LOGREC data set initialized by IFCDIP00. If you do not IPL with a
LOGREC data set, you cannot subsequently change the LOGREC recording medium
from LOGSTREAM to DATASET using the SETLOGRC command. When planning for the
merge, ensure that you have taken this action so that in the event that you experience any
problems re-establishing the LOGREC log stream, you will at least have a medium to
which you can fall back.
The LOGREC log stream provides an alternative, shared, medium for collecting LOGREC
data. Like OPERLOG, the use of the LOGREC log stream can be controlled on a
system-by-system basis. So, none, some, or all of the systems in the sysplex can be set
up to use the LOGREC log stream. Similarly, like OPERLOG, the LOGREC log stream
has a fixed name, SYSPLEX.LOGREC.ALLRECS, so you can only have one LOGREC log
stream per sysplex.
When you merge the sysplexes, there is no need to carry forward the LOGREC data.
There are two reasons for this:
a. The first is that the old LOGREC data may be archived using the IFCEREP1 program
to move the log stream data to a flat file. Once this is done, you have (and can retain)
this data from the old sysplex for as long as necessary.
b. The other reason is that the target sysplex may have no knowledge of the previous
systems and cannot be configured to be able to read the old sysplex's log stream
LOGREC records.
Because the LOGREC log stream is like the OPERLOG log stream in terms of having a
fixed name, the same considerations apply for BronzePlex, GoldPlex, and PlatinumPlex
environments as we described above for the OPERLOG log stream.
6. The IMS Common Queue Server (CQS) uses the System Logger to record information
necessary for CQS to recover its structures and restart following a failure. CQS writes log
records for the CF list structures to log streams (the Message Queue structure and its
optional associated overflow structure to one log stream, and the optional EMH Queue
and EMH Overflow structures to a second log stream). The log streams are shared by all
the CQS address spaces that share the structures. The System Logger therefore provides
a merged log for all CQS address spaces that are sharing queues.
Merging the System Logger environments will mean that any information in the CQS log
streams from the incoming system will no longer be accessible after the merge. This
means that all the IMSs on the incoming system will need to be shut down in such a
manner that, when they are started up again (after the merge), they will not need to get
any information from the log stream. You should work with your IMS system programmers
to ensure that whatever process they normally use to shut down IMS prior to an IMS and
CQS cold start is completed successfully. Moving to a new Logger environment is
effectively no different, from a shared queue perspective, than a normal cold start of CQS.
As long as you have removed the need for IMS to require log stream data to start up, the
only consideration to be taken into account is to move all the CQS log stream definitions
into the LOGR policy of the target sysplex.
Note: We are assuming that as part of the merge you are not merging the IMS from the
incoming system into the shared queue group from the target sysplex. If your intent is to
place all the IMSs in a single shared queues group, such a change would normally not
take place at the same time as the merge of the sysplexes.
7. CICS uses System Logger services to provide logging in the following areas: backward
recovery (backout), forward recovery, and user journals.
Backward recovery, or backout, is a way of undoing changes made to resources such as
files or databases. Backout is one of the fundamental recovery mechanisms of CICS. It
relies on recovery information recorded while CICS and its transactions are running
normally. Before a change is made to a resource, the recovery information for backout, in
the form of a before-image, is recorded in the CICS system log (also known as DFHLOG).
A before-image is a record of what the resource was like before the change. These
before-images are used by CICS to perform backout in two situations:
In the event of failure of an individual in-flight transaction, which CICS backs out
dynamically at the time of failure (dynamic transaction backout), or,
In the event of an emergency restart, where CICS backs out all those transactions that
were in-flight at the time of the CICS failure (emergency restart backout)
Each CICS region has its own system log written to a unique System Logger log stream.
Since the CICS system log is intended for use only for recovery purposes during dynamic
transaction backout or during emergency restart, there is no need to keep the logged data
if the CICS region has been shut down correctly. If CICS regions are moved from one
Logger environment to a new one, there is no need to bring the CICS system log data as
long as you can ensure that CICS will not need to do any sort of recovery when it starts in
the new environment.
Note: What do we mean by shutting CICS down correctly? The objective is to ensure
that there were no in-flight transactions when CICS stops. Normally these in-flight
transactions would be handled when CICS restarts because it is able to obtain
information about these in-flight transactions from the system log log stream. However,
we are going to be discarding the system log log stream, so you have to ensure that
there are no in-flight transactions.
The way to do this is to use CEMT to list all active tasks and cancel any that have not
completed. When you cancel the task, CICS will back out the associated transaction,
ensuring that no transactions are only half done. Once all active tasks have been
addressed, CICS will then shut down cleanly and you can do an INITIAL start when
CICS is started after the merge.
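As a sketch, the sequence on each CICS region might look like the following (the task
number shown is hypothetical):
   CEMT INQUIRE TASK              List the currently active tasks
   CEMT SET TASK(0000123) PURGE   Back out and end an in-flight task
   CEMT PERFORM SHUTDOWN          Shut the region down normally once no user tasks remain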
Forward recovery logs are used when some types of data set failure cannot be corrected
by backward recovery; for example, failures that cause physical damage to a database or
data set. To enable recovery in these situations you must take the following actions:
a. Ensure that a backup copy of the data set has been taken at regular intervals.
b. Set up the CICS regions so that they record an after-image of every change to the data
set in the forward recovery log (a log stream managed by the System Logger).
c. After the failure, restore the most recent backup copy of the failed data set, and use the
information recorded in the forward recovery log to update the data set with all the
changes that have occurred since the backup copy was taken.
The forward recovery log data usually does not reside in the log stream for a long time.
Normally an installation will use a forward recovery product, such as CICSVR, to manage
the log stream. These products extract the data from the log stream, synchronized with the
data set's backup copy, and store the forward recovery data in an archive file. After they
extract the required records from the log stream, they tell Logger to delete those records.
Should a forward recovery be required, the product will use the information from the
archive file or the forward recovery log stream, depending on where the required records
reside at that time.
If you back up all the CICS files that support forward recovery immediately after you shut
down CICS in preparation for the merge, all the required records will be moved from the
forward recovery log stream to the archive files, meaning that there is no required data left
in the log stream. If, for whatever reason, you need to restore one of the CICS data sets
before you restart CICS, all the required information will be available through the backups
and archive files.
User journals are not usually kept for a long time in the log stream, and the data in the log
stream is not required again by CICS. They are usually offloaded using the DFHJUP utility
so that they can be accessed by user applications and vendor products, and preserved by
being archived. For this reason, as long as you extract whatever data you need from the
log stream before shutting the incoming system down for the merge, the user journal log
streams will not contain any data that is required after the merge.
Assuming that you will not be merging CICS Logger structures as part of the merge, the
considerations for the CICS log streams are the same regardless of whether the target
environment is a BronzePlex, a GoldPlex, or a PlatinumPlex.
After the merge, all the CICS regions on the incoming system should be started using an
INITIAL restart.
8. Depending on the target environment, you can have different RRS configurations. In a
BronzePlex or a GoldPlex, you will wish to keep the RRS environment for the incoming
system separate from that of the target sysplex. On the other hand, in a PlatinumPlex, you
should have a single RRS environment across the entire sysplex. However, even if you will
keep two separate RRS environments, there is still no way to carry the data in the
incoming systems log streams over into the target environment. Therefore, the following
considerations apply regardless of the target environment type.
RRS uses five log streams, each of which is shared by all the systems in the RRSplex. An
RRSplex is the set of systems that are connected to the same RRS log streams. The RRS
term for an RRSplex is logging group. So, for example, you can have production systems
sharing one set of RRS log streams and test systems sharing a different set of RRS log
streams in the same sysplex. A logging group is a group of systems that share an RRS
workload. The default log group name is the sysplex name; however, this can be
overridden via the GNAME parameter in the RRS startup JCL. The GNAME is used as the
second qualifier in the log stream name.
Note: Because there is only one RRS subsystem per system, each system can only be in
a single RRSplex.
The five log streams are:
ATR.gname.ARCHIVE. This log stream contains information about completed Units of
Recovery (URs). This log stream is optional and should only be allocated if you have a
tool to help you analyze the information in the log stream.
ATR.gname.RM.DATA. This log stream contains information about Resource Managers
that are using RRS services.
ATR.gname.MAIN.UR. This log stream contains information about active URs. RRS
periodically moves information about delayed URs into the DELAYED.UR log stream.
ATR.gname.DELAYED.UR. This log stream contains the information about active but
delayed URs that have been moved from the MAIN.UR log stream.
ATR.gname.RESTART. This log stream contains information about incomplete URs
that is required during restart.
When all the Resource Managers shut down in a clean way (that is, with no in-flight URs),
there is no information that needs to be kept in the RRS log streams. Therefore, in order
to be able to restart RRS after the merge with a new set of empty log streams, you need to
ensure that all the Resource Managers have shut down cleanly.
Note: IMS is one of the potential users of RRS. However, if RRS services are not
required by IMS (that is, you are not using ODBA or Protected Conversations), you can
use the RRS=N parameter (added to IMS V7 by APAR PQ62874) to stop IMS from
connecting to RRS.
If you do require IMS use of RRS, the /DIS UOR IMS command displays any in-doubt
UORs involving RRS.
If your target environment is a BronzePlex or a GoldPlex, you should modify the RRS
startup JCL on the incoming system to specify a GNAME other than the sysplex name. If
you can set GNAME to be the same as the sysplex name of the incoming system, you can
copy the Logger policy definitions for the RRS log streams exactly as they exist in the
incoming system.
If your target environment is a PlatinumPlex, all of the systems must be in the same
RRSplex. In this case you may need to review the log stream sizes as currently defined in
the target sysplex to ensure they are large enough for the additional load that may be
generated by the incoming system.
9. WebSphere for z/OS uses a Logger log stream to save error information when
WebSphere for z/OS detects an unexpected condition or failure within its own code, such
as:
Assertion failures
Unrecoverable error conditions
Vital resource failures, such as memory
Operating system exceptions
Programming defects in WebSphere for z/OS code
The name of the log stream is defined by the installation, and you can either have a
separate log stream per WebSphere Application Server (WAS) server, or you can share
the log stream between multiple servers. If you have a separate log stream for each
server, the log stream can be a CF log stream or it can be a DASDONLY one.
Regardless of the log stream location, or the target environment type, because the data in
the log stream is not required for anything other than problem determination, it is not
necessary to preserve the data after a clean shutdown of the server. You can use the
BBORBLOG REXX exec, which allows you to browse the error log stream to ensure that
you have extracted all the information you require about the errors before the log stream
information is discarded.
At the end of this step you should have decided what you are going to call all the log streams
and Logger structures, which structure each log stream will reside in, and any changes that
may be required to existing structure and log stream definitions.
As long as all the Resource Managers have been shut down cleanly, you will not require any
of the information from the RRS log streams. To find out if any Resource Managers are still
registered with RRS, or if there are any active URs, you can use the RRS panels; there is no
way to display this information from the console. If there are any Resource Managers still
registered or any in-flight URs, you need to resolve that situation before the shut down. Once
you are happy that there are no in-flight URs, you do not have to do anything specific with
RRS prior to shutting down the system.
Step 5 - Cleanup
The final step (unless you decided to do this prior to the shutdown of the incoming system) is
to delete the old staging and offload data sets that were being used by the incoming system
prior to the merge.
If you are using different High Level Qualifiers and are in a BronzePlex or a GoldPlex
environment, the data sets should be available through the normal catalog search order and
can be deleted using IDCAMS DELETE CLUSTER commands. If the data sets cannot be
accessed through the normal catalog search order and are on SMS-managed volumes, you
must use an IDCAMS DELETE VVR command to delete them (remember that you cannot use
STEPCAT or JOBCAT with SMS-managed data sets).
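A sketch of such a cleanup job follows; the data set names, high level qualifier, and volume
are hypothetical, and the VVR form is only needed for data sets that cannot be located
through the catalog search order:
   //DELLOGR  EXEC PGM=IDCAMS
   //DD1      DD DISP=SHR,UNIT=3390,VOL=SER=LOGV01
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     /* Delete an old offload data set through the catalog       */
     DELETE LOGRSYSC.CICSC.CICSAOR1.DFHLOG.A0000001 CLUSTER
     /* Delete an uncataloged, SMS-managed offload data set by   */
     /* removing its VSAM volume record from the volume in DD1   */
     DELETE LOGRSYSC.CICSC.CICSAOR1.DFHLOG.A0000002 FILE(DD1) VVR
   /*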
Chapter 4.
WLM considerations
This chapter discusses the following aspects of moving a system that is in WLM Goal mode
into a Parallel Sysplex where all of the existing systems are also in WLM Goal mode:
Is it necessary to merge WLM environments, or is it possible to have more than one WLM
environment in a single Parallel Sysplex?
A checklist of things to consider should you decide to merge the WLMs.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Use a small number of velocity goals, such as slow,
medium, and fast. These might be 20%, 40%, and 60%, respectively. The higher velocity
goals are more sensitive to processor differences, so when selecting a fast velocity goal,
select one that is attainable on the smaller processors. For more information on velocity,
see the paper entitled Velocity Goals: What You Don't Know Can Hurt You on the WLM
home page at:
http://www.ibm.com/servers/eserver/zseries/zos/wlm/
Work in SYSSTC is not managed by WLM, so review the classification rules for tasks
assigned to SYSSTC and make sure the work you classify there is necessary and
appropriate for this service class. Make use of the SPM SYSTEM and SPM SYSSTC
rules.
Since the classification rules are an ordered list, review the order of work in the
classification rules and place the more active and repetitive work higher in the list. Use
Transaction Name Groups and Transaction Class Groups where possible.
Use the new OS/390 2.10 enhanced classification qualifiers if it is necessary to treat
similarly named work differently on different systems.
Review the IEAOPT parms of all images to ensure consistency.
Optimizing the number of logical CPUs benefits workloads that have large amounts of
work done under single tasks, and minimizes LPAR overhead for all workloads. The WLM
Vary CPU Management feature of IRD can help automate this process.
Before and after making any WLM changes, extract RMF reports for both systems
(incoming system and target sysplex). Verify the goals and redefine them if necessary. It
is very important that the goals continue to deliver the same availability and performance.
Consideration                                                         Note  Type     Done
Ensure the incoming system supports the Function level of the
target sysplex WLM CDS.                                               1     B, G, P
Check that the WLM CDSs are large enough to contain the merged
definitions.                                                          2     B, G, P
Check usage of response time rather than velocity goals for CICS
and IMS.                                                              14    B, G, P
The Type specified in Table 4-1 on page 54 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 4-1 on page 54 are described below:
1. There are two attributes of the target sysplex WLM CDS that you must consider: the
Format Level, and the Function Level. Both can be determined by issuing a D WLM
command. The format level controls which WLM functions can be used, and is determined
by the level of OS/390 or z/OS that was used to allocate the CDS. At the time of writing,
the highest Format Level is 3, and this is the Format Level that will be created by OS/390
2.4 and all subsequent releases. Considering how long this Format Level has been
around, it is unlikely that anyone is still using a CDS with Format Level 1 or 2.
The other thing to consider is the Function Level. The Function Level indicates which
functions are actually being used in the CDS. For example, if you are using any of the
WLM enhancements delivered in OS/390 2.10, the Function Level of the CDS would be
011. If you then start using the IRD capability to manage non-z/OS LPARs (a new function
delivered in z/OS 1.2) the Function Level of the CDS would change to 013 as soon as the
new policy is installed (Level 012 was reserved).
The coexistence and toleration PTFs required in relation to the WLM CDS are
documented in the chapter entitled Workload Management Migration in z/OS MVS
Planning: Workload Management, SA22-7602. We recommend using the lowest
functionality system to do maintenance, and to not enable any functionality not supported
on the lowest-level image in the sysplex. This ensures that every system in the sysplex is
able to activate a new policy.
2. You may need to resize the WLM CDSs for storing the service definition information. One
of the inputs to the program that allocates the WLM CDS is the maximum number of
systems in the sysplex. If the addition of the incoming system causes the previous
MAXSYSTEM value to be exceeded, then you must allocate a new CDS (and don't forget
to allocate an alternate and spare at the same time) that reflects the new maximum
number of systems in the sysplex. Note that if you used the WLM dialog to allocate the
WLM CDSs, it will automatically set MAXSYSTEM to be 32, the highest value possible.
If you are running a sysplex with mixed release levels, you should format the WLM CDS
from the highest level system. This allows you to use the latest level of the WLM
application. You can continue to use the downlevel WLM application on downlevel systems
provided that you do not attempt to exploit new function.
To allocate a new WLM CDS you can either use the facility provided in the WLM
application, or you can run an XCF utility (IXCL1DSU). In each case, you need to estimate
how many workload management objects you are storing on the WLM CDS. You must
provide an approximate number of the following:
a. Policies in your service definition
b. Workloads in your service definition
c. Service classes in your service definition
We recommend using the WLM dialog to allocate the new data sets because the dialog
will extract the values used to define the current data sets. If you don't use the dialog, then
you should save the JCL that you use to allocate the data sets for the next time you need
to make a change.
The values you define are converted to space requirements for the WLM CDSs being
allocated. The total space is not strictly partitioned according to these values. For most of
these values, you can consider the total space to be one large pool. If you specify 50
service classes, for instance, you are simply requesting that the space required to
accommodate 50 service classes be added to the CDSyou have not limited yourself to
50 service classes in the service definition. Note, however, that if you do define more than
50 service classes, you will use up space that was allocated for something else.
Example 4-1 Allocate Couple Data Set for WLM using CDS values
  File  Utilities  Notes  Options  Help
 ------------------------------------------------------------------------------
            Allocate couple data set for WLM using CDS values
 Command ===> ______________________________________________________________

 Sysplex name  . . ________  (Required)
 Data set name . . ______________________________________________  (Required)

 ------------------------------------------------------------------------------
 Size parameters (optional):            Storage parameters:

   Service policies . . 10   (1-99)       Storage class  . . . ________
   Workloads  . . . . . 35   (1-999)      Management class . . ________
   Service classes  . . 100  (1-999)
   Application                                     or
   environments . . . . 50   (1-999)
   Scheduling                             Volume . . . . . . . ______
   environments . . . . 50   (1-999)      Catalog data set?    _  (Y or N)
   SVDEF extensions . . 5    (0-8092)
   SVDCR extensions . . 5    (0-8092)
   SVAEA extensions . . 5    (0-8092)
   SVSEA extensions . . 5    (0-8092)

 Size parameters are initialized from service definition in the WLM couple
 data set. (IWMAM861)
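If you prefer to allocate the WLM CDSs with the XCF utility instead of the WLM dialog, a job
along the following lines could be used; this is a sketch only, and the sysplex name, data
set name, volume, and object counts (which mirror the values shown in Example 4-1) must be
adjusted for your installation:
   //FMTWLM   EXEC PGM=IXCL1DSU
   //SYSPRINT DD SYSOUT=*
   //* Format a primary WLM CDS; repeat with different data set names
   //* for the alternate and spare CDSs
   //SYSIN    DD *
     DEFINEDS SYSPLEX(PLEX1)
        DSN(SYS1.XCF.WLM01) VOLSER(CDS001)
        MAXSYSTEM(32)
        CATALOG
        DATA TYPE(WLM)
           ITEM NAME(POLICY) NUMBER(10)
           ITEM NAME(WORKLOAD) NUMBER(35)
           ITEM NAME(SRVCLASS) NUMBER(100)
           ITEM NAME(APPLENV) NUMBER(50)
           ITEM NAME(SCHENV) NUMBER(50)
           ITEM NAME(SVDEFEXT) NUMBER(5)
           ITEM NAME(SVDCREXT) NUMBER(5)
           ITEM NAME(SVAEAEXT) NUMBER(5)
           ITEM NAME(SVSEAEXT) NUMBER(5)
   /*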
For recovery purposes, you should define an alternate WLM CDS (similar to the sysplex
alternate CDS). You can define an alternate WLM CDS using the same method (either the
ISPF application or the XCF utility), but specifying a different data set name. You should
also define one spare WLM CDS. See the section entitled Increasing the Size of the WLM
CDS in z/OS MVS Planning: Workload Management, SA22-7602, for information about
how to allocate the new CDSs.
3. WLM manages a service class period as a single entity when allocating resources to meet
performance goals. You can define a maximum of 100 service classes in the WLM policy,
and therefore there is a limit of 100 service classes in the whole sysplex.
If your target is a PlatinumPlex, where all workloads are potentially distributed across all
the systems in the sysplex, you should aim for a minimum number of service classes. If
your target is a BronzePlex or a GoldPlex, where the incoming system will not share any
workloads with the existing systems, it may be more effective to have a separate set of
service classes for the non-system (that is, non-SYSTEM and non-SYSSTC service class)
workloads in the incoming system.
We recommend that you analyze the incoming system's workloads and service classes
and bring them into line with the target sysplex's service classes and classification rules
where appropriate. Simply creating a superset policy, which contains all the service
classes from both policies, could result in WLM being less responsive because of the large
number of service classes, especially if the workloads in the sysplex run across all the
systems.
To help you merge the WLM service classes, it is helpful to map out the service classes of
the different policies into a matrix, ordering the service classes by importance as shown in
Table 4-2 on page 58. This matrix will help you identify service classes with duplicate
names and service classes with similar goals, and gives you a good overview of how the
various workloads relate to one another with regard to their respective objectives.
Remember the recommendation that, for maximum efficiency, there should be no more
than 30 active service class periods on a system at a given time.
In the interests of readability, we have not included the service class SYSTEM in this
matrix; however, when doing your own table, you should include this service class. We do
show the SYSSTC service class to position it in relation to the other service classes.
Remember that SYSSTC has a dispatching priority of 254, which places it ahead of all
importance 1 work.
In the matrix, we also do not include additional attributes such as CPU critical or the
duration of each period; however, these must be addressed during your analysis. In our
example, the incoming system's service classes are represented in bold characters;
however, when doing your own matrix, it might be a good idea to use different colors to
differentiate the incoming system's service classes from those of the target sysplex.
When comparing the service class definitions in the two environments, you need to be
sure to consider the following attributes of each service class:
Its name. Does the same service class name exist in both environments?
The number of periods in the service class, and the duration of each period. You may
have the same service class name in both policies; however, if the period durations are
very different, the performance delivered could differ significantly.
The Importance for each period.
The target for each period.
Whether the CPU critical indicator is set.
Whether working set protection is used for workloads assigned to this service class.
The work assigned to the service class. This is discussed further in item 4 on page 59.
Table 4-2 on page 58 shows such a matrix: the rows run from SYSSTC, through Importance 1
to Importance 5, down to discretionary work, and the cells contain the service class periods
from both policies with their goals (for example, ONLINES with a velocity of 60, CICSHIGH
period 1 with 90% in 0.5 seconds, and BATCH period 2 with a velocity of 10).
In Table 4-2 on page 58, for each of the policies to be merged, you can begin to see where
similar service class periods and workloads are defined and where you can begin the
merging process. The objective is to combine into one service class those workloads you
know have similar attributes and behavior. For some workloads, like TSO, this may be very
obvious, but for other workloads such as batch, the task of determining the workload
characteristics may require further analysis. Having RMF data available to run reports on
these service classes provides you with the data necessary to make a decision on what
type of service class would be best suited for the workload. You may discover that an
existing service class or service class period is adequate for the incoming system
workload. Or you may need to create a new one to support the new work. Customers
should choose a busy time frame (between 1 and 3 hours) as a measurement period for
analysis.
The RMF Spreadsheet Reporter can be a useful tool for this purpose. It can produce
overview reports, as can the postprocessor, which can be used to capture the most
meaningful metrics. The best report to gather initially from an RMF perspective is the
service class period report, which is created using the following control card with the RMF
postprocessor JCL: SYSRPTS(WLMGL(SCPER)). Information about this report format
and what the reported data values mean can be found in the z/OS Resource
Measurement Facility Report Analysis, SC33-7991. Another source of useful information
is Resource Measurement Facility Performance Management Guide, SC33-7992.
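For illustration, here is a minimal sketch of a postprocessor job that requests this report;
the job statement, input data set name, and the date and time range are placeholders:

//RMFPP    JOB (ACCT),'WLM MERGE RPTS',CLASS=A
//* Sketch: run the RMF postprocessor against dumped SMF data and
//* request the WLMGL service class period report
//POST     EXEC PGM=ERBRMFPP
//MFPINPUT DD  DISP=SHR,DSN=SMF.SYSA.DUMPED.DATA
//SYSIN    DD  *
DATE(07012002,07012002)
RTOD(0900,1200)
SYSRPTS(WLMGL(SCPER))
/*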
4. Do the classification rules clash? For example, on the incoming system, do you assign
CICP* to service class ONLINEP, while on the target sysplex the same work is assigned to
service class PRODONL? If so, you have two choices:
You can merge the ONLINEP and PRODONL service classes into a single service
class. However, you have to make sure that the goals for the two service classes are
compatible (similar importance and target velocities/response times).
You can use new support delivered in OS/390 2.10 to let you assign different service
classes, depending on which system the workload is running on.
We already discussed merging service classes, so here we will just discuss assigning
service classes based on which system (or group of systems) the work is running on.
First, you have to define the classification group using the system name (SY) or system
name group (SYG). After this, define the classification rule as shown in Example 4-2.
Example 4-2 Use of system name group in classification rules
  Subsystem-Type Xref  Notes  Options  Help
  --------------------------------------------------------------------------
                    Modify Rules for the Subsystem Type      Row 1 to 11 of 20
  Command ===> ____________________________________________ SCROLL ===> PAGE

  Subsystem Type . : STC        Fold qualifier names?   (Y or N)
  Description . . . Started Tasks classifications

  Action codes:  A=After   C=Copy         M=Move      I=Insert rule
                 B=Before  D=Delete row   R=Repeat    IS=Insert Sub-rule
                                                                  <=== More
           --------Qualifier--------            Storage   Manage Region
  Action   Type      Name      Start            Critical  Using Goals Of
   ____ 1  SYG       INCOMING  ___              NO        TRANSACTION
   ____ 2    TN      CICSPROD  ___              YES       REGION
The use of system name or system name group is supported for address spaces whose
execution system is known at classification time (ASCH, TSO, OMVS, STC). It is not
supported for CICS or IMS, for example, so you cannot classify CICS or IMS transactions
by system group. In Example 4-2 on page 59, we are saying that all the started tasks in
the system group called INCOMING should be managed to transaction goals (where
applicable), except the CICSPROD region which is to be managed to region goals.
JES is not supported because the system on which JCL conversion occurs (which is
where classification occurs) may not be the system on which the job runs.
Subsystem-defined transactions (CICS or IMS) and enclaves (the remainder) are not
bound to an execution system at classification time either.
5. WLM provides a capability to group similar entities into Classification Groups. In the
classification rules, you can then use the classification group name to assign all those
entities to a given service class. This makes the classification rules much easier to
maintain and understand.
When preparing for the merge, you should review the classification groups that are defined
in each environment. Are there duplicate group names? If so, do they contain entities that
you will wish to assign to the same service class in the new environment? If they do, you
simply need to expand the classification group in the target sysplex to include the entities
being assigned to the same-named group in the incoming system.
On the other hand, if there are groups with identical names, but you do not wish to assign
the related entities to the same service class, you will need to set up a classification group
with a new name in both the incoming system and target sysplex, and move all the entities
into this new group. Remember to update the classification rules to pick up the new group
name.
If there are classification groups in the incoming system that are not in the target sysplex,
you may be able to just add those groups to the target sysplex exactly as they are defined
in the incoming system. Once they are defined, you should then use them in the
classification rules to assign them to the appropriate service classes.
Finally, you need to check the contents of the classification groups on both environments.
Are there any entities that are assigned to one group in one environment, but assigned to
a different group in the other environment? If so, you must resolve this conflict in advance
of the merge.
6. A workload, in WLM terms, is a named collection of work to be reported as a unit.
Specifically, a workload is a collection of service classes. In RMF, you can request a report
for all workloads or a named workload or workloads.
When merging WLMs, you need to check to see what workload names you have defined
in each environment, and then check to see which service classes are associated with
each workload. There is a good likelihood that the workload names on the different WLMs
will be different. In this case, you can define the workload names that are used in the
target sysplex in the incoming system prior to the cutover. When you have defined the new
workload names, you then should go through and re-assign the service classes to the
appropriate workload. This will give you an opportunity to detect and resolve any problems
that might arise as a result of the change. As workloads are purely a reporting entity, they
have no effect on performance, and will probably only be used in performance or service
level reporting jobs.
7. You need to check for duplicate report class names in both environments. Given that
report classes normally have names that identify the business entity being reported on
(like BILLCICS or PAYRIMS), you may not have many clashes. Regardless, you must
check for duplicates and address any that you find. Any that only exist in the incoming
system must be added to the target sysplex, and you have to check the classification rules
in the target sysplex to make sure that the same set of work is assigned to the report class
in the target sysplex as is being assigned currently in the incoming system.
8. If the work in both the incoming system and the target sysplex will be using the same
service class name, but you want to be able to continue to report separately on the
workloads from each environment, update the classification rules to assign the same
service class to each workload but with a different report class. This allows you to continue
to report on the performance and goal achievement of each workload, but allows WLM to
function more efficiently. See the following example:
SYSTEM             RULE       SERVICE CLASS   REPORT CLASS
incoming system    TN IBMU*   BATCHHI         RINBATCHH
target sysplex     TN JC01*   BATCHHI         RTGBATCHH
Remember that if you created a new report class in your sysplex, you may need to change
your reporting/chargeback applications to get the correct values for the chargeback
process.
Report classes were enhanced in z/OS 1.2 so that they now provide
information at the service class period level, which was not supported previously. In
addition, reporting classes can now provide response time distribution information,
whereas previously, only average response times and counts were provided. Note that to
get meaningful performance information reported in a report class, all the work in the
report class should be of the same type, and ideally in the same service class.
9. If you currently use WLM resource groups in either environment, you must review their
impact when you add another system to the sysplex. Remember that the number of
service units specified in the resource group is the number that can be used in total across
the whole sysplex. If you have a service class in the target sysplex that is assigned to a
resource group, and some work in the incoming system will be assigned to the same
service class, you now have more work trying to get a share of the same limited amount of
resource. This will impact not only the work on the incoming system, but also any existing
work in the target sysplex, as that work will now be granted less resource.
If you decide that you want to assign more work to a service class that is associated with a
resource group, check if the resource group definitions are adequate to handle the total
amount of new work. If you are going to increase the limit for the resource group, you may
want to hold that change until the actual time of the cutover; otherwise, work in the
associated service classes will be given more resource, and hence better service, but only
until the day of the cutover.
Also, remember that resource groups can also be used to guarantee a minimum amount
of service that the associated work is to receive. Once again, this value applies across the
whole sysplex. So, if there will be more systems with work that is covered by that resource
group, the guaranteed minimum will have less impact because that has to be shared
around more work.
For more information about resource groups, refer to z/OS MVS Planning: Workload
Management, GA22-7602.
10.This consideration only applies if you use OPC's WLM Support capability to assign a
different service class to work that is behind its target. If you use this capability, and
change service class names as part of the merge, remember to update OPC with the
revised service class names.
11.A scheduling environment is a list of resource names, along with their required states, that
allows scheduling of work in an asymmetric sysplex. If a system image satisfies all the
scheduling environment requirements associated with a unit of work, then that work can
be started on that image. If any of the resource requirements are not satisfied, the unit of
work cannot be assigned to that z/OS image. Scheduling environments and resource
names reside in the service definition and can be used across the entire sysplex. Each
element in a scheduling environment consists of the name of a resource, and a required
state, which is either ON or OFF. Each resource represents the potential availability of a
resource on an individual system. The resource can represent an actual physical entity
such as a database management system, or it can be an intangible quality such as a time
of day. You can have up to 999 unique scheduling environments in a service definition.
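As a simple illustration (the scheduling environment name CICSOFF and the resource name
CICPDOWN are only examples), a job asks for a scheduling environment on its JOB statement,
and operators or automation set the state of the underlying resource with the MODIFY WLM
command:

   //NIGHTJOB JOB (ACCT),'OFFLINE BATCH',CLASS=A,SCHENV=CICSOFF

   F WLM,RESOURCE=CICPDOWN,ON
   F WLM,RESOURCE=CICPDOWN,RESET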
If both the incoming system and the target sysplex use scheduling environments, you must
check for duplicate resource names. If there are duplicate names, you must check to see if
they represent the same resource. For example, if you have a resource name of
CICPDOWN in both environments, you may find that a batch job that normally runs on the
incoming system when CICS on that system is down might now start on a different system
when the CICS on that system goes down - in this case, duplicate resource names would
be a problem. On the other hand, if the resource name represents something like the time
of day, that applies equally to all systems, then a duplicate name should not cause a
problem.
If your target is a PlatinumPlex, and you are not using scheduling environments today, we
recommend that you consider the use of them to make sure that jobs in the merged
configuration run in the desired system(s). If your target is a BronzePlex or a GoldPlex, the
JES of the incoming system is unlikely to be merged with the JES of the target sysplex, so
you should not have any problems controlling where the jobs will run after the merge.
12.An application environment is a group of application functions requested by a client that
executes in server address spaces. WLM can dynamically manage the number of address
spaces to meet the performance goals of the work requests. Each application environment
should represent a named group of server functions that require access to the same
libraries. The following conditions are necessary for application environment usage:
The work manager subsystem must have implemented the WLM services that make
use of application environments; today, this includes:
SOMobjects (subsystem type SOM) for SOM client object class binding
requests
Internet Connection Server (subsystem type IWEB) for Hyper-Text Transfer Protocol
(HTTP) requests
13.If you are not using WLM-managed initiators in either environment today, we recommend
that you postpone implementing them until after the merge.
If you are using WLM-managed initiators, and your target is a BronzePlex or a GoldPlex
where the JES of the incoming system will not be merged with the JES of the target
sysplex, then it is possible to have WLM control the number of initiators for a given class in
one JES complex but not in the other (that is, one MAS can specify MODE=JES for a
class, and the other can specify MODE=WLM). Jobs submitted within each JES complex
execute only on the images that are members of that complex.
If your target is a PlatinumPlex, where the incoming JES will be merged into the same JES
complex as the existing systems, then the initiator control that is currently in use on the
target sysplex will also apply to the incoming system as soon as it is merged.
You can limit the number of jobs in each class that can execute simultaneously in the MAS
by using the XEQCOUNT=MAXIMUM= parameter on the JOBCLASS statement. This
applies to JES2- and WLM-managed job classes, and can be altered with the $T
JOBCLASS command. There is no facility to control the number of jobs in execution by
class on an individual system basis. Similarly, there is no way to stop WLM-managed
initiators on just one or a subset of systems. Of course, you could use SYSAFF= or
scheduling environments to limit where the job can run, but that requires an overt action to
implement.
As of OS/390 2.10 you may also have different WLM classification rules for each JES MAS
using the subsystem collection qualifier type. For example, using this qualifier, you can
assign class A jobs to a different service class in each MAS. If you need to restrict the
execution of a job to a subset of images in the same MAS, then you could use scheduling
environments or system affinity.
Note that WLM uses the Performance Index (PI) of a service class when deciding if it
should start more initiators for a given job class. Therefore, we recommend that a given
service class should not include jobs that run in both WLM-managed and JES-managed
initiators. If the jobs in the JES-managed classes are the ones that are causing a poor PI,
WLM may try to improve the PI by starting more initiators, which in this case may not have
the desired effect.
z/OS 1.4 was announced just before this book went to press. One of the significant
enhancements in that release is a change to WLM to make it far more responsive in
balancing the work across multiple systems in the sysplex when using WLM-managed
initiators. If you have significant amounts of batch work that runs on multiple members of
the sysplex, you should investigate whether it would be wise to wait until this release is
installed before starting extensive use of WLM-managed initiators.
14.Are either the incoming system or the target sysplex using response time rather than
velocity goals for CICS and IMS? Prior to OS/390 2.10, once you start using response
time goals for any CICS or IMS region, response time goals will apply to all CICS or IMS
regions in the sysplex. In OS/390 2.10 and subsequent releases, when you classify CICS
or IMS in the STC classification rules, you can specify whether the named region should
be managed using TRANSACTION (response time) or REGION (velocity) goals. An
example is shown in Figure 4-1 on page 64. In the example shown, the CICS region called
#@$C1T2A will be managed to region goals, but all the other CICS regions in system
#@$2 will be managed to transaction goals. Note that TRANSACTION is the default;
however, it only applies to started tasks that support that type of goal (CICS or IMS, for
example). Even though TRANSACTION is specified for the SYSTEM service class, all the
started tasks in that class will actually be managed to REGION goals because they don't
support TRANSACTION goals.
If you are using response time goals for CICS or IMS in either environment, you need to
decide how you want to handle things after the merge. If you do not wish to manage all
your regions in this manner, you should determine the regions that you wish to manage
with velocity goals and identify them as such in the classification rules. If you do wish to
manage all your regions in this way, you should review the classification rules for the CICS
and/or IMS Subsystem types and ensure that the goals specified are appropriate. In a
BronzePlex or a GoldPlex, you may decide to use different service classes for the regions
in the incoming system, assuming this does not cause you to exceed the maximum
number of service classes in the WLM policy.
  Action codes:  A=After   C=Copy         M=Move      I=Insert rule
                 B=Before  D=Delete row   R=Repeat    IS=Insert Sub-rule
                                                                  <=== More
           --------Qualifier--------            Storage   Manage Region
  Action   Type      Name       Start           Critical  Using Goals Of
   ____ 1  SPM       SYSTEM     ___             NO        TRANSACTION
   ____ 1  SPM       SYSSTC     ___             NO        TRANSACTION
   ____ 2    TN      D#$*       ___             NO        TRANSACTION
   ____ 2    TN      I#$*       ___             NO        TRANSACTION
   ____ 1  TN        #@$C1T2A   ___             NO        REGION
   ____ 1  SY        #@$2       ___             NO        TRANSACTION
   ____ 2    TN      #@$C*      ___             NO        TRANSACTION
   ____ 1  TNG       SYST_TNG   ___             NO        TRANSACTION
   ____ 1  TN        MQ%%MSTR   ___             NO        TRANSACTION
   ____ 1  TN        PSM%MSTR   ___             NO        TRANSACTION
   ____ 1  TN        CB*        ___             NO        TRANSACTION
Figure 4-1 Using REGION and TRANSACTION goals in the STC classification rules
effective than average response time goals when there are a small number of long-running
transactions in the service class. If you decide to use CICSPlex SM in Goal mode, you
must bear in mind that changing WLM from percentile response time goals to average
response time goals may have an adverse effect on WLM effectiveness.
If you decide that you want to implement CICSPlex SM Goal mode, the TORs must be
able to communicate with WLM to get the service classes associated with their
transactions. When CICSPlex SM is operated in Goal mode, the following events occur:
a. A transaction arrives at a CICS terminal-owning region (TOR).
b. The TOR passes the transaction's external properties, such as LU name, userID, and
so on, to the z/OS WLM.
c. z/OS WLM uses this information to assign a service class. The service class name is
passed back to the TOR.
d. The TOR calls DFHCRP for transaction routing. Amongst other information, the service
class name is passed in a comm_area.
e. DFHCRP in turn calls EYU9XLOP (CICSPlex SM).
f. If CICSPlex SM does not already have information about the goal for that service class,
it will request that information from WLM.
g. Having the goal of the transaction, CICSPlex SM selects the best AOR. The name of
this AOR is passed back to the TOR, which then routes the transaction to the selected
AOR. These AORs could be in the same image, or in another image in the Parallel
Sysplex, or even in a remote z/OS.
What are reasonable goals?
Do not expect any form of magic when you define response time goals for transactions
running in a CICSPlex SM environment. CICSPlex SM and the z/OS WLM cannot
compensate for poor design. If a transaction demonstrates poor response time because of
poor design or coding, or simply because of the amount of processing it needs to do, there
is no benefit in setting unrealistic goals with the expectation that workload management
will fix the problem!
When CICSPlex SM runs in queue mode, CICSPlex SM routes the transaction to the AOR
with the shortest queue independently of what is specified in the z/OS WLM. When
CICSPlex SM runs in goal mode, CICSPlex SM calls the z/OS WLM to get the goals
associated with a transaction. CICSPlex SM gets the response time goal, but not the
importance.
CICSPlex SM does not call the z/OS WLM for every transaction, but builds a look-aside
table. When a transaction is initiated, CICSPlex SM first checks to see if the goals for that
transaction are already in the table, and only if CICSPlex SM does not find it there does it
call the z/OS WLM. CICSPlex SM keeps track of how healthy various AORs are, and how
well those AORs have done at executing transactions, to assist in deciding to which of the
available AORs to route a transaction. CICSPlex SM then makes its recommendation to
the TOR on where the transaction should run.
We recommend that you merge CICS service classes, using different report classes to get
information on the performance of the service classes for each system. Be careful to
define reasonable goals.
Table 4-3 CICS transaction level rules
  CICS   Qualifier type   Name      Service class   Report class
  CICS   SI               CICSTA*   CICSDEF         RTCCDEF
         TN               CEMT      CICSHIGH        RTCCEMT
         TN               FP*       CICSLO          RTCCFPAG
  CICS   SI               CICSIN*   CICSDEF         RICCDEF
         TN               CEMT      CICSHIGH        RICCCEMT
         TN               FP*       CICSLO          RICCFPAG
In Table 4-3, for each rule defined, we use the same service class for the transactions
defined in both CICS regions (target sysplex and incoming system), but we associate a
different report class with each one for future analysis.
16.The Intelligent Resource Director.
Intelligent Resource Director (IRD) extends the concept of goal-oriented resource
management by allowing you to group z/OS system images that are resident on the same
CPC running in LPAR mode, and in the same Parallel Sysplex, into an LPAR cluster. This
gives WLM the ability to manage processor and channel subsystem resources, not just in
one single image but across all the images in the LPAR Cluster.
IRD is described in detail in the IBM Redbook z/OS Intelligent Resource Director,
SG24-5952. However, there are a number of salient points that we want to reiterate here:
A z/OS system automatically becomes part of an LPAR cluster when it is IPLed on a
zSeries processor. The LPAR cluster name is the same as the sysplex name. If more
than one system in the same sysplex is IPLed on a given CPC, all those systems that
are in the same sysplex will automatically be part of the same LPAR cluster. There is
nothing you have to do to make this happen, and equally, there is nothing you can do to
stop this happening.
The WLM LPAR CPU Management feature can be turned on and off on an LPAR by
LPAR basis using the HMC.
The Channel Subsystem I/O Priority queueing feature is turned on or off at the CPC
level. Once it is turned on, the I/Os from all LPARs will be prioritized in the channel
subsystem.
Dynamic Channel-path Management (DCM) is automatically enabled if there are
managed channels and managed control units in the configuration of the LPAR. You
cannot decide to use DCM in one LPAR in an LPAR cluster, but not in another LPAR in
the same cluster.
Bearing these facts in mind, you should consider the following:
If the incoming system is on the same CPC as some of the systems in the target
sysplex, you now have the option (it is not mandatory) to use WLM LPAR CPU
Management in that LPAR.
If you decide to use WLM LPAR CPU Management for a given LPAR, you must make
sure that your WLM goals would make sense if all the work in the LPAR Cluster were
running in one z/OS image; this is not the same as merging the WLM policies. You
can have a merged policy that works fine as long as Development and Production
never run in the same image; however, WLM LPAR CPU Management effectively
extends the idea of an image to cover all the images in the LPAR Cluster. You must
make sure that the relative Importance and goals of all the work in the LPAR Cluster
make sense in relation to each other.
If you are currently using DCM in the LPAR Cluster, remember that there will now be
another system that is able to use the managed paths. Therefore, you may want to
review the number of managed channels, bearing in mind the load and the workload
profile of the incoming system.
With DCM, managed channels can only be used by LPARs that are in the same LPAR
Cluster. If the control unit is shared with LPARs outside the LPAR Cluster, you need to
keep a number of non-managed channels that those LPARs can still use. By moving
the LPAR into the sysplex, you may reduce the number of non-managed channels that
you need for the control unit.
Alternately, if you have a control unit that is a candidate for DCM, but you have not
been using DCM because of the need to share the control unit with LPARs outside the
LPAR Cluster, moving the incoming system into the sysplex (and therefore into the
LPAR Cluster) may allow you to start using DCM with that control unit.
Before and after making the changes, extract RMF reports for both systems (incoming
system and target sysplex). Verify the goals and redefine them, if necessary. It is very
important to maintain the goals with the same availability and performance.
17.Parallel Access Volumes (PAV) is a capability delivered with the IBM ESS (2105) to let you
have more than one UCB for a given device. You can now have what is known as a base
UCB, which is always associated with the device, and alias UCBs, which can potentially
be moved between base devices. Having more than one UCB gives you the ability to have
more than one I/O going to the device at one time.
OS/390 2.7 introduced the ability for WLM to dynamically manage the movement of
aliases among devices, based on the IOS queueing for each device, and the importance
of the work on each device. This feature is known as dynamic PAV or dynamic alias
management.
WLM uses information about all the systems in the sysplex that are using a device when
deciding what to do about an alias; however it has no knowledge of how systems outside
the sysplex are using the device. For this reason, you should not use dynamic PAV with a
device that is shared outside the sysplex.
If merging the incoming system into the sysplex means that all sharing systems are now in
the same sysplex, you might be able to enable dynamic PAV for that device now.
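If you are unsure how a device is currently configured, the DEVSERV QPAVS command can be
used to display the PAV status of a device from each system before and after the change;
the device number shown is only an example:

   DEVSERV QPAVS,D222,VOLUME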
18.Before you start work on merging the systems, you should review RMF Workload Activity
reports for the incoming system and all the existing systems in the target sysplex. You
should especially review the system PI for each service class that is active on each
system. Just because there are workloads on three systems that are using the same WLM
service class does not mean that each system is achieving the goal. If you find a service
class that rarely achieves its goal on a given system, you should review whether the
workload assigned to that service class should perhaps be assigned to another, more
realistic one. This will help you ensure, prior to the merge, that the work in each service
class is appropriate to that service class, and that the goals are actually being achieved.
Following the merge, the RMF reports should be checked again to ensure that all the
goals are still being met.
19.OS/390 Release 10 addresses user requirements to set performance goals based on the
run-time location of the work. This release provides the ability to classify work based on
system name (or named groups of them).
Classification rules are the rules you define to categorize work into service classes, and
optionally report classes, based on work qualifiers. A work qualifier is what identifies a
work request to the system. The first qualifier is the subsystem type that receives the work
request.
There is one set of classification rules in the service definition for a sysplex. They are the
same regardless of what service policy is in effect; a policy cannot override classification
rules. You should define classification rules after you have defined service classes, and
ensure that every service class has a corresponding rule. Remember that to put the new
classification rules into effect, you need to install the service definition into the CDS and
then activate the policy.
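As a reminder of the mechanics (the policy name WLMPOL is just an example), after the
updated service definition has been installed from the WLM ISPF application, the policy can
be activated and verified from the console:

   V WLM,POLICY=WLMPOL
   D WLM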
The list of work qualifiers introduced in Release 10 and their abbreviations are:
SY    System name
SYG   System name group
PX    Sysplex name
SSC   Subsystem collection name
SE    Scheduling environment name
If you decide to use any of the new qualifiers, remember that the order of the classification
rules is critical. If you wish to have more specific assignments, such as those that pertain
to just a subset of systems, you must place those assignments first in the rule statements.
20.It is not uncommon for WLM constructs (service class, workload, report class) to be used
in service reporting and chargeback reports. You should review your chargeback and
accounting jobs to see what information they use to associate a given piece of work with a
reporting entity. If any of the WLM constructs are used, you need to make sure that any
changes in the WLM constructs assigned to a given piece of work are reflected back into
the reporting jobs. If you follow our recommendations and make the changes in the WLM
policy of the incoming system before the actual cutover, then there should be no surprises
following the cutover, when all systems are using the same WLM policy.
You will have to decide how to handle default service classes - do you use the one from
the incoming system, or the one currently used in the target sysplex?
The classification rules for the two policies can, and probably will, clash. That is, on the
incoming system you may assign JES job class A to PRODHOT, while on the target
sysplex it may be assigned to TESTSLO.
You will definitely have SYSTEM and SYSSTC service classes in both policies, but the
address spaces assigned to each may be different across the two policies.
So, while it may initially appear easier just to create a superset of the two policies, by the time
you are finished, it will have taken as much effort as if you had merged the service classes.
Which brings us to the other approach: merge the service classes from the two policies. This
may seem like more work up front, but it will result in a system that provides better
performance and is more manageable.
If you work through the checklist in 4.3, Considerations for merging WLMplexes on page 54,
by the time of the merge, there should be no further work required. The one thing you must do
immediately before and after the merge is to check the RMF Workload Activity reports for
each service class in each system in the plex. The objective is to ensure that bringing the
system into the sysplex has not had any adverse impact on performance.
4.5.1 Tools
The following tools are available, and may help you prepare for the merge of the WLMs:
The WSC Migration Checklist is available at:
http://www.ibm.com/servers/eserver/zseries/zos/wlm/pdf/wsc/pdf/v2.0_guide.pdf
4.5.2 Documentation
The following documentation may be useful during the merge process:
Resource Measurement Facility Performance Management Guide, SC33-7992
z/OS Intelligent Resource Director, SG24-5952
z/OS MVS Planning: Workload Management, GA22-7602
Chapter 5.
GRS considerations
This chapter discusses the following aspects of moving a system that is using GRS to
manage multi-system resource serialization into a sysplex where all the existing systems are
using GRS:
Is it necessary to merge GRS environments, or is it possible to have more than one GRS
environment in a single sysplex?
A checklist of things to consider should you decide to merge the GRSplexes.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Mixed complex
Another possibility, if you are executing in Ring mode, is that the GRSplex contains systems
that are not in the sysplex. This configuration is called a mixed GRS complex. In this
environment, those systems that are in the sysplex will use XCF for GRS communication,
while those systems that are outside the sysplex will need their own dedicated CTCs between
each other and each of the systems in the sysplex to participate in the GRSplex.
One approach is to use the GRS ISGNQXIT exit to change the resource name based on which
subplex the resource resides in; however, there are problems with this approach in that
GQSCAN does not see the modified resource name, and the D GRS command also does not show the
modified resource name.
In a Star complex, each system communicates directly with a CF instead of using the RSA to
communicate with each other. A global request can be satisfied much faster, requiring as little
as two signals to gain access to a resource. Typical GRS lock structure response times are in
the range of 10-30 microseconds, depending on the speed of the processors and links.
Improved availability
In a Ring complex, the ring is disrupted whenever a system fails unexpectedly. During ring
rebuild, global requests cannot be processed. Additionally, as the Ring complex grows, the
time to rebuild the ring increases for two reasons:
The total number of resources managed in the ring is the sum of all the resources on each
system, and each resource needs to be re-synchronized before a system can rejoin the
ring.
The number of systems that need to interact to rejoin the ring is increased. The ring is
rebuilt one system at a time.
In a Star complex, if a system fails, the surviving systems do not need to communicate to
continue global request processing. The failure of the system is perceived as a large DEQ
request (i.e. the DEQ of all resources held by requestors on the failed system).
Requirements
To ensure availability, you should have at least two CFs that are connected to all systems in
the sysplex, and have all those CFs specified on the PREFLIST for the lock structure. If the
CF containing the lock structure fails, or one of the systems loses connectivity to that CF, the
lock structure will automatically rebuild to an alternate CF. If any system is unable to access
the lock structure, GRS will put that system into a wait state. The two CFs do not have to be
failure-isolated.
If you are executing a mixed GRS complex, you will not be able to convert to Star mode. If you
want to convert to Star mode, you will have to either convert the non-sysplex systems to
sysplex systems, or remove them from the GRSplex. To decide which way to go, first you will
have to determine the level of data sharing between the sysplex and non-sysplex systems. If
there is a need to share the data, then you will have to convert the non-sysplex systems to
sysplex systems. If not, then you can remove the non-sysplex systems from the GRSplex.
Table 5-1 Considerations for merging GRSplexes (recovered entries)

Consideration                                                      Note   Type    Done?
If you are already using GRS Star in the target sysplex, ensure           B,G,P
the ISGLOCK structure is appropriately sized for the new number
of systems in the sysplex.
Review the GRS RNL of both the target sysplex and the incoming            B,G,P
system to ensure they adhere to the current IBM and ISV GRS
RNL recommendations.
Ensure all data set names on the incoming system conform to the           B,G,P
established sysplex-wide data set naming standards used in the
target sysplex.
All the systems in the GRSplex must use common GRS exits.          8, 9   B,G,P
Ensure the GRS exits of the incoming system are exactly the
same as those in the target sysplex.
All the systems in the GRSplex must use a common RNL. Ensure              B,G,P
the RNL of the incoming system is exactly the same as that in the
target sysplex. Consider using the RNL wildcard character
support to make the RNL merge easier.
The Type specified in Table 5-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 5-1 are described below:
1. We recommend converting to Star mode to prevent any potential performance problems
caused by increasing the number of systems in the GRS Ring. For instructions on how to
convert to a Star complex, see Chapter 6, Steps in converting to a Star Complex in z/OS
MVS Planning: Global Resource Serialization, SA22-7600.
2. If you wish to use GRS in Star mode, the sysplex CDSs must be formatted (using the
IXCL1DSU program) with the GRS keyword. To find out if the sysplex CDSs have already
been formatted with this keyword, issue a D XCF,C,TYPE=SYSPLEX command. The response
will tell you whether GRS Star mode is supported by each CDS.
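If they have not been formatted with the GRS keyword, new sysplex CDSs must be formatted
and brought into use. The following job is a sketch only; the sysplex name, data set name,
volume, MAXSYSTEM value, and ITEM counts are placeholders that must match your own
configuration:

//FMTCDS   JOB (ACCT),'FORMAT CDS',CLASS=A
//* Sketch: format a sysplex CDS that supports GRS Star
//FORMAT   EXEC PGM=IXCL1DSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINEDS SYSPLEX(PLEX1)
    DSN(SYS1.XCF.CDS03) VOLSER(CDS001)
    MAXSYSTEM(8)
    CATALOG
    DATA TYPE(SYSPLEX)
      ITEM NAME(GRS) NUMBER(1)
      ITEM NAME(GROUP) NUMBER(100)
      ITEM NAME(MEMBER) NUMBER(200)
/*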
3. To convert to a Star complex, you will need to change the following members in Parmlib:
IEASYSxx. You will need to modify the GRS parameter to GRS=STAR.
GRSCNFxx. If you are using the default component trace Parmlib member called
CTIGRS00, this member is no longer required.
For instructions on what members to change in Parmlib to implement a Star complex, see
Chapter 5, Defining Parmlib Members for a Star Complex in z/OS MVS Planning: Global
Resource Serialization, SA22-7600.
4. For instructions on how to use the GRS tool ISGSCGRS to help size the ISGLOCK
structure, see Chapter 5, Sizing the ISGLOCK Structure in z/OS MVS Planning: Global
Resource Serialization, SA22-7600.
IBM provides a Web-based wizard to help you to size the ISGLOCK structure. The wizard
can be found at this IBM site:
http://www.ibm.com/servers/eserver/zseries/cfsizer/
5. Review the GRS RNL of both your target sysplex and your incoming system to ensure
they adhere to the current IBM and ISV RNL recommendations. Your review will help you
identify any redundant RNL entries which you can delete prior to merging the RNLs. You
may also identify some recommendations that are missing from your RNL.
IBM provides a list of default RNLs. This list is not very comprehensive because the
resources required in each RNL tend to vary from site to site. For the list of default RNLs,
see Chapter 2, RNL Considerations in z/OS MVS Planning: Global Resource
Serialization, SA22-7600.
IBM provides general recommendations, as well as specific suggestions, on known
resources that are good RNL candidates. See Chapter 2, RNL Considerations and RNL
Candidates in z/OS MVS Planning: Global Resource Serialization, SA22-7600. The list
covers the following topics:
CICS
CVOLs (hopefully you are not still using these!)
DAE
DB2
DFSMShsm
DFSMS/MVS
IMS
ISPF or ISPF/PDF
JES2
JES3
System Logger
RACF
Temporary data sets
TSO/E
VIO journaling data sets
VSAM
After this APAR is installed, the RNL exit point ISGGREX0, with the entry points of
ISGGSEEX, ISGGSIEX and ISGGCREX, will no longer be supported. These modules
receive control whenever an ENQ, DEQ, or RESERVE request is issued for a resource.
The default exits simply return to GRS without modifying the data.
During GRS initialization, if one or more of these modules are detected, message ISG351I
is issued to indicate these modules are no longer supported. This message is purely
informational. There is no action required if you are executing the default RNL modules.
The function provided by these modules has been replaced by a call to ISGNQXIT. If you
have modified these modules, you will need to review whether you need to duplicate the
function previously provided by the superseded RNL modules. For more information about
ISGNQXIT, see Chapter 32, ISGNQXIT - ENQ / DEQ Installation Exit in z/OS MVS
Installation Exits, SA22-7593.
A sample of the exit ISGNQXIT and its supporting documentation can be found on the IBM
Redbooks site:
ftp://www.redbooks.ibm.com/redbooks/SG246235/
9. GRS requires RNL consistency across the sysplex. If a system is IPLed with RNLs that do
not match the rest of the sysplex, GRS will put that system into a wait state. However, the
GRS exits may make different decisions on different systems; GRS does not check that
the same exits are used on all systems.
GRS processing for ENQ, DEQ, or RESERVE requests should yield the same result on
every system in the GRSplex. If they don't, resource integrity cannot be guaranteed. To
ensure resource integrity, your ENQ/DEQ exit routines and your RNL should be identical
on all systems in the GRSplex.
The process for aligning the ENQ/DEQ modules and the RNL would be:
a. If required, merge the ENQ/DEQ exit requirements into a single ENQ/DEQ exit.
b. If required, implement the single ENQ/DEQ exit on all the merging systems.
c. Create a single RNL, by consolidating the RNL from the incoming system and the
active RNL from the target sysplex.
d. Activate the consolidated RNL on all the merging systems.
If you have customized the ENQ/DEQ modules, you will need to merge your requirements
into a single exit. Then you will have to install this exit on the incoming system and all the
systems in the target sysplex. The ENQ/DEQ exit points are:
ISGGREX0 (if your systems do not have APAR OW49779 installed)
ISGNQXIT (if your systems do have APAR OW49779 installed).
As previously mentioned, OW49779 is the compatibility APAR for z/OS 1.2. ISGNQXIT
has already been discussed in a previous consideration that discussed compatibility
maintenance. For information about ISGGREX0, see Section 2, ISGGREX0 in OS/390
MVS Installation Exits, SC28-1753.
Once you have installed the exits, create a single RNL by consolidating the RNL from the
incoming system with the RNL from the target sysplex.
During GRS initialization, if there is a mismatch between the RNL on the incoming system
and the active RNL in the target sysplex, the incoming system will not be allowed to join
the target sysplex. The incoming system will go into a non-restartable wait state X'0A3'.
If wildcard character support is available on all the merging systems, we recommend you
use this feature, as it may make RNL consolidation easier.
Wildcard character support has another advantage. It tends to make RNLs smaller and
simpler. The RNL has a maximum size of 61 K. If the merged RNL is larger than this, you
won't be able to implement it. Most sites won't reach the limit. For information on
determining the size of your RNL, see Chapter 16, ISG251I in z/OS MVS System
Messages Volume 9 (IGF-IWM), SA22-7639.
After you have made the changes to create a single RNL that can be used on all the
merging systems, use the Symbolic Parmlib Parser to check your changes are
syntactically correct. For more information, see Appendix B, Symbolic Parmlib Parser in
z/OS MVS Initialization and Tuning Reference, SA22-7592.
To change the active RNLs when executing in Star mode:
Use the SET GRSRNL command. For more information, see Chapter 6, Changing RNLs
for a Star Complex in z/OS MVS Planning: Global Resource Serialization, SA22-7600.
To change the active RNLs when executing in Ring mode, where the sysplex equals the
complex:
Use the SET GRSRNL command. For more information, see Chapter 9, Changing RNLs
for a Ring in z/OS MVS Planning: Global Resource Serialization, SA22-7600.
To change the active RNLs when executing in Ring mode, in a mixed GRS complex:
Remove the non-sysplex systems from the GRSplex.
Use the SET GRSRNL command. For more information, see Chapter 9, Changing RNLs
for a Ring in z/OS MVS Planning: Global Resource Serialization, SA22-7600.
Re-IPL the non-sysplex systems with the new consolidated RNL.
At this point, both your incoming system and your target sysplex will be using an identical
ENQ/DEQ exit and RNL.
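For illustration only (the member suffix, qnames, and resource names are placeholders), a
consolidated GRSRNLxx member that exploits the wildcard (pattern) support might contain
entries such as these:

   /* GRSRNL02 - consolidated RNL for the merged GRSplex            */
   RNLDEF RNL(INCL) TYPE(PATTERN)  QNAME(SYSDSN)   RNAME(PROD.*)
   RNLDEF RNL(EXCL) TYPE(SPECIFIC) QNAME(SYSIGGV2) RNAME(ucat.fred)

In Star mode, such a member would then be activated with the SET GRSRNL=02 command.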
10.In a mixed GRS complex, those systems participating in the sysplex will use XCF for GRS
communication. Those systems outside the sysplex will need their own dedicated CTCs
between each other and the systems in the sysplex in order to participate in the GRSplex.
You will need to provide dedicated communication links for GRS traffic between the
incoming system and the non-sysplex systems in the GRSplex. For information about
configuring the link, see Chapter 8, Designing a Mixed Complex in z/OS MVS Planning:
Global Resource Serialization, SA22-7600.
When the communication links are implemented, use the following process to enable the
non-sysplex systems for communication with the incoming system:
a. Modify GRSCNFxx on the non-sysplex systems. Add the CTC parameter to specify the
device numbers of the communications links to be used between the incoming system
and the non-sysplex systems.
b. IPL the non-sysplex systems so that the CTC links are enabled on those systems prior
to the incoming system merging with the target sysplex.
11.You will need to update your Operator instructions and make your Operations staff aware
of the following changes:
The output from a D GRS command will show all the systems in the merged sysplex, not
only the incoming system or the systems in the target sysplex.
When the incoming system joins the target sysplex, the V GRS command will no longer
be supported. To remove the incoming system from the merged sysplex, they will have
to use the V XCF,sysname,OFFLINE command.
For information about these commands, see Chapter 4, Controlling a Global Resource
Serialization Complex and Chapter 4, Removing a System from the XCF Sysplex in
z/OS MVS System Commands, SA22-7627.
12.If you were sharing resources between the incoming system and the target sysplex prior to
the merge, you may have been using Reserves to ensure data integrity for those
resources. An example is a catalog that is shared between systems that are not all in the
same GRSplex. Such resources should have an entry similar to the following:
RNLDEF RNL(EXCL) TYPE(SPECIFIC) QNAME(SYSIGGV2) RNAME(ucat.fred)
If the incoming system is the only system outside the target sysplex that is sharing a given
resource, you should be able to remove that entry from the Exclusion list after the merge is
complete. This will remove any unnecessary Reserves.
GRSCNFxx:
If you are using the default component trace Parmlib member called CTIGRS00,
this member is no longer required.
For the instructions on what members to change in Parmlib to implement a Star complex,
see Chapter 5, Defining Parmlib Members for a Star Complex in z/OS MVS Planning:
Global Resource Serialization, SA22-7600.
2. IPL your incoming system into your target sysplex.
GRSCNFxx:
Replace this member on the incoming system with a copy of this member from the
target sysplex. This is to ensure parameters such as TOLINT are consistent across
the entire sysplex.
For information about what members to change in Parmlib, and the meaning of the
different parameters, see Chapter 8, Processing Options in a Sysplex in z/OS MVS
Planning: Global Resource Serialization, SA22-7600.
2. IPL your incoming system into your target sysplex.
GRSCNFxx:
Replace this member on the incoming system with a copy of this member from the
target sysplex. This is to ensure parameters such as TOLINT are consistent across
the entire sysplex.
Modify the CTC parameters to specify the device numbers of the communications
links to be used between the incoming system and the non-sysplex systems.
For information about what members to change in Parmlib, and the meaning of the
different parameters, see Chapter 8, Processing Options in a Mixed Complex in z/OS
MVS Planning: Global Resource Serialization, SA22-7600.
2. IPL your incoming system into your target sysplex.
5.4.1 Tools
The following tools may be of assistance to you during the merge process:
ISGSCGRS
This tool will enable you to size the ISGLOCK structure. For more information, see Chapter 5,
Sizing the ISGLOCK Structure in z/OS MVS Planning: Global Resource Serialization,
SA22-7600.
5.4.2 Documentation
The following publications may be of assistance to you during the merge process:
OS/390 MVS Installation Exits, SC28-1753
z/OS DFSMShsm Implementation and Customization Guide, SC35-0418
z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS JES2 Initialization and Tuning Guide, SA22-7532
z/OS MVS Initialization and Tuning Reference, SA22-7592
z/OS MVS Installation Exits, SA22-7593
z/OS MVS Planning: Global Resource Serialization, SA22-7600
z/OS MVS Setting Up a Sysplex, SA22-7625
z/OS MVS System Commands, SA22-7627
z/OS MVS System Messages Volume 9 (IGF-IWM), SA22-7639
z/OS Security Server RACF System Programmer's Guide, SA22-7681
z/OS TSO/E General Information, SA22-7784
Chapter 6.
SMF considerations
This chapter discusses the considerations for managing SMF data for a system that is being
moved into a sysplex.
Billing users
Reporting reliability
Analyzing the configuration
Scheduling jobs
Summarizing direct access volume activity
Evaluating data set activity
Profiling system resource use
Maintaining system security
SMF formats the information that it gathers into system-related or job-related records.
System-related SMF records include information about the configuration, paging activity, and
workload. Job-related records include information on the CPU time, SYSOUT activity, and
data set activity of each job step, job, APPC/MVS transaction program, and TSO/E session.
It is not possible to have one system collecting all SMF data for all systems within a sysplex.
Each system within the target sysplex will still need to collect system-specific SMF data.
However, some SMF records are sysplex-wide and only need to be collected by one system
within the sysplex. Post-processing of SMF data may change on the incoming system,
depending on the configuration within the target sysplex. To help understand the SMF
considerations when merging a system into a sysplex refer to Table 6-1 on page 84.
In line with the sysplex target environment options outlined in 1.2, Starting and ending
points on page 2, moving the incoming system into a BronzePlex would typically be done to
obtain the benefits of PSLC or WLC charging. This would mean that the incoming system
would have limited sharing and the collection, switching, management and post-processing of
SMF data would not have to change.
Moving the incoming system into a GoldPlex would be done to share the same system
software environment (sysres) and maintenance environment, and therefore you will need to
review this chapter to ensure any SMF considerations are addressed. The target sysplex may
differ from the incoming system in the way it switches, manages, and post-processes SMF
data.
Moving the incoming system into a PlatinumPlex would result in everything being shared, and
jobs typically being able to run on any system in the sysplex. This configuration would allow
you to obtain the maximum benefits from being in the sysplex and provide you the greatest
flexibility with the management of SMF data. You will need to review this chapter to ensure
any SMF considerations are addressed because the target sysplex may differ from the
incoming system in the way it switches, manages, and post-processes SMF data.
[Table 6-1 Considerations for merging SMF (checklist): one entry is Type B, G, P and the
remainder are Type G, P; the numbered notes below describe each consideration.]
The Type specified in Table 6-1 on page 84 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes in Table 6-1 on page 84 are described below:
1. Each system within the target sysplex will need to collect system-specific SMF data in
preallocated SMF data sets. You must allocate the data sets on DASD and catalog the
data sets. We recommend they be cataloged in the Master Catalog.
You should have a minimum of two data sets per system for SMF to use, and we
recommend that you run with at least three to ensure availability. Select DASD that can
handle the volume of data that SMF generates at your installation. If the device is too slow,
SMF places the data it generates in buffers. The buffers will eventually fill, which would
result in lost SMF data.
You will also need to consider the size of the SMF data sets. For example, if the incoming
system is to be used as a target for DB2 subsystem restarts using ARM or a similar
automatic restart mechanism, there may be additional SMF data collected on the
incoming system during these times, especially if you use DB2 tracing. For more
information about allocating SMF data sets, including VSAM control interval and buffer
sizes and so on, refer to Chapter 2, System Requirements and Considerations, in MVS
System Management Facilities, SA22-7630.
If you plan to have a shared Parmlib configuration, you should consider sharing the
SMFPRM member. Having a good naming convention for the SMF data sets will allow you
to share the SMFPRM member, as shown in Example 6-1. You will need to allocate the
appropriate data sets on the incoming system prior to the merge.
Example 6-1 Example data set specification in the SMFPRM Parmlib member
... ... ...
DSNAME(SYS1.&SYSNAME..MAN1,           /* INDIVIDUAL MANX                  */
       SYS1.&SYSNAME..MAN2,
       SYS1.&SYSNAME..MAN3)
... ... ...
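The SMF recording data sets themselves are VSAM data sets defined with IDCAMS. The
following job is a sketch only; the data set name, volume, size, and VSAM attributes
(control interval size, share options, and so on) should be checked against the
recommendations in MVS System Management Facilities, SA22-7630:

//DEFMAN   JOB (ACCT),'ALLOC SMF MAN',CLASS=A
//* Sketch: allocate one SMF recording data set for system SYSA
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER (NAME(SYS1.SYSA.MAN1) -
         VOLUMES(SMF001) -
         CYLINDERS(100) -
         NONINDEXED -
         CONTROLINTERVALSIZE(4096) -
         SHAREOPTIONS(2) -
         SPEED -
         REUSE)
/*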
2. When the current recording data set cannot accommodate any more records, the SMF
writer routine automatically switches recording from the active SMF data set to an empty
SMF data set, and then passes control to the IEFU29 SMF dump exit. A console message
is also displayed, informing the operator that the SMF data set needs to be dumped. There
are two ways of managing the SMF data set switch:
a. You could use the IEFU29 dump exit to initiate the SMF dump program, or
b. You may choose to use automation to trap the console message, and have automation
then initiate the SMF dump process.
Either way is valid, but we recommend having a consistent approach for all systems in the
target sysplex, especially if you have the IEFU29 exit active in a shared SMFPRM
member.
The SMF dump program (IFASMFDP) is used to empty the SMF data sets. It transfers the
contents of the full SMF data set to another data set, and resets the status of the dumped
data set to empty so that SMF can use it again for recording data. We recommend that
you run the SMF dump program on the system that owns the SMF data sets to be cleared;
do not run the SMF dump program from one system in an attempt to clear a SMF data set
used by another system within the target sysplex.
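A minimal sketch of such a dump job follows. The data set names are placeholders;
OPTIONS(ALL) dumps the recording data set and then resets it to empty:

//SMFDUMP  JOB (ACCT),'DUMP SMF',CLASS=A
//* Sketch: dump and clear one full SMF recording data set on SYSA
//DUMP     EXEC PGM=IFASMFDP
//SYSPRINT DD  SYSOUT=*
//MANIN    DD  DISP=SHR,DSN=SYS1.SYSA.MAN1
//DUMPOUT  DD  DISP=(NEW,CATLG),DSN=SMF.SYSA.DAILY,
//             UNIT=SYSDA,SPACE=(CYL,(200,100),RLSE)
//SYSIN    DD  *
  INDD(MANIN,OPTIONS(ALL))
  OUTDD(DUMPOUT,TYPE(0:255))
/*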
3. The SMF parameters are specified in the SMFPRM member of Parmlib. These parameters
control what data SMF collects and how it is recorded.
A shared SMFPRM member can be set up similar to that shown in Example 6-2, using
system symbols in the DSNAME and SID parameters.
Example 6-2 Example SMFPRM Parmlib member
ACTIVE                                /* ACTIVE SMF RECORDING             */
DSNAME(SYS1.&SYSNAME..MAN1,           /* SMF DATA SET NAMES               */
       SYS1.&SYSNAME..MAN2,           /* SMF DATA SET NAMES               */
       SYS1.&SYSNAME..MAN3)           /* SMF DATA SET NAMES               */
NOPROMPT                              /* NO PROMPT                        */
REC(PERM)                             /* TYPE 17 PERM RECORDS ONLY        */
INTVAL(05)
SYNCVAL(00)
MAXDORM(3000)                         /* WRITE AN IDLE BUFFER AFTER 30 MIN*/
STATUS(010000)                        /* WRITE SMF STATS AFTER 1 HOUR     */
JWT(0010)                             /* 522 AFTER 10 MINUTES             */
SID(&SYSNAME(1:4))                    /* SYSTEM ID IS &SYSNAME            */
LISTDSN                               /* LIST DATA SET STATUS AT IPL      */
LASTDS(MSG)                           /* DEFAULT TO MESSAGE               */
NOBUFFS(MSG)                          /* DEFAULT TO MESSAGE               */
SYS(TYPE(30,70:79,89,100,101,110),
    EXITS(IEFU83,IEFU84,IEFU85,IEFACTRT,
          IEFUJV,IEFUSI,IEFUJP,IEFUSO,IEFUTL,IEFUAV),
    INTERVAL(SMF,SYNC),NODETAIL)
SUBSYS(STC,EXITS(IEFU29,IEFU83,IEFU84,IEFU85,IEFUJP,IEFUSO,
           IEFACTRT),
       INTERVAL(SMF,SYNC),
       TYPE(30,70:79,89,100,101,110))
You will need to review the SMFPRM member prior to sharing the member across
systems in the target sysplex to ensure the parameters are appropriate for the incoming
system.
For example, you will need to check if the record types or subtypes specified in the
SMFPRM member currently being used in the target sysplex are appropriate for the
incoming system to ensure that all the required record types are collected. You will also
need to check if the exits specified in the SMFPRM member are appropriate or available
on the incoming system. If you have differing requirements for SMF parameters between
systems, you will need to define a system-specific SMFPRM member in Parmlib.
4. If you already merge SMF data from other systems in the target sysplex, the SMF data
from the incoming system should be handled the same way. Merging the SMF data would
help with the centralization of report processing, like Workload Licensing Charge (WLC)
reporting. Any billing and other post-processing jobs (including performance reporting) on
the incoming system may need to be updated to point to the new merged SMF archive
data sets.
You will need to review the SMF data retention period of the target sysplex prior to merging
the data from the incoming system to ensure the retention period is appropriate. For
example, you don't want to merge data from the incoming system that needs to be kept for
seven years, into the target sysplex that has a SMF data retention period of only six
months!
You may also want to look at spinning off the SMF records required for catalog recovery to
a set of data sets with a different HLQ and different user catalog than the SMF archive
data sets. We recommend this in a shared or non-shared user catalog environment. For
more information on catalog backup and recovery, see the IBM Redbook ICF Catalog
Backup and Recovery: A Practical Guide, SG24-5644.
You can use the IBM DFSORT program product, or its equivalent, to sort and merge
SMF data sets from each system into a single dump data set. The MVS System Logger
provides a SAMPLIB program, IXGRPT1, to show how you might write a program to
analyze the dump data set input and summarize system logger activity across the sysplex.
Refer to 6.3.1, Tools on page 90 for more information.
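The following job is a sketch of such a merge; the data set names are placeholders, and
the sort fields assume the standard SMF record header (date at position 11, time at
position 7), which you should verify against the record layouts in MVS System Management
Facilities, SA22-7630:

//SMFMERGE JOB (ACCT),'MERGE SMF',CLASS=A
//* Sketch: sort/merge the dumped SMF data from each system by date
//* and time so post-processing sees one chronological stream
//SORT     EXEC PGM=SORT
//SYSOUT   DD  SYSOUT=*
//SORTIN   DD  DISP=SHR,DSN=SMF.SYSA.DAILY
//         DD  DISP=SHR,DSN=SMF.SYSB.DAILY
//SORTOUT  DD  DISP=(NEW,CATLG),DSN=SMF.PLEX.DAILY,
//             UNIT=SYSDA,SPACE=(CYL,(400,200),RLSE)
//SYSIN    DD  *
  SORT FIELDS=(11,4,PD,A,7,4,BI,A),EQUALS
/*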
5. You will need to ensure that the same user SMF record type is not being used by two
different products/applications within the merged sysplex. Record types 128 through 255
are available for user-written records, which could be used by IBM and ISV program
products or customer-written applications. For example, if a customer application on the
incoming system is writing user SMF record 255 and an ISV product is writing the same
record number on the target sysplex, there will be issues when merging the SMF data, and
subsequently when using common post-processing or reporting programs.
This is primarily an issue if you merge the SMF data, but we recommend that any clash of
record type numbers be addressed when merging the incoming system into the target
sysplex to ensure there are no problems in the future. Most ISV products allow you to
select the user SMF record number to use.
6. SMF provides exits that allow installations to add installation-written routines to perform
additional processing. The SMF exits receive control at different times as a job moves
through the system. They receive control when specific events occur, such as when a job
CPU-time limit expires. The exit routines could collect additional information, cancel jobs,
or enforce installation standards.
The EXITS parameter within the SMFPRM member of Parmlib specifies which SMF exits
are to be invoked, as shown in Example 6-2 on page 86. If an exit is not specified, it is not
invoked. If this parameter is not specified, all SMF system exits are invoked. NOEXITS
specifies that SMF exits are not invoked. You can specify EXITS on the SYS and SUBSYS
statements of SMFPRM. Your choice of SYS or SUBSYS depends on the scope of work
you want to influence (system-wide or subsystem-wide), as follows:
On the SYS parameter, specifies the exits that are to affect work throughout the
system, regardless of the subsystem that processes the work.
On the SUBSYS parameter, specifies the exits that are to affect work processed by a
particular SMF-defined subsystem (JES2, JES3, STC, ASCH, or TSO). The SUBSYS
specification overrides the SYS specification.
Ensure the appropriate exits are available with the same, or similar, functionality, by
reviewing the function of each of the SMF exits active on the incoming system. This is
specifically required if the incoming system will be sharing the sysres with other systems
in the target sysplex.
Ensure the EXIT parameters are appropriate for the incoming system by reviewing the
SMFPRM member. This needs to be done prior to sharing the member across systems in
the target sysplex. If you have differing requirements for SMF parameters between
systems, you will need to define a system-specific SMFPRM member in Parmlib.
You can associate multiple exit routines with SMF exits, through the PROG Parmlib
member, at IPL, or while the system is running. To define SMF exits to the dynamic exits
facility, you must specify the exits in both PROG and SMFPRM. The system does not call
SMF exits that are defined to PROG only. If you do not plan to take advantage of the
dynamic exits facility, you need only define SMF exits in SMFPRM.
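If you do want to use the dynamic exits facility, each SMF exit routine must also be named on an EXIT statement in the PROG member. A minimal sketch follows; the module names are hypothetical placeholders for your own exit routines:
EXIT ADD EXITNAME(SYS.IEFU84)    MODNAME(IEFU84)    /* system-wide SMF exit       */
EXIT ADD EXITNAME(SYSSTC.IEFU29) MODNAME(IEFU29ST)  /* exit for the STC subsystem */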
The SMF exits available are:
IEFACTRT: The termination exit. This exit receives control on the normal or abnormal
termination of each job-step and job.
IEFUAV: The user account validation exit. This exit is used to validate the accounting
information of Transaction Program (TP) users.
IEFUJI: The job initiation exit. This exit receives control before a job on the input queue
is selected for initiation.
IEFUJP: The job purge exit. This exit receives control when a job is ready to be purged
from the system, after the job has terminated and all SYSOUT output that pertains to
the job has been written.
IEFUJV: The job validation exit. This exit receives control before each job control
statement (or cataloged procedure) in the input stream is interpreted.
IEFUSI: The step initiation exit. This exit receives control before each job step is
started (before allocation).
IEFUSO: The SYSOUT limit exit. This exit receives control when the number of records
written to an output data set exceeds the output limit for that data set.
IEFUTL: The time limit exit. This exit receives control when the Job, Step or JWT time
limits expire.
IEFU29: The SMF dump exit. This exit receives control when an SMF data set
becomes full.
IEFU83: The SMF record exit. This exit receives control before each record is written to
the SMF data set.
IEFU84: The SMF record exit. This exit receives control when the SMF writer routine is
branch-entered and is not entered in cross-memory mode.
IEFU85: The SMF record exit. This exit receives control when the SMF writer routine is
branch-entered and is entered in cross-memory mode.
You can use the D SMF,O command to display the current SMF options, as shown in
Example 6-3 on page 89, which displays the exits that are active on the system, and
whether they are defined at the SYS or SUBSYS level.
7. To eliminate the collection of duplicate data and help reduce the amount of SMF data
collected within a target sysplex, we recommend checking if any SMF records being
collected are sysplex-wide. These sysplex-wide SMF records will only need to be collected
from one system in the target sysplex. There may be applications or ISV products that
write sysplex-wide SMF records, but among IBM products, we were only able to identify
the RMF cache data gatherer that did this.
In a PlatinumPlex configuration, where several systems have access to the same
storage subsystem, it is sufficient to start the cache data gatherer on just one
system. Running the gatherer on more than one system creates several copies of identical
SMF records type 74-5 (Monitor I) or VSAM records (Monitor III). Since RMF has no
sysplex control over the gatherer options, it cannot automatically deselect cache gathering
on all but one system.
However, you can achieve this by taking advantage of shared Parmlibs and by the use of
system symbols. Define a symbol &CACHEOPT, or similar symbol name, in Parmlib
member IEASYMxx (assuming that all the systems in the target sysplex are running in an
LPAR) as seen in Example 6-4.
Example 6-4 IEASYM example for RMF CACHE option setting
SYSDEF SYMDEF(&CACHEOPT='NOCACHE')        /* Global value           */
SYSDEF LPARNAME(PRD1)
       SYMDEF(&CACHEOPT='CACHE')          /* Local value for PRD1   */
Then update or create a shared RMF Parmlib member, ERBRMFxx, with the appropriate
symbolic as seen in Example 6-5.
Example 6-5 ERBRMF example for CACHE option setting
...
&CACHEOPT.
...
Then start RMF on all systems in the sysplex using the member option, where xx is the
ERBRMF member suffix, as seen in Example 6-6 on page 90.
With this definition, the symbol &CACHEOPT resolves to NOCACHE on all systems except
PRD1, where it resolves to CACHE.
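As a rough sketch of such a start command (the exact syntax for passing the Monitor I member to your RMF procedure should be checked against the RMF User's Guide before use), it might look like:
S RMF.RMF,,,(MEMBER(xx))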
6.3.1 Tools
The following tools may be helpful in managing your SMF data:
SMFSORT
SMFSORT is sample batch JCL, residing in SYS1.SAMPLIB, that invokes the SORT
program to sort SMF records.
IXGRPT1
System logger provides a program, IXGRPT1 in SYS1.SAMPLIB, to show how you might
write a program to analyze the dump data set input and summarize system logger activity
across the sysplex. System logger produces SMF record type 88 to record the system logger
activity of a single system in a sysplex; these records are written to the active SMF data set
on that system. Using the IBM DFSORT program product, or its equivalent, you can sort and
merge SMF data sets from each system into a single dump data set. Refer to Chapter 9,
System Logger Accounting in MVS System Management Facilities, SA22-7630 for more
information.
ERBSCAN
The ERBSCAN exec resides in hlq.SERBCLS and invokes module ERBMFSCN to scan an
SMF data set. The output listing of ERBMFSCN is then displayed in an ISPF EDIT screen.
You can then display a single RMF record by entering the command ERBSHOW recno in
the EDIT command line. In this case, the ERBSHOW exec is invoked as an EDIT macro,
which will then re-invoke this exec with the specified record number. Then the corresponding
record is formatted (broken into its data sections) and displayed in another EDIT window.
ERBSMFSC
The ERBSMFSC batch JCL resides in SYS1.SAMPLIB and invokes the ERBMFSCN
program to scan an SMF data set. The program will list a one-line summary for each SMF
record in the input data set.
EDGJSMFP
The EDGJSMFP batch JCL resides in SYS1.SAMPLIB and invokes the ICETOOL program to
give a count of each SMF record type found in the SMF data set.
6.3.2 Documentation
The following publications provide information that may be helpful in managing your SMF
data:
ICF Catalog Backup and Recovery: A Practical Guide, SG24-5644
MVS System Management Facilities, SA22-7630
z/OS Resource Measurement Facility (RMF) User's Guide, SC33-7990
Chapter 7.
JES2 considerations
This chapter discusses the following aspects of moving a JES2 system into a sysplex where
all the existing systems are members of the same JES2 Multi-Access Spool (MAS):
Is it necessary to merge JES2 environments, or is it possible to have more than one JES2
MAS in a single sysplex?
A checklist of things to consider should you decide to merge the JES2 configurations.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Note: A JESplex is equivalent to a JES2 Multi-Access Spool (MAS) configuration: the set of
systems that share common spool and checkpoint data sets. In this chapter, MAS and
JESplex have the same meaning.
Figure 7-1 Base sysplex target configuration: SYSA, SYSB, and SYSC share JES2 checkpoint and spool data sets, while SYSD has its own checkpoint and spool (the systems are connected by CTCs and a Sysplex Timer)
Figure 7-2 on page 96 shows another possible target configuration; this time it is a Parallel
Sysplex rather than a base sysplex. Once again, only three of the systems (SYSA, SYSB, and
SYSC) are sharing the spool and JES2 checkpoint data sets. In fact, in this case, the JES2
checkpoints for those three systems are held in the CF. Once again, SYSD is a single-member
MAS, not sharing its checkpoint or spool with any other systems.
Figure 7-2 Parallel Sysplex target configuration: SYSA, SYSB, and SYSC are in one JESplex, with the checkpoint in the Coupling Facility; SYSD is in its own JESplex, with its own checkpoint and spool, both on DASD
A single-member MAS, like SYSD in Figure 7-1 on page 95 and Figure 7-2, can become a
multi-member MAS at any time without the need to restart the running system. This has been
true since MVS/SP 1.3.6. In earlier releases, there were parameters that indicated whether
JES2 was operating as a MAS or as a single system, and a cold start was needed to go from
non-MAS to MAS operations in those earlier releases.
In this chapter we consider the following possible target configurations, as previously
described:
1. BronzePlex: This is a Parallel Sysplex where each JES2 has its own checkpoint and spool
data sets, and there is a one-to-one relationship between z/OS systems and JESplexes.
This is the easiest to implement, because few (if any) changes are required to the
incoming system. However, on an ongoing basis, this is also the most complex
configuration to maintain and operate, and it provides no workload balancing or
centralized management capability.
2. GoldPlex: This is a Parallel Sysplex with more than one JESplex. The amount of work
involved in merging the incoming system depends on whether it will continue to be a
single-member JESplex or whether it will join one of the existing JESplexes in the sysplex.
3. PlatinumPlex: This is a Parallel Sysplex with only one JESplex, meaning that all systems
are sharing the same JES2 checkpoint and spool data sets. This is the ideal scenario. It
might involve more work up front to achieve this, but will require less ongoing maintenance
and be easier to operate after the merge. It also provides the ability to do workload
balancing across all the systems in the sysplex.
The incoming system can come from a configuration where the JES2 is:
1. A single member MAS, with its own checkpoint and spool data sets, or
2. A multi-member MAS, where the whole MAS is being merged with the target sysplex.
The alternative to a MAS is Network Job Entry (NJE), which isolates the JES2 MAS
complexes but still allows transmission of spool data between them. The considerations
for such a configuration are:
This provides an isolated environment which can provide protection from other MVS
systems.
It allows you to isolate workloads and users for security reasons.
It reduces spool and checkpoint contention, although it can impose excessive I/O and
contention by double-spooling jobs and output that have to be transferred from one
MAS to the other.
It may be easier to implement initially.
It does nothing to decrease the operation and maintenance complexity and workload,
in fact it may even result in a more complex and difficult to operate environment.
It does nothing to help you balance workload across the sysplex, one of the main
reasons for implementing a sysplex in the first place.
You need to implement some method (JCL changes, automation, user exits) to ensure
the jobs and output end up in the intended MAS.
There are two approaches to moving the incoming system into the target sysplex:
You can keep the incoming system in its own MAS configuration, like SYSD in Figure 7-2
on page 96. At some later point you may wish to make the incoming system part of the
target sysplex JESplex.
You can merge the incoming system MAS with the target sysplex MAS at the time of the
merge.
We assume that the entire MAS that the incoming system is a part of (which might consist of
just one system, the incoming system, or it might consist of a number of systems) is being
merged into the target sysplex at the same time. While it is possible, we feel it is very unlikely
that you would be taking the incoming system out of an existing multi-member MAS and
moving it into the target sysplex on its own, and therefore we do not address this scenario.
Table 7-1 Considerations checklist (columns: Consideration, Note, Type, Done; all entries apply to types B and G)
The Type specified in Table 7-1 on page 98 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
1. Review the JOBCLASS initialization statements:
a. You can not have the same job class served by both WLM and JES2 initiators. The
MODE parameter specifies the initiator type. We recommend that all jobs with the
same WLM service class be managed by the same type of initiator (WLM or JES).
WLM works more effectively when all initiators servicing a given WLM service class are
the same type.
b. If you are using the SCHENV parameter on the JOBCLASS statement, are the
scheduling environments defined to WLM in the target sysplex? For more information
about WLM, refer to Chapter 4, WLM considerations on page 51.
2. Are JES2 exits being used in the incoming system? Check the EXIT and LOAD
statements to identify the exits being used. If exits are being used, are they still needed if
the system is going to be in the same sysplex as the target sysplex systems? Do they
need changes? Are there new exits that you should add, or new functions to existing exits,
because the incoming system is going to be in the same sysplex as the target sysplex
systems? It would be a good idea to check the exits and functions of the exits in the
existing target sysplex systems to see if any of those functions must be added to the
incoming system. As a general rule, however, the fewer exits you use, the better.
3. You need to decide what spool data, if any, that you are going to bring from the old MAS to
the new environment. You should test the offload and reload procedures in advance to
make sure everything runs smoothly for the actual cutover.
The work you will have in merging your incoming system, from a JES2 perspective, depends
on your initial JES2 configuration. If the incoming system comes from a single-member MAS
environment and preserves the same configuration after merging, your task will be very easy
to implement, with probably nothing to do. If the incoming system is part of a multi-member
MAS, the task may be a little more complex, because a multi-member MAS is likely to have a
larger workload, more exits, and more customization. However, the basic considerations
(from a JES2 perspective) are the same as if you are moving just a single system.
Table 7-2 Considerations for merging JESplexes (columns: Consideration, Note, Type, Done; all entries apply to types G and P, and the notes are described below). Among the considerations: check that the checkpoint data set is large enough for the additional workload, and verify the JES2 initialization parameters that must be the same across the JESplex.
The Type specified in Table 7-2 on page 99 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. Notes
indicated in Table 7-2 on page 99 are described below:
1. Will the systems in the target sysplex and the incoming system or JESplex share:
a. DASD
i. To be in a MAS configuration, the SPOOL and checkpoint data sets must be shared
among all members.
ii. If some volumes are not shared, you must have a way of preventing jobs that use
data sets on those volumes from running in systems not having access to them.
Using a WLM scheduling environment is one way to achieve this.
iii. Related products like DFSMShsm and SMS should also be shared, with the same
scope as the DASD that are shared.
b. RACF database. It would be very unusual to have a single JESplex span more than
one RACFplex. If you will not have a single RACFplex, you should reconsider whether
you really want a single JESplex.
c. Tape management database, like DFSMSrmm.
d. Master and user catalogs.
If the majority of resources are going to be accessible from all systems, then you should
definitely merge the incoming system into the target sysplex MAS. On the other hand, if
the incoming system will not have access to the resources of the existing systems, and
vice versa, then you should probably keep the incoming system in its own JESplex. The
one exception may be if the sysplex generates large amounts of JES output; in this case,
the benefits of having a single pool of output devices serving all systems may be sufficient
to warrant merging all systems into a single JESplex.
2. Each JESplex in a sysplex has a unique node identification. When merging one JESplex
into another, or an isolated system into a JESplex, you have to update all references to
the old node id.
Scan all /*XMIT, /*XEQ, or /*ROUTE XEQ (and, in z/OS 1.4 and later, //XMIT) JCL control
statements for explicit references to node ids, for example /*XEQ Nx. If the node
definitions were identical in both MASes, then you do not have to worry about this. On the
other hand, if there are duplicate node numbers pointing to different nodes (for example, in
MAS1, N17 points to node POK, while in MAS2, N17 points to node KINGSTON) then you
have to update any JCL to work in the target JESplex environment. You can use
TSO/ISPF option 3.14 (SUPERC) to find references to the node that is being changed. If
there are many to change, you can use the sample CLIST, macro and JCL shown in 7.5,
Tools and documentation on page 110.
If your installation uses DESTid names for route codes, update DESTID(xxxx) initialization
statements with the new node id. It is a good practice to use the node name instead of the
(numeric) node id, since it makes changes easier.
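For instance, a job routed by node name rather than by node number might carry JECL like the following. The job itself is hypothetical; POK is simply the node name used in the example that follows:
//RMTJOB   JOB (ACCT),'ROUTE BY NODE NAME'
/*ROUTE XEQ POK
//STEP1    EXEC PGM=IEFBR14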
Other systems in other sites that have NJE connections to the incoming system must be
notified of the node id change, and must change their processes and definitions in synch
with the merging of the incoming system into the JESplex.
As an example, let us take two nodes, POK and KINGSTON. These two nodes are in separate
JESplexes that are to be merged together. The new node will be POK, and KINGSTON
will be obsolete.
When the time comes to change the node, bring down node KINGSTON. This is needed
because the $TNODE command requires that the node be unconnected before the
command will take effect. On other nodes in the network, issue the
$TNODE(KINGSTON),NAME=newname command. In this case, newname is a name that will never
be used as a node in your network.
After issuing this command, issue the command $TDESTID(KINGSTON),NAME=POK. This will
cause all future output routed to KINGSTON to actually be sent to node POK.
To re-route existing output destined to node KINGSTON to node POK, issue the command
$R ALL,R=newname.*,D=POK.*. For pre-execution jobs, the command is
$R ALL,R=newname,D=POK. These two commands can be repeated to re-route any output or
jobs created using the old node number.
3. Is the current number of spool volumes in the target JESplex adequate to handle the
additional workload of the incoming system or JESplex? Additional spool volumes may be
required to handle the additional spool files and to better spread the I/O requests and
avoid spool contention.
Also, review the SPOOLNUM parameter of the SPOOLDEF statement, in the JES2
initialization parameters. It specifies the maximum number of spool volumes. The default
value is 32; the highest possible value is 253. This value is used when a cold start is done,
but subsequent to that, the SPOOLNUM value can only be increased by an operator
command ($T SPOOLDEF,SPOOLNUM=xxx).
Just in case a cold start is ever required, you should ensure that the value specified on this
parameter in the JES2 initialization parameters always reflects the actual value currently in
use.
All spool volumes in the MAS must be shared among all MAS members. Review the I/O
configuration to ensure access to all spool volumes by the incoming system or JESplex.
Consider if you need to use spool partitioning for the incoming system. Spool partitioning
can be a useful tool for increasing JES2 performance. Isolating frequently-run
spool-intensive jobs on separate volumes may improve JES2 performance by reducing the
load on the other spool volumes. Spool partitioning by job mask can be achieved by the
SPOOLDEF initialization statements and two installation exits, Exit 11 and Exit 12. Since
OS/390 2.10, you can get spool partitioning by system through the use of the command:
$T SPOOL(nnnnnn),SYSAFF=(sys,sys,...)
For more information about spool partitioning, refer to JES2 Initialization and Tuning
Guide, SA22-7532 and the IBM Redbook OS/390 Version 2 Release 10 Implementation,
SG24-5976.
Note: APAR OW49317 provides JES2 support for placing the spool data sets on 3390
volumes containing more than 10019 cylinders per volume. Be aware, however, that the
largest spool data set supported is still 64 K tracks.
4. The target checkpoint data set must be large enough to handle the additional workload
related to the incoming system or JESplex. Use the $D CKPTSPACE command to
determine the amount of free space currently available in the target JES2 checkpoint. If
you decide to change some JES2 parameters, be aware of those that affect the checkpoint
size, as shown in Table 7-3.
Table 7-3 JES2 parameters affecting checkpoint size
Initialization parameter           Default value
SPOOLDEF SPOOLNUM=                 32
SPOOLDEF TGSPACE=(MAX=)            16288
CKPTDEF LOGSIZE=                   1 (MODE=DUPLEX); 1 to 9 (MODE=DUAL)
CKPTSPACE BERTNUM=                 2 x JOBNUM + 100
JOBDEF JOBNUM=                     1000
JOBDEF RANGE= (range of JOBIDs)    1-9999
OUTDEF JOENUM=                     2.5 x JOBNUM
If you need more space, you can use the checkpoint reconfiguration dialog to move the
checkpoints to larger data sets or structures. See the section entitled Operator-initiated
Entrance into a Checkpoint Reconfiguration Dialog in JES2 Initialization and Tuning
Guide, SA22-7532.
In case of I/O errors or scheduled hardware maintenance, the checkpoint data set can be
replaced by the data set specified in the NEWCKPTn parameter of the CKPTDEF
initialization statement. It can be a CF structure or a data set on DASD. If these data sets
are not defined in your JES2 parm member, JES2 will not provide default data sets.
However, in that case, you can define these data sets during a checkpoint reconfiguration
dialog. If they are specified in the JES2 parms, these data sets are only defined to JES2
and not allocated. The size of these data sets or structures must be at least equal to the
size of the primary checkpoint. If you change the checkpoint data set size, remember to
also change the NEWCKPTn structure or data set size.
Important: The checkpoint data set must be large enough, or JES2 will not initialize.
5. The JES2 checkpoint structure size is a function of the size of the checkpoint data set. If
the target sysplex has its checkpoint in a CF and you changed the size of the checkpoint
data set, you must review the size of the JES2 structure. When you know how many 4 KB
records will be in the new checkpoint data set, you can use the CFsizer tool, available on
the Parallel Sysplex home page to calculate the new size of the JES2 structure in the CF:
http://www.ibm.com/servers/eserver/zseries/cfsizer/jes.html
Remember that JES2 pre-formats its structure, so it always looks 100% full.
To find out how to calculate the number of 4 KB records in the checkpoint data set, refer to
the HASP537 message issued during JES2 initialization.
Review the size of the structure identified on the NEWCKPTn statement. It must be large
enough to accommodate the primary checkpoint.
Important: The structure specified on the CKPT1 statement must be sufficient in size, or
JES2 will not initialize.
6. The MASDEF statement of the JES2 initialization controls the access to the checkpoint
data set through the following parameters:
HOLD: This specifies the minimum length of time a member of a MAS must maintain
control of the checkpoint data set after acquiring it. Setting this value too high needlessly
locks out other members in the configuration. Setting it too low causes more delays in
local JES2 processing and requires more frequent access (and overhead). Members with
a high JES2 workload must hold the checkpoint data set longer than ones with less
workload.
DORMANCY: This specifies both the minimum and the maximum time interval that a
member must wait, after releasing the checkpoint data set, before attempting to regain
control of it. This parameter can be used to stop a member monopolizing the checkpoint
data set by acquiring it too frequently.
Poor tuning of these parameters can reduce JES2 throughput and can lock out smaller
MAS members from access to the checkpoint, or lead to poor JES2 performance on
systems with high JES2 workload. A good example is some job scheduler products that
control a job's status by issuing JES2 commands. Poor tuning of these parameters can
make these products suffer long delays in JES2 environments.
Also, the communications among JESplex members can be affected by initialization
parameters values. When a member of a MAS queues a message to another member of
that MAS, there is no interrupt mechanism to inform the receiving member that a message
exists. The receiving member periodically reads the shared job queue record and
examines queue information for new messages. The length of time it takes for the
receiving member to recognize the queuing is controlled by:
MASDEF: HOLD and DORMANCY parameters.
NJEDEF: DELAY parameter specifies the maximum delay time for intranodal message
or command transmission.
To avoid delays, set the values on the MASDEF statement bearing in mind the profile of
the system and its position in the target MAS context. You can use the JES2 command
$D PERFDATA(QSUSE) to display the count and average wait time for $QSUSE requests
(access to the checkpoint).
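Both HOLD and DORMANCY are specified in hundredths of a second. As a sketch only (the values shown are illustrative and must be tuned for your own workload and number of MAS members), a MASDEF statement might look like:
MASDEF HOLD=50,DORMANCY=(25,500)   /* hold 0.5 sec; wait 0.25 to 5 sec before re-acquiring */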
7. In a MAS configuration, the JES2 checkpoint data set is used:
To maintain information about job input and output queues
For inter-JES2 communications
For spool serialization
To contain the spool space map
Review the placement of the primary and secondary checkpoint data sets. Common
placements of the checkpoint within a MAS would be:
a. The Primary checkpoint in a CF structure and the Secondary checkpoint on DASD:
Placing the primary checkpoint in a CF structure provides more equitable access for all
members of a MAS than would be the case if the checkpoint were on DASD. This is
because JES2 uses the CF lock to serialize access to the checkpoint. The CF lock is
better than the hardware RESERVE and RELEASE macros required for DASD,
because it ensures all members get fair access to the checkpoint through a first-in,
first-served queuing mechanism. The CF lock affects only the checkpoint, while the
hardware RESERVE macro locks the entire volume upon which the checkpoint data
set resides. Also, data can be transferred to and from a CF much faster than to and
from DASD.
To ensure the security of that data through the DUPLEX copy retained on DASD, we
recommend this placement for MAS configurations with four or more members.
b. Both checkpoints on DASD:
When the checkpoint data set resides on DASD, the change log communicates
checkpoint updates to the member that is reading the checkpoint. If the change log is
small, it will reside only on the first track of the checkpoint data set. You can use the
LOGSIZE= parameter on the CKPTDEF initialization statement to control the size of
the change log. Placing both checkpoints on DASD volumes reduces throughput of
data and can lock out smaller MAS members from access to the checkpoint. When the
primary checkpoint is on DASD, special care must be taken with the I/O configuration
and the tuning of the MASDEF HOLD and DORMANCY parameters, since the HOLD
time includes the time for JES2 to issue, acquire, and release the RESERVE.
Note: If both checkpoints are on DASD, the best JES2 performance is obtained by
running in DUAL mode. However, if the checkpoint is going to be placed in the CF,
JES2 must operate in DUPLEX mode. In z/OS 1.2 and later releases, you can
switch back and forth between modes using the $T CKPTDEF,MODE=xxxxxx
command.
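As a sketch only (the structure and data set names are placeholders, not definitions from this book), a CKPTDEF definition matching placement option a, with the primary checkpoint in a CF structure and the duplex copy on DASD, might look like:
CKPTDEF CKPT1=(STRNAME=JES2CKPT_1,INUSE=YES),
        CKPT2=(DSN=SYS1.JES2.CKPT2,VOLSER=JESPK1,INUSE=YES),
        NEWCKPT1=(STRNAME=JES2CKPT_2),
        NEWCKPT2=(DSN=SYS1.JES2.NEWCKPT2,VOLSER=JESPK2),
        MODE=DUPLEX,DUPLEX=ON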
8. The $ACTIVATE command is used to activate new functions at the current release of
JES2. The $ACTIVATE command is JESplex-wide and can make JES2 operate in one of
two modes:
a. Full function mode (Z2) or
b. Compatibility mode (R4)
Z2 mode was introduced in JES2 z/OS V1R2 and provides constraint relief by increasing
the number of JQEs, JOEs, BERTs, and TG space that you can define in the system. This
activation level supports the ability of JES2 to process increases in the following:
a. Job numbers to 999,999
b. JQEs to a maximum of 200,000
c. JOEs to a maximum of 500,000
d. BERTs to a maximum of 500,000
e. TGSPACE to a maximum of 16,581,181 bytes
All shared data areas (spool and checkpoint) used by JES2 are compatible with previous
releases of JES2. However, the data structures in R4 Mode cannot handle the increased
limits. Many binary job number fields are only 2 bytes long and chaining fields use 3 byte
offsets which cannot address the new limits. Z2 Mode binary job number fields are 4 bytes
long and chaining is accomplished using 3 byte indexes rather than 3 byte offsets. These
changes are incompatible with any exit routine or other unique code that examines
checkpointed control blocks or processes job numbers.
If the target JESplex is running in Z2 mode, you may need to:
a. Update routines in the incoming system that examine JES2 control blocks such as the
JQE, where fields are mode sensitive.
b. Update routines that process job numbers, since they can now have six digits.
c. Contact your vendors for any products based on JES2, running only in the incoming
system, to ensure that they are ready to run correctly in Z2 mode.
Important: All members in a MAS configuration in Z2 mode must be on JES2 level
z/OS 1.2 or higher. If the target sysplex MAS is already in Z2 mode, but the incoming
system is at a lower level than z/OS 1.2, you can revert back to R4 mode using the
$ACTIVATE command on the target sysplex MAS. If the target sysplex MAS is still in
R4 mode and the incoming system is at a lower level than z/OS 1.2, you should
postpone moving to Z2 mode until the incoming system can be upgraded to a
Z2-supporting level.
9. JES2 exits are used to satisfy installation-specific needs, such as enforcing job card
standardization. JES2 exits allow you to modify JES2 processing without directly affecting
JES2 code.
For operational consistency, it is highly recommended that the exits and the load modules
be the same in all members of the JESplex, since a job can run in any member of the
JESplex. If they are not the same, unpredictable results can occur.
To display all enabled exits and the routines associated with them, you can use the JES2
command:
$D EXIT(*),STATUS=ENABLED,ROUTINES
10.JES2 devices like readers, printers, punches and Remote Job Entry (RJE) workstations
may be attachable to any or all members in the JESplex.
Local devices are attached to the MVS system and are used by JES2 for reading jobs and
writing output. Local devices include card readers, card punches, internal readers, and
printers. You identify local devices JES2 uses with the RDR(nn), PRT(nn), PUN(nnnn),
and LINE(nnnn) initialization statements.
Review the statements defining local devices on the incoming system to be sure they are
unique throughout the target JESplex. You can define the local devices in the same
manner to each member in the target MAS. This allows all devices attached to the
incoming system to be attached to other members (with appropriate manual switching)
when the incoming system is not operational. Similarly, the LINE(nnnn), PRT(nnnn),
PUN(nn), and RDR(nn) initialization statements should be set so that a physical device
has the same JES2 device number no matter which member it is attached to. For example,
suppose there is a 3211 local printer attached to the incoming system. Before merging,
the target MAS has 11 local printers defined. The 3211 printer on the incoming system
could be defined as PRT(12) UNIT=1102,START=YES on the incoming system, and as
PRT(12) UNIT=1102,START=NO on the other members of the MAS.
JES2 initialization will detect devices that are not online and place them in a drained state.
Later, the device can be activated by entering the $P device and VARY device OFFLINE
commands on the member to which it is attached, performing hardware switching, then
entering the VARY device ONLINE followed by $S device commands on the new member.
The $S command will fail if no hardware path exists.
If you have defined your own UCSs and FCBs in the SYS1.IMAGELIB data set of the
incoming system, make sure that those definitions will be available to the systems in the
target sysplex. As part of this, you must check whether any UCSs or FCBs with duplicate
names are actually identical definitions. If they are not, you must address this, or else you
will only be able to process the output from the incoming system on devices attached to
that system.
Review the RJE workstations in the incoming system: be sure that each workstation has a
unique subscript number in the target JESplex. For example, if the incoming system has
defined RMT(13), RMT(13) should not be defined on any other member of the target
JESplex. RJE workstations can sign on to any member in the MAS, but a given remote
number can only be logged/signed on to one member of the MAS at a time.
11.In a MAS configuration, jobs enter the common queue from any input source (local or
remote) attached to any member in the configuration. Normally (unless special actions are
taken), jobs are eligible to execute in any member in the configuration. Started tasks and
TSO/E users are an exception: they execute only in the member in which they are entered.
As the members logically share common JES2 job input and output queues, the workload
can be balanced among members by allowing jobs to execute on whatever member has
an idle initiator with the correct setup and required output devices. There are two different
modes of initiators:
a. Those owned by JES2 and controlled by JES2 commands.
b. Those owned and managed by WLM.
The MODE parameter on the JOBCLASS statement determines whether jobs in that class
will be selected by a JES2 or WLM initiator, and applies to all systems in the MAS.
Initiators from each JES2 member in the MAS select work from the common job queue
independently of other initiators on other members of the JESplex.
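As a purely hypothetical illustration (the job classes and the DB2PROD scheduling environment name are ours, not a recommendation), JOBCLASS statements mixing the two initiator modes might look like:
JOBCLASS(A) MODE=WLM,SCHENV=DB2PROD    /* WLM-managed initiators  */
JOBCLASS(T) MODE=JES                   /* JES2-managed initiators */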
Review the JOBCLASS initialization statements:
In the incoming system, are there WLM-Managed initiators? If so, make sure they are
defined for the same classes as the WLM-Managed initiators (if any) in the target
sysplex. If a given job class is managed by WLM in one environment and by JES2 in
the other, then you need to decide how you are going to proceed. The JOBCLASS
definitions are MAS-wide, so you cannot have a class managed by WLM on one
system and managed by JES2 on others.
In addition to considerations about WLM- and JES-managed initiators, you also need to
consider:
Which job classes will the JES-managed initiators on the incoming system run?
You need to review the classes that those initiators handle prior to the merge, and the
jobs that you want this system to run after the merge. It is likely that, unless you take
some specific action to prevent it, the initiators on the incoming system will start
running jobs from the other systems after the merge. If this is true, you need to make
sure that those jobs can execute successfully on the incoming system: for example,
does JES2 on the incoming system contain all the required user Proclibs, does the
incoming system have access to all the required devices (both DASD and tape), will
any required database managers be available, and so on.
Where will the jobs submitted from the incoming system run?
If the incoming system is going to run jobs from other systems in the MAS, you need to
make sure that all the jobs that previously only executed on the incoming system will
continue to be processed in a timely manner by one of the systems in the MAS, and
that all candidate systems have access to all required devices, Proclibs, database
managers, and so on.
12.Is the target JESplex using standards for job and output classes? If so, is the incoming
system using the same standards? For example, do jobs that require a manual tape drive
use job class T on the incoming system, but class M on the existing systems in the target
sysplex? If the standards are not the same, you need to decide how to proceed. If the
standards are widely accepted and used (which is not always the case!), it can be a
sizable task to change them. Products like ThruPut Manager from MVS Solutions can help
by providing a high level way of defining rules for what jobs should be allowed to run where
within the MAS. More information about ThruPut Manager can be found on the MVS
Solutions Web site at:
http://www.mvssol.com/
13.As soon as the incoming system joins the MAS, any job can potentially run in any member
of the MAS. In a sysplex environment, there can be resources that are only accessible to a
subset of the systems. In a multi-member MAS environment, jobs using these resources
must execute on a system that has access to the required resources. There are a number
of ways of directing jobs to a particular system or systems:
a. You can use job classes to associate a job with certain resources and control which
classes the initiators on each system can select; see the discussion in item 12 on
page 106. Using this mechanism, you can potentially have a number of systems that
are candidates to run a given job.
b. Batch jobs can specify which members in the MAS can process them by explicitly
coding the member names on the SYSAFF parameter of the /*JOBPARM statement in
their JCL. If you use this mechanism, the list of candidate systems that a job can run on
is hardcoded in the job's JCL, which is not really an ideal situation.
c. A job may be assigned a scheduling environment to ensure it executes on the
members in the MAS which have the required resources or specific environmental
states. This mechanism provides more flexibility and granularity than either of the
previous mechanisms. However, like option b, it requires JCL changes or the
implementation of a JES exit to assign the appropriate scheduling environment to the
job.
d. You can use a non-IBM product such as ThruPut Manager. This provides the most
flexibility and power; however, unlike the other three options, there is a financial cost
associated with this option.
The scheduling environment can be specified with the SCHENV= keyword parameter on
the JOB statement. It can also be assigned via the JES2 JOBCLASS initialization
statement. This means that every job that is submitted in a particular job class from any
system in the MAS will be assigned the same scheduling environment.
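For example (the scheduling environment name DB2AVAIL is hypothetical), a job that must run where a particular DB2 subsystem is available could code:
//DBLOAD   JOB (ACCT),'NEEDS DB2',CLASS=A,MSGCLASS=X,
//         SCHENV=DB2AVAIL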
Scheduling environments must be defined to WLM. The scheduling environment is
validity-checked by the converter and, if not defined to WLM, the job will fail with a JCL
error.
JES2 detects the availability of scheduling environments on each member and allows an
initiator to select jobs only if the specified scheduling environment is available. This is true
for both JES2-managed initiators and WLM-Managed initiators.
If you decide to use this mechanism, or are already doing so, you must review the
scheduling environments defined in the WLM policies in the target Parallel Sysplex:
You may need to add new resources and scheduling environments to fit the incoming
system needs.
If the incoming system was already using scheduling environments, look for different
scheduling environments using the same name, and so on.
If you have to change the name of a scheduling environment, remember that any JCL that
used the old scheduling environment name will need to be updated. For more information
about merging WLM policies, see Chapter 4, WLM considerations on page 51.
This may present a good opportunity for installations using batch-scheduling products to
replace the resource definition in those products with the WLM resource definition and
scheduling environments.
You can display all the scheduling environments defined to WLM, and on which members
in the MAS each is available, by using the SDSF SE option.
14.The parameters that must be the same in the JES2 initialization across all members in the
MAS are as follows:
xxxDEF
MASDEF
NJEDEF
CONDEF
APPL
BADTRACK
CONNECT
DESTID(xxxx)
ESTBYTE
ESTIME
ESTLNCT
ESTPAGE
ESTPUN
EXIT
INTRDR
JOBCLASS
JOBPRTY
LOADMOD
NETACCT
NODE(xxxx)
OUTCLASS
15.There are many products that interact with JES2, such as JES/328X, INFOPRINT,
JESPRINT, SDSF, CA-SPOOL (Computer Associates product), CONTROL-M (BMC
product) and others. Prior to the merge, review their installation, customization, and
maintenance procedures to ensure they will continue to function when the incoming
system becomes part of the MAS.
16.Jobs can be converted in any member. In order to ensure successful conversion, all
member start-up procedures should use the same data sets and concatenation order.
Also, the Proclib initialization statement should be consistent across all members. Proclibs
defined using the JES2 Proclib initialization statement can be displayed, updated or
deleted using operator commands without restarting JES2.
Example 7-1 Sample CLIST to run an EDIT macro against every member of a PDS
PROC 0 DSN()
/*---------------------------------------------------*/
/* CHANGE MYCHANGE TO THE MACRO MEMBER NAME           */
/* FOREGROUND CALL TO CLIST:                          */
/*   CLIST_MEMBER_NAME DSN(DATA_SET_NAME_TO_SEARCH)   */
/*---------------------------------------------------*/
SET RC = 0
IF &DSN = &STR() THEN +
DO
WRITE '*** INCORRECT CALL TO CLIST CLIST_NAME ***'
WRITE '*** CORRECT CALL IS: CLIST_NAME DSN(DSN_TO_PROCESS) ***'
EXIT CODE(50)
END
ISPEXEC LMINIT DATAID(TEMPDDN) DATASET('&DSN') ENQ(SHR)
SET LMRC = &LASTCC
IF &LMRC NE 0 THEN +
DO
WRITE '*** LMINIT ERROR RC=&LMRC ***'
EXIT CODE(&LMRC)
END
ISPEXEC LMOPEN DATAID(&TEMPDDN) OPTION(INPUT)
SET LMRC = &LASTCC
IF &LMRC NE 0 THEN +
DO
WRITE '*** LMOPEN ERROR RC=&LMRC ***'
ISPEXEC LMFREE DATAID(&TEMPDDN)
EXIT CODE(&LMRC)
END
SET MEMBER =
SET LMRC = 0
/* LOOP TO ALL MEMBERS OF THE PDS */
DO WHILE &LMRC = 0
ISPEXEC LMMLIST DATAID(&TEMPDDN) OPTION(LIST) MEMBER(MEMBER)
SET LMRC = &LASTCC
IF &LMRC = 0 THEN +
DO
ISPEXEC EDIT DATAID(&TEMPDDN) MEMBER(&MEMBER) MACRO(MYCHANGE)
IF &LASTCC > 4 THEN +
DO
WRITE 'ERROR PROCESSING MEMBER &MEMBER'
SET RC = 12
SET LMRC=8
END
END
END
/*--------------------------------------------------------------*/
/* FREE THE MEMBER LIST, CLOSE AND FREE THE DATAID FOR THE PDS. */
/*--------------------------------------------------------------*/
ISPEXEC LMMLIST DATAID(&TEMPDDN) OPTION(FREE)
ISPEXEC LMCLOSE DATAID(&TEMPDDN)
ISPEXEC LMFREE DATAID(&TEMPDDN)
EXIT CODE(&RC)
Example 7-2 contains a sample EDIT macro to help you make mass changes.
Example 7-2 Sample MYCHANGE EDIT macro to find and change strings
ISREDIT MACRO
/* CHANGE OLD_STRING TO THE STRING YOU ARE SEARCHING FOR         */
/*        NEW_STRING TO THE STRING THAT WILL REPLACE OLD_STRING  */
ISREDIT X ALL
SET LCC = 0
ISREDIT FIND 'OLD_STRING' ALL WORD
IF &LASTCC = 0 THEN +
DO
ISREDIT (MEMNAME) = MEMBER
ISREDIT CHANGE 'OLD_STRING' 'NEW_STRING' ALL NX
ISREDIT SAVE
SET LCC = &LASTCC
IF &LCC = 0 THEN +
WRITE *** &MEMNAME CHANGED ***
ELSE DO
IF &LCC = 12 THEN +
WRITE &MEMNAME NOT SAVED - NOT ENOUGH PDS OR DIRECTORY SPACE
ELSE +
WRITE &MEMNAME NOT SAVED - RC=&LCC
ISREDIT CANCEL
END
END
ISREDIT END
EXIT CODE(&LCC)
You can execute the sample CLIST through TSO/ISPF Option 6, or you can use the sample
JCL contained in Example 7-3 to invoke the CLIST from a batch job. If you wish to modify the
sample macro, refer to z/OS V1R2.0-V1R3.0 ISPF Edit and EDIT Macros, SC34-4820.
Example 7-3 Sample JCL to invoke CLIST from batch
//SEARCH EXEC PGM=IKJEFT1A
//SYSPROC DD DISP=SHR,DSN=PDS_CONTAINING_CLIST_AND_MACRO
//ISPPLIB DD DISP=SHR,DSN=ISP.SISPPENU
//ISPMLIB DD DISP=SHR,DSN=ISP.SISPMENU
//ISPSLIB DD DISP=SHR,DSN=ISP.SISPSENU
//ISPTLIB DD DISP=SHR,DSN=ISP.SISPTENU
//ISPLOG   DD SYSOUT=*,LRECL=125,RECFM=VBA
//ISPPROF  DD DISP=(,DELETE),DSN=&&ISPPROF,LRECL=80,DSORG=PO,
//            SPACE=(TRK,(5,5,2)),RECFM=FB,BLKSIZE=0
//SYSTSIN DD *
ISPSTART CMD(CLIST DSN(DSN_TO_SEARCH)) BDISPMAX(99999)
/*
To determine the size of the JES2 checkpoint structure, you should use the CFSizer wizard
available on the Parallel Sysplex home page at:
http://www.s390.ibm.com/cfsizer/jes.html
You can find a good description of JES2 processing in a sysplex configuration in the IBM
Redbook OS/390 Parallel Sysplex Configuration Volume 2: Cookbook, SG24-5638.
For a valuable source of information about initialization and tuning refer to JES2 Initialization
and Tuning Guide, SA22-7532.
For specific information about processing in a MAS, refer to Washington System Center
Technical Bulletin JES2 Multi-Access Spool in a Sysplex Environment, GG66-3263.
For a good explanation of the JES2 enhancements relating to I/O to spool and spool
partitioning, refer to the IBM Redbook OS/390 Version 2 Release 10 Implementation,
SG24-5976.
After merging the incoming system into a JESplex, you should monitor the performance of
JES2. Realistically, the best measurement tool for your JES2 checkpoint is the lack of
symptoms of JES2 delays by your applications! However, here are some tools you can use for
checkpoint analysis:
SDSF: The MAS option displays the members' status, hold and dormancy times, and actual
times. This is also a convenient panel for adjusting the times and seeing immediate results.
Be aware, however, that these times are only instantaneous, and do not show averages.
RMF III: See the Subsystem Display, then JES Delays. Excessive delays here are often
due to checkpoint delays.
RMF CF Structure Activity Report: JES2 writes many blocks of data at once, so you will
often see No Subchannel Available in these reports. This is normal and should not alarm
you. The service times for Synch and Asynch requests should be within the published
guidelines for your environment.
$D PERFDATA(QSUSE): This displays the count and average wait time for $QSUSE
requests (access to the checkpoint). You will find the description and instructions about
how to use the $PERFDATA JES2 command in:
http://www.ibm.com/support/techdocs/atsmastr.nsf/PubAllNum/W9744B
$TRACE(17) data: Turn on $TRACE(17) records for ten to fifteen minutes during your
most active time of the day (from a JES2 perspective). Then analyze the data with the
JES2T17A sample program provided with JES2. Here are sample JES2 operator
commands to trace to class x:
$S TRACE(17)
$T TRACEDEF,TABLES=20
$T TRACEDEF,ACTIVE=Y,LOG=(START=Y,CLASS=x,SIZE=92000)
When you have collected sufficient data, spin off the $TRCLOG, and turn off tracing, then
use the IBM external writer (XWTR) or SDSF to write the trace records to disk:
$T TRACEDEF,ACTIVE=NO,SPIN
$P TRACE(17)
S XWTR.X,UNIT=SYSDA,DSNAME=JES2.TROUT,SPACE=(CYL,(1,3))
F X,CLASS=x
For a description about JES2 trace records available, refer to z/OS V1R3.0 JES2
Diagnosis, GA22-7531.
Chapter 8.
Shared HFS considerations
Access to the internal file and directory structure of a shared HFS is controlled by UNIX
security, which operates at the owner, group, and other level. So, systems sharing HFSs
will see the same owner, group, and other flags across the HFSplex. Each file or
directory in UNIX has an owner and a group entry associated with it. The owner
and group are numbers (UID and GID) associated with RACF userids or groups
(defined in the OMVS segment of each). If you do not share the RACF databases within
the target sysplex, but you do share the system HFS data sets, then the userids and
groups need to have the same UIDs and GIDs, respectively, across the RACFplex.
For example, if userid Fred has a UID of 1234 on system TEST and system DEVL
shares the HFSplex with TEST, but not RACF, then userid Fred needs to have a UID of
1234 on system DEVL too, to ensure the owner, group, other flags are honoured
correctly. If userid Jane has UID 1234 on DEVL instead, then Jane would have the same
access as Fred to files and directories in the HFSplex.
Given the difficulty that would be involved in maintaining UIDs in this manual way, we do
not recommend placing the incoming system in the same HFSplex as the target sysplex
unless the incoming system is also in the same RACFplex as the target sysplex.
Moving the incoming system into a PlatinumPlex means sharing everything, including the
system infrastructure, user catalogs, and all user volumes. This configuration allows you to
gain the benefits of sharing not just the sysplex-wide root HFS, but also full read/write
sharing of all user HFSs from any system within the sysplex. There are a number of
advantages when moving into a PlatinumPlex configuration and using sysplex HFS
sharing.
Every HFS data set that is shared read/write in the sysplex has a central owning system
that manages it on behalf of other systems in the sysplex. The owning system gets read or
write requests from other systems and performs these operations on the HFS data sets. A
messaging protocol using XCF services is used to transfer data around the sysplex from
the central owner. When HFS data sets are shared across the sysplex, you will have a
single large pool of all the HFS data sets of the entire sysplex. All these HFS data sets will
be fully accessible from every system in the sysplex.
Important: It is vital to understand that once a system joins a HFSplex, all the HFSs on
that system, and all the HFSs on all the other systems in the HFSplex are accessible to
every system in the HFSplex, even if the volume containing a given HFS is not online to
some of the systems. As long as a volume is online to just one system, every system in the
HFSplex can access the HFSs on that volume.
Table 8-1 Shared HFS considerations checklist (columns: Consideration, Note, Applies to, Done; all entries apply to GoldPlex and PlatinumPlex targets)
The Type specified in Table 8-1 on page 117 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. Notes
indicated in Table 8-1 on page 117 are described below:
1. The sysplex root is an HFS data set that is used as the sysplex-wide root. This data set is
required for the sysplex HFS sharing capability provided in OS/390 2.9, and is only used if
that capability is being used. It provides an umbrella structure that encompasses all the
HFS files in the sysplex. It contains directories and symlinks to redirect HFS file references
to the correct files across the sysplex. It does not contain any files or any code. The
sysplex root has symlinks that point to the /bin, /usr, /lib, and /opt files that are specific to a
release or service level of z/OS, and the /dev, /tmp, /var, and /etc files that are specific to each
system in the sysplex. For more information refer to Chapter 18, Shared HFS in a
sysplex, in z/OS UNIX System Services, GA22-7800.
When a system mounts the sysplex root HFS, a mount point for that system is dynamically
added in that HFS. For this reason, the sysplex root HFS data set must be mounted read/write
and designated AUTOMOVE. For more information about AUTOMOVE, see z/OS UNIX
System Services, GA22-7800.
You will need to ensure sysplex HFS sharing is enabled on all systems within the HFSplex
by specifying the SYSPLEX(YES) parameter in the BPXPRMxx member of Parmlib, as in
Example 8-1. Note that this parameter is only available on systems at OS/390 2.9 and
above. Also see Example 8-4 on page 122 for more information on the BPXPRMxx
filesystem definitions.
Example 8-1 Enable sysplex HFS sharing in BPXPRMxx
...
SYSPLEX(YES)                        /* sysplex enabled */
...
Only one sysplex root is allowed for all systems participating in an HFSplex. The sysplex
root is created by running the BPXISYSR sample job in SYS1.SAMPLIB. After the job
runs, the sysplex root file system structure would look like the one in Example 8-2 on
page 119.
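As an abbreviated sketch (the actual structure is shown in Example 8-2), the sysplex root created by BPXISYSR contains only directories and symbolic links along these lines:
/                        (sysplex root)
  bin -> $VERSION/bin
  lib -> $VERSION/lib
  opt -> $VERSION/opt
  usr -> $VERSION/usr
  dev -> $SYSNAME/dev
  etc -> $SYSNAME/etc
  tmp -> $SYSNAME/tmp
  var -> $SYSNAME/var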
You will notice the $VERSION and $SYSNAME symbolic links. A directory with the value
specified on the VERSION parameter of BPXPRMxx will be dynamically created at system
initialization under the sysplex root and will be used as a mount point for the version HFS.
If the content of the symbolic link begins with $SYSNAME and SYSPLEX(YES) is
specified in BPXPRMxx, then $SYSNAME is replaced with the system name when the
symbolic link is resolved, which is used for system-specific HFS data sets. The presence
of symbolic links is transparent to the user.
No files or code reside in the sysplex root data set. It consists of directories and symbolic
links only, and is a small data set; 3 cylinders is the default size.
The sysplex root provides access to all directories. Each system in a sysplex can access
directories through the symbolic links that are provided. Essentially, the sysplex root
provides redirection to the appropriate directories, and it should be kept very stable;
updates and changes to the sysplex root should be made as infrequently as possible.
2. You need to define system-specific HFS data sets for each system in the sysplex that is
using the sysplex HFS sharing capability. We recommend that you use a naming
convention to associate this data set to the system for which it is defined. The sysplex root
has symbolic links to mount points for system-specific etc, var, tmp, and dev HFS data
sets.
Note: The system-specific HFS data set should be mounted read/write. In addition, we
recommend that the name of the system-specific data set contain the system name as
one of the qualifiers. This allows you to use the &SYSNAME symbolic in BPXPRMxx.
The mount point for these data sets will be dynamically created by the system during
OMVS initialization.
You can customize the BPXISYSS JCL in SYS1.SAMPLIB library to create a
system-specific HFS. You need to run this job for each system in the sysplex that shares
the sysplex root HFS. These data sets may be SMS- or non-SMS managed.
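As a sketch only (the data set names and high-level qualifiers are placeholders, not the definitions used in this book), shared BPXPRMxx MOUNT statements using the &SYSNAME symbolic for the system-specific HFS data sets might look like:
MOUNT FILESYSTEM('OMVS.&SYSNAME..SYSTEM.HFS')
      MOUNTPOINT('/&SYSNAME.')
      TYPE(HFS) MODE(RDWR) NOAUTOMOVE
MOUNT FILESYSTEM('OMVS.&SYSNAME..ETC')
      MOUNTPOINT('/&SYSNAME./etc')
      TYPE(HFS) MODE(RDWR) NOAUTOMOVE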
3. To set up an HFSplex, an OMVS CDS will need to be created. The CDS contains the
sysplex-wide mount table and information about all participating systems, and all mounted
file systems in the sysplex. To allocate and format a CDS, customize and submit the
BPXISCDS sample job in SYS1.SAMPLIB. The job will create two CDSs: one is the
primary and the other is a backup that is referred to as the alternate. In BPXISCDS, you
also need to specify the number of mount records that are supported by the CDS. For
more information refer to Chapter 18, Shared HFS in a sysplex, in z/OS UNIX System
Services, GA22-7800. The CDS is used as follows:
a. The first system that enters the sysplex with SYSPLEX(YES) initializes the OMVS
CDS. The CDS controls shared HFS mounts and will eventually contain information
about all systems participating in sysplex HFS sharing in this sysplex. This system
processes its BPXPRMxx Parmlib member, including all its ROOT and MOUNT
statement information. It is also the designated owner of the byte range lock manager
for the participating group. The MOUNT and ROOT information are logged in the CDS
so that other systems that eventually join the participating group can read data about
systems that are already active in the HFSplex.
b. Subsequent systems joining the participating group will read the information in the
CDS and will perform all mounts. Any new BPXPRMxx mounts are processed and
logged in the CDS. Systems already in the participating group will then process the
new mounts added to the CDS.
Once the OMVS CDS data set has been defined, update the COUPLExx Parmlib member
to define the primary and alternate OMVS CDS to XCF.
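For illustration, the COUPLExx statements might look like the following; the CDS data set
names shown are placeholders only and should match the names you specified in the
BPXISCDS job:
DATA TYPE(BPXMCDS)
     PCOUPLE(SYS1.OMVS.CDS01)
     ACOUPLE(SYS1.OMVS.CDS02)
The primary and alternate CDS can also be defined to XCF without an IPL by using the
SETXCF COUPLE,TYPE=BPXMCDS command with the PCOUPLE and ACOUPLE keywords.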
4. The version HFS is the IBM-supplied root HFS data set containing files and executables
for z/OS elements. To avoid confusion with the sysplex root HFS data set, the IBM
supplied root HFS data set is called the version HFS. In a sysplex environment, there
could be a number of version HFS data sets, each denoting a different release or service
level of z/OS. We recommend that you mount the version HFS in read-only mode. The
version HFS should never be updated by anything except SMP/E, and should not contain
anything other than what is delivered by IBM. If your root HFS is currently mounted
read/write, refer to the section entitled Post-Installation Actions for Mounting the Root
HFS in Read-Only Mode in z/OS UNIX System Services, GA22-7800.
A GoldPlex can be set up to share the version HFS data sets on the shared sysres without
needing to set up sysplex-wide HFS sharing as long as the version HFS is only mounted
read-only on all systems.
A GoldPlex may not include the sharing of the Master Catalog and therefore user catalogs
may or may not be shared. So, you will need to consider how you want to manage the HFS
data set catalog entries. If the HFS data sets are cataloged in the Master Catalog you will
need to define a process to keep the Master Catalogs in sync across the sysplex. An
alternative is to have the HFS data sets in a user catalog that is shared by all systems
within the sysplex.
5. If you are going to use the shared sysplex root HFS structure, you will need to adjust the
HFS file structure on the incoming system. Adjustments would include preparation of the
system-specific and version HFS data sets of the incoming system to reflect the naming
standards of the target sysplex. This is a requirement if a shared BPXPRMxx will be used
in the target sysplex.
We recommend that the name of the system-specific data sets contain the system name
as one of the qualifiers. This allows you to use the &SYSNAME symbolic in BPXPRMxx.
We do not recommend using &SYSNAME as one of the qualifiers for the version HFS data
set name. Appropriate names may be the name of the target zone, &SYSR1, or any other
qualifier meaningful to the system programmer. For more information about version HFS
naming standards, see 22.5.2, HFS considerations on page 383.
The BPXPRMxx Parmlib member is used to specify the various parameters to control the
UNIX file system. This member also contains information to control the OMVS setup and
processing. We recommend that you use two different members: one containing the file
system information, and the other containing OMVS information. You can point the OMVS
parameter of IEASYSxx Parmlib member at both BPXPRMxx members. You will find that
migrating from one release to another is easier if you use this method.
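For example, if the OMVS setup values are in BPXPRM00 and the file system definitions are
in BPXPRMFS (as in Example 8-3 and Example 8-4), the IEASYSxx entry would look like this:
OMVS=(00,FS)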
You can use a common BPXPRMxx member for all the systems in the sysplex. There are
four parameters in BPXPRMxx that are relevant to HFS sharing in a sysplex: SYSPLEX,
VERSION, ROOT, and MOUNT. The following are the details of each parameter:
a. SYSPLEX
You should specify SYSPLEX(YES) to indicate that you wish to use sysplex HFS
sharing. This parameter tells the system at the time of IPL to take part in HFS sharing
across the sysplex.
b. VERSION
This statement dynamically creates a mount point at the time of IPL to mount the
version HFS file. The version HFS is the IBM-supplied root HFS data set. You should
specify a VERSION parameter which identifies your z/OS system release level. It is a
good idea to use the system residence volser in the version HFS name. Different z/OS
systems in the sysplex can specify different VERSION parameters in their BPXPRMxx
member to allow different releases or service levels of the root HFS. We recommend
that you mount the version HFS in read-only mode. Just as you should never update a
running sysres, you should never update a running version HFS; in addition, read-only
sharing provides better performance than read/write sysplex sharing. For specific
actions you have to take before you can mount the version HFS in read-only mode, see
the section entitled Post-Installation Actions for Mounting the Root HFS in Read-Only
Mode in z/OS UNIX System Services, GA22-7800.
Note: We do not recommend using &SYSNAME as one of the qualifiers for the version
HFS data set name. Appropriate names may be the name of the target zone, &SYSR1,
or any other qualifier meaningful to the system programmer. For more information about
version HFS naming standards, see 22.5.2, HFS considerations on page 383.
c. ROOT
Specify the name of the sysplex root data set. This data set must be mounted
read/write.
d. MOUNT
You should specify the mount information for all the HFS files required for your system.
If you used &SYSNAME as one of the qualifiers when you defined your system-specific
HFS data sets, then you can create a single BPXPRMxx member for all systems in
your sysplex.
You can specify two new parameters for the MOUNT statements. The SYSNAME
parameter specifies the name of the z/OS system in the sysplex that should own the
HFS data set being mounted. The AUTOMOVE parameter determines if another
system in the sysplex can take ownership of the HFS data set if the owning system
goes down.
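For illustration, a MOUNT statement that makes system SYSA the owner of an application
HFS, but allows another system to take over ownership if SYSA leaves the sysplex, might
look like the following (the data set name, mount point, and system name are placeholders):
MOUNT FILESYSTEM('OMVS.APPL1.HFS')
      MOUNTPOINT('/appl1')
      TYPE(HFS)
      MODE(RDWR)
      SYSNAME(SYSA)
      AUTOMOVE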
We recommend that you use a common BPXPRMxx member for all systems in the
target sysplex. Example 8-3 and Example 8-4 on page 122 contain examples of the
two BPXPRMxx members.
If you wish to have a particular system issue the mount for some system, subsystem,
or application-related HFS data sets, you can use the SYSNAME parameter on the
MOUNT statement in BPXPRMxx, or you can set up a system-specific BPXPRMxx
member that only has those HFS data set mount parameters specified. This
BPXPRMxx member suffix would need to be added to the OMVS=(xx,yy) parameter of
the appropriate IEASYSxx Parmlib member. Note that regardless of where the file
system is mounted, if it is mounted read/write (and therefore sharable using Sysplex
HFS sharing), it will be accessible from all systems in the HFSplex.
Example 8-3 BPXPRM00 parameters
MAXPROCSYS(1000)
MAXPROCUSER(50)
MAXUIDS(50)
MAXFILEPROC(200)
MAXPTYS(256)
CTRACE(CTIBPX00)
FILESYSTYPE TYPE(UDS) ENTRYPOINT(BPXTUINT)
NETWORK DOMAINNAME(AF_UNIX)
DOMAINNUMBER(1)
MAXSOCKETS(10000)
TYPE(UDS)
FILESYSTYPE TYPE(INET) ENTRYPOINT(EZBPFINI)
NETWORK DOMAINNAME(AF_INET)
DOMAINNUMBER(2)
MAXSOCKETS(60000)
TYPE(INET)
MAXTHREADTASKS(50)
MAXTHREADS(200)
IPCMSGNIDS(500)
IPCMSGQBYTES(262144)
IPCMSGQMNUM(10000)
IPCSHMNIDS(500)
IPCSHMSPAGES(262144)
IPCSHMMPAGES(256)
IPCSHMNSEGS(10)
IPCSEMNIDS(500)
IPCSEMNSEMS(25)
IPCSEMNOPS(25)
MAXMMAPAREA(4096)
MAXASSIZE(41943040)
MAXCPUTIME(2147483647)
MAXSHAREPAGES(131072)
FORKCOPY(COW)
SUPERUSER(BPXROOT)
TTYGROUP(TTY)
STARTUP_PROC(OMVS)
Example 8-4 BPXPRMFS - filesystem definitions
/*************************************************************/
/* This member, as specified in IEASYS00, will cause         */
/* OpenEdition to come up with all filesystems mounted.      */
/*************************************************************/
FILESYSTYPE TYPE(HFS)                   /* Filesystem type HFS          */
      ENTRYPOINT(GFUAINIT)              /* Entrypoint for defining HFS  */
      PARM(' ')                         /* Null PARM for physical file  */
                                        /* system                       */
VERSION('&SYSR1.')                      /* version                      */
SYSPLEX(YES)                            /* sysplex enabled              */
FILESYSTYPE TYPE(TFS)
      ENTRYPOINT(BPXTFS)
/*                                                                      */
MOUNT FILESYSTEM('OMVS.&SYSR1..ROOT')   /* Version HFS                  */
      MOUNTPOINT('/$VERSION')
      TYPE(HFS)                         /* Filesystem type HFS          */
      MODE(READ)                        /* Mounted read-only            */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('OMVS.&SYSNAME..ETC')  /* HFS for /etc directory       */
      MOUNTPOINT('/&SYSNAME./etc')
      TYPE(HFS)                         /* Filesystem type HFS          */
      MODE(RDWR)                        /* Mounted for read/write       */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('OMVS.&SYSNAME..VAR')  /* HFS for /var directory       */
      MOUNTPOINT('/&SYSNAME./var')
      TYPE(HFS)                         /* Filesystem type HFS          */
      MODE(RDWR)                        /* Mounted for read/write       */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('LSK130.&SYSNAME..MSYS.HFS')      /* HFS for msys for setup */
      MOUNTPOINT('/lsk130')
      TYPE(HFS)                         /* Filesystem type HFS          */
      MODE(RDWR)                        /* Mounted for read/write       */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('LSK130.&SYSNAME..MSYS.LOG.HFS')  /* HFS for msys for setup log file */
      MOUNTPOINT('/lsk130/log')
      TYPE(HFS)                         /* Filesystem type HFS          */
      MODE(RDWR)                        /* Mounted for read/write       */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('/&SYSNAME./TMP')      /* TFS for /tmp directory       */
      MOUNTPOINT('/&SYSNAME./tmp')
      TYPE(TFS)                         /* Filesystem type TFS          */
      PARM('-s 10')                     /* 10 meg file system           */
      NOAUTOMOVE
/*                                                                      */
MOUNT FILESYSTEM('/&SYSNAME./DEV')      /* TFS for /dev directory       */
      MOUNTPOINT('/&SYSNAME./dev')
      TYPE(TFS)                         /* Filesystem type TFS          */
      PARM('-s 5')                      /* 5 meg file system            */
      NOAUTOMOVE
In z/OS 1.3 and later, there is a new MOUNT parameter: the UNMOUNT parameter
specifies that the HFS is to be automatically unmounted when the owning system leaves
the sysplex.
In Figure 8-1 on page 124, if the content of the symbolic link begins with $VERSION or
$SYSNAME, the symbolic link will resolve in the following manner:
If you have specified SYSPLEX(YES) and the symbolic link for /dev has the contents
$SYSNAME/dev, the symbolic link resolves to /PRD1/dev on system PRD1 and
/PRD2/dev on system PRD2.
If you have specified SYSPLEX(YES) and the content of the symbolic link begins with
$VERSION, $VERSION resolves to the value specified on the VERSION parameter.
Thus, if VERSION in BPXPRMxx is set to PLXSY1 (in our example BPXPRMxx we use
&SYSR1, which substitutes the sysres volser, in this case PLXSY1), then $VERSION
resolves to /PLXSY1. For example, a symbolic link for /bin, which has the contents
$VERSION/bin, resolves to /PLXSY1/bin on a system whose VERSION value is set to
PLXSY1.
6. UNIX System Services security (HFS)
UNIX System Services (USS) is tightly integrated with the z/OS Security Server (RACF).
The system programmer or data administrator must know the concepts of UNIX and USS
security to manage the individual hierarchical file system (HFS) files (not data sets). It is
especially important to understand the concept of a superuser. This topic provides some
background information about UNIX and USS security. It also provides information about
HFS security.
a. UNIX file security overview
On UNIX systems, each user needs an account that is made up of a user name and a
password. UNIX uses the /etc/passwd file to keep track of user names and encrypted
passwords for every user on the system (this file is not used in z/OS). Internally, the
UNIX operating system uses a numeric ID to refer to a user, so in addition to user
name and password, the /etc/passwd file also contains a numeric ID called the user
identifier or UID. UNIX operating systems differ, but generally these UIDs are unsigned
16-bit numbers, ranging from zero to 65,535. z/OS UNIX System Services supports
UID numbers up to 2,147,483,647.
b. UNIX users and superuser
There is no convention for assigning UID numbers to users other than 0 (zero), which
has special significance in that it indicates a superuser who has special powers and
privileges under UNIX. So, care must be exercised in deciding who can gain access to
a UID of 0 (zero). If the z/OS Security Server (RACF) is installed, it offers some
features that can be activated to gain more control over superusers.
The z/OS Security Server also has capabilities to allow superuser-type access for
some specific services for specific users. The UID is the actual number that the
operating system uses to identify the user. User names are provided for convenience,
as an easy way for us to remember our sign-on to the UNIX system. If two users are
assigned the same UID, UNIX views them as the same user, even if they have different
user names and passwords. Two users with the same UID can freely read and write
each other's files and can kill each other's processes.
Note: If the users of the incoming system and target sysplex have the same UID, they
will have access to the same resources and they will be able to cancel the same
processes. During the merge of the systems, we recommend that you analyze users
and UIDs according to the security policy of the target sysplex.
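For example, assuming RACF is the security product in both environments, you can
display the UID assigned to a user with the LISTUSER command, and you can control who
is allowed to switch to superuser mode by defining a BPX.SUPERUSER profile in the
FACILITY class (the user ID SMITH and group SYSPROG are placeholders):
LU SMITH OMVS NORACF
RDEFINE FACILITY BPX.SUPERUSER UACC(NONE)
PERMIT BPX.SUPERUSER CLASS(FACILITY) ID(SYSPROG) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH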
c. UNIX groups
UNIX systems also use the concept of groups, where you group together many users
who need to access a set of common files, directories, or devices. Like user names
and UIDs, groups have both group names and group identification numbers (GIDs).
Each user belongs to a primary group that is stored in the /etc/passwd file on UNIX
systems (this file is not used in z/OS USS). The z/OS UNIX System Services supports
GID numbers up to 2,147,483,647.
d. Permission bits
All UNIX files have three types of permissions:
Read (displayed as r)
Write (displayed as w)
Execute (displayed as x)
For every UNIX file, read, write, and execute (rwx) permissions are maintained for three
different types of file user: the owning user (OWNER), the owning group (GROUP), and
everyone else (OTHER).
The permission bits are stored as three octal numbers (3 bits for each type of file user),
totalling nine bits. When displayed by commands such as ls -l, a ten-character field is
shown, consisting of nine characters for permissions, preceded by one for the file type.
Permission bit structure: The structure of the ten-character field is tfffgggooo,
where:
t The type of file or directory. Valid values are:
-  Regular file
c  Character special file
d  Directory
l  Symbolic link
p  FIFO special file
fff The OWNER permissions, as explained in Table 8-2.
ggg The GROUP permissions, as explained in Table 8-2.
ooo The OTHER (sometimes referred to as WORLD) permissions, as explained in
Table 8-2.
Table 8-2
Pos   Char   Access type         Permission for directory
1     r      read                Permission to read, but not search, the contents.
2     w      write               Permission to change, add, or delete directory entries.
3     x      execute or search   Permission to search the directory.
any   -      No access           No access.
Octal value bits: There are many places in UNIX where permission bits are also
displayed as a representation of their octal value. When shown this way, a single digit is
used to represent an rwx setting. The meaning associated with the single digit is:
0 No access (---)
1 Execute only (--x)
2 Write only (-w-)
3 Write and execute (-wx)
4 Read only (r--)
5 Read and execute (r-x)
6 Read and write (rw-)
7 Read, write, and execute (rwx)
Permission bit examples: Remembering that each file always has permission bits for
owner, group and other, it is usual to see three-digit numbers representing the
permission bits of a file. For example, some typical file permission settings are:
666 Owner(6=rw-) group(6=rw-) other(6=rw-)
700 Owner(7=rwx) group(0=---) other(0=---)
755 Owner(7=rwx) group(5=r-x) other(5=r-x)
777 Owner(7=rwx) group(7=rwx) other(7=rwx)
User settings: A user may set permission bits for any combination at any level of
access. For example, if a user wanted to have read, write, and execute access to one
of his/her own files, but not allow access to anyone else, the permission bits would be
set to 700, which the ls -l command would display as -rwx------.
z/OS 1.3 provides a new concept of Access Control Lists. For more information about
this, see z/OS Distributed File Service zSeries File System Implementation,
SG24-6580, or z/OS UNIX System Services, GA22-7800.
The NORACF keyword prevents the standard non-OMVS related information from
being displayed.
For a group, the OMVS segment contains the group identifier (GID). To display this
information, the OMVS keyword should be appended to the RACF LG TSO command.
Example 8-6 shows how you would display the OMVS segment for group OMVSGRP.
Example 8-6 Listgrp command
LG OMVSGRP OMVS NORACF
INFORMATION FOR GROUP OMVSGRP
OMVS INFORMATION
----------------
GID= 0000000001
For more information, refer to z/OS UNIX System Services, GA22-7800 and z/OS Security
Server RACF Command Language Reference, SA22-7687.
7. If you use SMS to manage your HFS data sets, you will need to review the ACS routines
on the incoming system to ensure the HFS data sets are managed appropriately, and
consistently, when merged into the target sysplex.
This would be an issue primarily in a GoldPlex configuration because of the separate
SMSplexes. In a PlatinumPlex configuration there would be a single SMSplex, so this
would not be an issue.
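As an illustration only, a fragment of a storage class ACS routine that assigns a dedicated
storage class to HFS data sets might look like the following sketch; the filter patterns and
the storage class name SCHFS are assumptions, and your own routines will differ:
PROC STORCLAS
  FILTLIST HFSDSN INCLUDE(OMVS.**,LSK130.**)
  IF &DSN = &HFSDSN THEN
    SET &STORCLAS = 'SCHFS'
END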
8. The following considerations apply when using zFS in a sysplex in shared HFS mode:
Only systems running the zFS address space can use zFS file systems. The file
system hierarchy appears different when viewed from systems with zFS-mounted file
systems than it does from those systems not running zFS. Pathname traversal through
zFS mountpoints will have different results in these cases since the zFS file system is
not mounted on those systems not running zFS.
zFS file systems owned by another system are accessible from a member of the
sysplex that is running zFS.
zFS compatibility mode file systems can be automoved and automounted. A zFS
compatibility mode file system can only be automoved to a system where zFS is
running.
You can have multi-file system aggregates in sysplex sharing mode, but they should be
mounted NOAUTOMOVE (or with UNMOUNT, if running z/OS 1.3 or later) to
automatically unmount the file systems when the owning system leaves the sysplex.
The IOEFSPRM file cannot be shared across systems in a sysplex when the file
contains:
A msg_output_dsn specification, or
A trace_dsn specification.
All systems running zFS see the zFS file systems. The file system hierarchy
appears differently when viewed from systems with zFS mounted file systems than
it does from those systems not running zFS. Pathname traversal through zFS
mountpoints will have different results in those cases, since the zFS file system is
not mounted on those systems not running zFS.
zFS compatibility mode file systems owned by that system that are allowed to be
automoved will be automoved to another system running zFS. If this function fails to
find another owner, the file system becomes unowned.
File systems which are unowned are not visible in the file system hierarchy, but can
be seen from a D OMVS,F operator command. To recover a file system that is
mounted and unowned, the file system must be unmounted.
zFS compatibility mode file systems owned by the system that are allowed to be
automoved will be automoved to another system running zFS. If this function fails to
find another owner, the file system and all file systems mounted under it are
unmounted in the sysplex.
zFS file systems that are defined as NOAUTOMOVE and all file systems mounted
under them are unmounted in the sysplex.
When all members of the sysplex are not at z/OS 1.2 and some or all systems are
running zFS:
Only systems running zFS see zFS file systems. The file system hierarchy appears
differently when viewed from systems with zFS mounted file systems than it does
from those systems not running zFS. Pathname traversal through zFS mountpoints
have different results in such cases since the zFS file system is not mounted on
those systems not running zFS.
zFS compatibility mode file systems owned by the system that can be automoved
are automoved to another system running zFS. If this function fails to find another
owner, the file system becomes unowned.
zFS file systems owned by the system that are noautomove become unowned.
File systems which are unowned are not visible in the file system hierarchy, but can
be seen from a D OMVS,F operator command. To recover a file system that is
mounted and unowned, the file system must be unmounted.
All zFS file systems owned by any of the systems are unmounted, even if this zFS
does not own any zFS file systems.
Note: This means that terminating zFS on any one system causes all zFS file
systems in the sysplex to be unmounted. Because of this, it is not recommended
at this time to allow zFS file systems to be shared between systems in a shared
HFS environment when all systems are not at z/OS 1.2 or higher. In a sysplex
environment where all members are not running z/OS 1.2 or higher, the following
configuration choices are recommended:
Restrict zFS usage to one member of the shared HFS sysplex group, or,
Restrict zFS usage to images not participating in a shared HFS sysplex
group.
Alternatively, you should restrict zFS usage to a multi-system shared HFS
sysplex where all systems are at z/OS 1.2 or higher.
2. Ensure the userid you are using has the appropriate access to perform the required tasks.
For example, you will require superuser access to issue the mounts; if you do not have it,
obtain READ access to the BPX.SUPERUSER profile in the FACILITY class.
3. Create temporary mount points and mount the system-specific HFS data sets at /tmp
directory mount points. This is required to make the newly-created HFS data sets available
to copy the required data into.
Example 8-8 contains a sample REXX to do this:
Example 8-8 Create /tmp directories and mount required HFS
/* REXX - create temporary mount points and mount the new HFS data sets */
parse source . . . . . . . omvs .      /* determine the run environment  */
tso = omvs <> 'OMVS'                   /* 1 if not running under OMVS    */
if tso then call syscalls 'ON'         /* enable the SYSCALL environment */
call syscall 'mkdir /tmp/var 755'      /* temporary mount points         */
call syscall 'mkdir /tmp/etc 755'
address tso
"MOUNT FILESYSTEM('OMVS.TEST.VAR') ",
   "MOUNTPOINT('/tmp/var') TYPE(HFS)"
"MOUNT FILESYSTEM('OMVS.TEST.ETC') ",
   "MOUNTPOINT('/tmp/etc') TYPE(HFS)"
exit
syscall:
parse arg cmd
address syscall cmd                    /* issue the z/OS UNIX service    */
return
4. Prior to copying the /etc and /var data from the version HFS, to guarantee integrity, you will
need to ensure that these directories are not being updated while the data is copied. You
may need to do the copy step during a change window or quiet period. To ensure there are
no updates during the copy process, you should mount the version HFS as READ only.
You can do this a number of ways, via ISHELL, OMVS or TSO. For example, in TSO you
could issue the following command, using the appropriate root HFS data set name:
Example 8-9 Mount version HFS as read/only prior to copy
TSO UNMOUNT FILESYSTEM('OMVS.TEST.ROOT') REMOUNT(READ)
5. The next step is to copy the /etc and /var directories from the version HFS into their own
system-specific HFS data sets. There are a number of ways to do this, including using the
pax or tar UNIX commands. These commands have numerous and complex command
parameters that are required to ensure the file and directory structure and the file
attributes are preserved during the copy.
To avoid this complexity, we recommend that you obtain a copy of the COPYTREE utility.
COPYTREE is a freely available utility that can run under TSO or the shell and is used to
make a copy of a file hierarchy preserving all file attributes. For information about
COPYTREE utility, refer to 8.4, Tools and documentation on page 134.
Issue COPYTREE commands to copy the directory contents from the version HFS into the
system-specific /etc and /var HFSs. Example 8-10 contains a sample REXX to do this:
Example 8-10 Copy directory contents using COPYTREE
/* REXX */
parse source . . . . . . . omvs .
tso= omvs<>'OMVS'
if tso then call syscalls on
address tso
"COPYTREE /etc /tmp/etc"
"COPYTREE /u /tmp/var"
exit
6. You now have the contents of the /etc and /var directories in their own HFS data sets. You
now need to mount these HFS data sets at the appropriate mount points to ensure these
are used, instead of the version HFS data set. To do this, you will need to unmount the /etc
and /var HFS data sets from the /tmp mountpoint and remount them at the /etc and /var
mountpoints. Example 8-11 contains a sample REXX to do this:
Example 8-11 Mount the new /etc and /var HFS data sets
/* REXX */
parse source . . . . . . . omvs .
tso= omvs<>'OMVS'
address tso
"UNMOUNT FILESYSTEM('OMVS.TEST.VAR') NORMAL"
"UNMOUNT FILESYSTEM('OMVS.TEST.ETC') NORMAL"
"MOUNT FILESYSTEM('OMVS.TEST.VAR') ",
"MOUNTPOINT('/var') TYPE(HFS) MODE(RDWR)"
"MOUNT FILESYSTEM('OMVS.TEST.ETC') ",
"MOUNTPOINT('/etc') TYPE(HFS) MODE(RDWR)"
exit
7. With the new /etc and /var HFS data sets mounted and in use, you can remount the
version HFS read/write, if required.
You can do this a number of ways, via ISHELL, OMVS or TSO. For example, in TSO you
could issue the following command, using the appropriate root HFS data set name:
Example 8-12 Mount version HFS RDWR after the copy
TSO UNMOUNT FILESYSTEM('OMVS.TEST.ROOT') REMOUNT(RDWR)
8. The last step is to update the BPXPRMxx member of Parmlib to ensure the new created
/etc and /var HFS data sets are mounted at the /etc and /var mountpoints at the next IPL.
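For example, using the data set names from the previous steps, the MOUNT statements in
BPXPRMxx might look like this:
MOUNT FILESYSTEM('OMVS.TEST.ETC')
      MOUNTPOINT('/etc')
      TYPE(HFS)
      MODE(RDWR)
      NOAUTOMOVE
MOUNT FILESYSTEM('OMVS.TEST.VAR')
      MOUNTPOINT('/var')
      TYPE(HFS)
      MODE(RDWR)
      NOAUTOMOVE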
8.4.1 Tools
The following tools should be used to help you successfully complete the merge:
BPXISYSR sample job in SYS1.SAMPLIB creates the sysplex root.
BPXISYSS sample job in SYS1.SAMPLIB creates the system-specific HFS data sets.
COPYTREE is a utility that can run under TSO or the shell and is used to make a copy of
a file hierarchy, preserving all file attributes. It can also be used to check the integrity of file
system structures without copying the file tree. It is available from IBM's z/OS UNIX Tools
and Toys Web site at the following URL:
http://www.ibm.com/servers/eserver/zseries/zos/unix/bpxa1ty2.html
8.4.2 Documentation
In addition, the following documentation contains information that will help you during this
exercise:
Hierarchical File System Usage Guide, SG24-5482
UNIX System Services File System Interface Reference, SA22-7808
z/OS Distributed File Service zSeries File System Implementation, SG24-6580
z/OS Security Server RACF Command Language Reference, SA22-7687
z/OS UNIX System Services, GA22-7800
Chapter 9.
Language Environment
considerations
This chapter discusses the considerations for Language Environment (LE) when merging a
system into an existing sysplex.
LE has two default option modules: CEECOPT for CICS, and CEEDOPT for non-CICS, such
as batch programs. They provide installation-wide defaults. The installation defaults
distributed by IBM can be changed by creating a new CEECOPT or CEEDOPT module and
installing it with an SMP/E usermod.
LE provides a facility to enable a specific application to override the default run-time option
values, using the user options module, CEEUOPT. The CEEUOPT module is linked with the
application program.
LE also provides a region-wide default options module, CEEROPT. CEEROPT addresses the
requirement to have different run-time options for individual CICS regions. It resides in a
user-specified load library. CEEROPT can be used with CICS and IMS with Library Routine
Retention (LRR). If you are running programs that require Language Environment in an
IMS/TM dependent region, such as an IMS message processing region, you can improve
performance if you use Language Environment library routine retention. For more information
see Chapter 9, Using Language Environment under IMS in Language Environment for
OS/390 Customization, SC28-1941 or z/OS Language Environment Customization,
SA22-7564.
BronzePlex considerations
In line with the sysplex target environment options outlined in 1.2, Starting and ending
points on page 2, moving the incoming system into a BronzePlex would typically be done to
obtain the benefits of PSLC or WLC. This would mean that the incoming system would not be
sharing the sysres or maintenance environment. Therefore, there should not be any LE
considerations in relation to the BronzePlex implementation.
GoldPlex considerations
Moving the incoming system into a GoldPlex would be done to share the same system
software environment (sysres) and maintenance environment, and therefore you will need to
review this chapter to ensure any LE considerations are addressed. The main consideration
with merging into a shared sysres environment is the potential for the run-time options being
different between the systems.
PlatinumPlex considerations
Moving the incoming system into a PlatinumPlex would be for system level resource sharing
within the sysplex, which includes a shared sysres, maintenance environment, Master
Catalog and all user catalogs, and so on. This configuration would allow you to obtain the
maximum benefits from being in the sysplex. For example, the PlatinumPlex configuration
would allow you to use ARM for subsystem restarts on the incoming system, and potentially
use the incoming system as an additional target for batch workloads from the systems in the
target sysplex. The main LE consideration with merging into a shared sysres environment is
the potential for the run-time options being different between the systems.
Table 9-1 (columns: Consideration, Note, Type, Done?)
The Type specified in Table 9-1 on page 137 relates to the sysplex target environment:
B represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. If none
of these considerations apply (in other words, if the target environment is a BronzePlex), then
there should not be any LE considerations when moving the incoming system into the target
sysplex.
Notes in Table 9-1 on page 137 are described below:
1. You will need to review LE run-time options if the incoming system is going to share the
sysres with other systems within the target sysplex. Ensure the run-time options being
used on the target sysplex are compatible with those on the incoming system prior to the
implementation. Refer to 9.4, LE run-time options on page 139.
2. You will need to consider downward compatibility issues:
If the incoming system is to be used as a target for subsystem restarts using ARM or a
similar automatic restart mechanism, and the incoming system is at a lower operating
system level than the target sysplex.
If the incoming system is to be used as a resource for the sysplex batch workload
within the target sysplex, and the incoming system is at a lower operating system level
than the target sysplex.
Review 9.3.2, Downward compatibility on page 139.
We don't recommend these configurations when merging a system into a sysplex. We do
recommend first bringing the incoming system up to the same operating system and
maintenance level as the target sysplex. This may, however, be a valid configuration
during a system upgrade cycle, after the merge.
3. You will also need to review the LE run-time options:
If the incoming system is to be used as a target for subsystem restarts using ARM, or a
similar automatic restart mechanism, and the incoming system is not sharing the
sysres with other systems within the target sysplex.
If the incoming system is to be used as a resource for the sysplex batch workload
within the target sysplex, and the incoming system is not sharing the sysres with other
systems within the target sysplex.
You will need to review the LE run-time options to ensure the options being used on the
incoming system are compatible with those on the target sysplex. We recommend that you
also review the default settings, as they can change between releases. This will ensure
that any applications running in a subsystem restarted on the incoming system (once it is
a member of the target sysplex) will work as expected. Refer to 9.4, LE run-time options
on page 139.
We don't recommend these configurations when merging a system into a sysplex. We do
recommend sharing the sysres. This may however, be a valid configuration during a
system upgrade cycle, after the merge.
4. You will also need to review your language(s) compile options if either the incoming
system or the target sysplex is used by application development to compile programs, and
the incoming system is going to share the sysres with other systems within the target
sysplex. Although this chapter is on LE, we thought it would be worth pointing this out.
9.3 Compatibility
Note the following compatibility support.
LE has two default option modules: CEECOPT for CICS, and CEEDOPT for non-CICS, such
as batch programs. They provide installation-wide defaults. The installation defaults
distributed by IBM can be changed by creating a new CEECOPT or CEEDOPT module and
installing it with an SMP/E usermod, and it is these usermods that you will need to review to
ensure compatibility. We recommend that you also compare the source of these usermods to
the values from the running systems. These run-time option values can be obtained in batch
for CEEDOPT (refer to 9.4.1, Displaying the run-time values in batch on page 140) and
within CICS for CEECOPT (refer to 9.4.2, Obtaining the run-time values in CICS on
page 142).
LE provides a facility to enable a specific application to override the default run-time option
values, using the user options module, CEEUOPT. The CEEUOPT module is linked with the
application program. When creating a CEEUOPT, only the run-time options and suboptions
whose values are to be changed need to be specified. The application specifies the required
run-time options, so merging the incoming system into the target sysplex should have no
impact on the application.
LE also provides a region-wide default options module, CEEROPT. CEEROPT addresses the
requirement to have different run-time options for individual CICS regions. It resides in a
user-specified load library. CEEROPT can be used with CICS and IMS with LRR (Library
Routine Retention). When creating a CEEROPT module, only the options whose values are
to be changed need to be specified. The region-wide default options module resides in a
user-specified load library, which would generally be region-specific and not on a sysres, so
merging the incoming system into the target sysplex should have no impact. However, we
recommend that you identify where this module resides if this option is being used.
To display the run-time option values currently in effect, run a program with the
RPTOPTS(ON) run-time option specified in the PARM field of the EXEC statement. For example:
Example 9-1 Find the current LE run-time options using a PL/I program
//STEP010 EXEC PGM=pliprog,PARM='RPTOPTS(ON)/'
For COBOL users with CBLOPTS(ON) as the installation default for the CBLOPTS run-time
option, the slash should go before the LE run-time options:
//stepname EXEC PGM=program_name,
//         PARM='program parameters/run-time options'
For example:
Example 9-2 Find the current LE run-time options using a COBOL program
//STEP010 EXEC PGM=cobolpgm,PARM='/RPTOPTS(ON)'
Tip: Make sure you use SCEERUN as the run-time library when running your program.
This will produce a run-time options report similar to the one shown in Figure 9-1.
In the run-time options report, the LAST WHERE SET column indicates where each option
obtained its setting. In this column, look for the words Installation default. These options
obtained their value from the installation default values (CEEDOPT/CEECOPT).
The run-time options which cannot be specified in CEEDOPT, CEEROPT, or CEEUOPT will
be indicated by the phrase Default setting. If other phrases appear, this means the value for
these run-time options is coming from other than the installation defaults as outlined in
Table 9-2.
Table 9-2 LE run-time option report
LAST WHERE SET       Meaning
Programmer default   Option was set by the application (for example, in a CEEUOPT module)
Override             Option was overridden after the installation default was applied
Invocation command   Option was specified on the JCL in PARM= (you should see this for RPTOPTS)
9.5.1 Tools
There are no tools to help with this activity.
9.5.2 Documentation
The following publications provide information that may be helpful in managing Language
Environment:
Language Environment for OS/390 & VM Programming Guide, SC28-1939
Language Environment for OS/390 Customization, SC28-1941
z/OS Language Environment Programming Guide, SA22-7561
z/OS Language Environment Customization, SA22-7564
Chapter 10.
10.1 Introduction
When starting to implement a sysplex, an area that many installations look at first is the
sharing of system data sets. The level of sharing will vary by installation, and depends on
where you want to end up. In this chapter, we discuss the considerations for the following
three target environments:
The BronzePlex scenario is where the minimum number of resources are shared; this is
basically a non-shared environment (several copies of each of the system data sets) with
some special considerations because the systems are in a sysplex. Each copy must be
maintained separately, with each one likely to be unique. System programmers must keep
track of the differences, the reasons for those differences, and updates of the appropriate
ones. When an update is introduced across a number of systems, multiple copies of the
same data sets must be changed. Much of this process can be simplified by consolidating
onto one or two sets of system data sets (e.g., Parmlib) per Parallel Sysplex cluster, and
that is why the PlatinumPlex scenario is often more attractive.
In a GoldPlex scenario, a subset of the system data sets is shared. For example, the
sysres is shared among systems, but the Proclib data set might not be, especially if the
incoming system is totally unrelated to the existing systems. For example, if you have just
one JES2 started task JCL, it has to contain all the user Proclibs; however, half the
volumes will be offline to each system, meaning that half the JES2 procs will be missing,
so you would get a JCL error when you tried to start JES2.
We have called the ideal scenario a PlatinumPlex. Instead of having multiple copies of
system data sets, such as Parmlib, Proclib, Master Catalog, and sysres volumes, ideally
you would have just one of each in a Parallel Sysplex. The consolidation and
standardization of these resources can significantly ease the management of a z/OS
environment. System programmers can maintain these libraries much more easily. For
example, through the use of symbols and cloning, a change to one member of a data set
can be sufficient to implement a change across many systems. This consolidation also
leads to standardization of these system resources across the Parallel Sysplex, making it
easier to implement changes and manage the systems.
To make it possible to implement a PlatinumPlex, z/OS provides a number of capabilities
such as system symbols, shared sysres, shared Master Catalog, and others. These are
discussed in the following sections.
Substitution text is the character string that the system substitutes for a symbol each
time it appears. Substitution text can identify characteristics of resources, such as the
system on which a resource is located, or the date and time of processing. You define
static system symbols and their substitution texts in the IEASYMxx Parmlib member.
Examples of static system symbols include:
&SYSNAME   The name of the system
&SYSPLEX   The name of the sysplex
&SYSR1     The volume serial of the volume from which the system was IPLed (the sysres)
System symbols can be used in most of the Parmlib members, system commands, JCL for
started tasks and TSO/E logon procedures, JES2 initialization statements and commands,
JES3 commands, dynamic allocations, and other places. The following are some examples of
how system symbols can be used in a Parallel Sysplex consisting of two systems named
SYSA and SYSB:
Data set names: If the systems use the same SMFPRMxx Parmlib member, you can
specify the following naming pattern to create different SMF data sets on each system:
SYS1.&SYSNAME..MAN1. This definition produces:
SYS1.SYSA.MAN1 on system SYSA
SYS1.SYSB.MAN1 on system SYSB
Parmlib members: If the systems use the same IEASYSxx Parmlib member and require
different CONSOLxx Parmlib members, you can specify CON=&SYSCLONE. This
definition results in:
CON=SA on system SYSA, so member CONSOLSA would be used on system SYSA
CON=SB on system SYSB, so member CONSOLSB would be used on system SYSB
To be sure that the symbols are correctly spelled and set according to your needs, we
recommend that you use the symbol syntax checker tool, a member of SYS1.SAMPLIB.
Further information about how to install and use it, as well as its limitations, is available in the
Appendix B, Symbolic Parmlib Parser, of z/OS MVS Initialization and Tuning Reference,
SA22-7592.
If you only need to verify a new Parmlib member's use of the current system symbols, you can
run the IEASYMCK sample program to see how the contents of the Parmlib member will
appear after symbolic substitution occurs. IEASYMCK is located in SYS1.SAMPLIB. See the
program prolog for details.
You can also enter the DISPLAY SYMBOLS operator command to display the static system
symbols and associated substitution texts that are in effect for a system. See MVS System
Commands, SA22-7627 for information about the use of this command.
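For example, the output of the D SYMBOLS command might look similar to the following;
the symbol names and values shown here are illustrative only:
D SYMBOLS
IEA007I STATIC SYSTEM SYMBOL VALUES
        &SYSCLONE. = "SA"
        &SYSNAME.  = "SYSA"
        &SYSPLEX.  = "PLEX1"
        &SYSR1.    = "S01RES"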
Starting with DFSMS 1.4, you could use system symbols, for example &SYSR1, to represent
volsers in the catalog entries of non-VSAM, non-SMS data sets. This is known as Indirect
Volume Serial Support. With DFSMS/MVS 1.5 and later, it is also possible to use system
symbols in extended alias definitions for data set names. These can then be related to
different real data set names on different systems. This can be useful when migrating from
one software release to the next. The advantage of this function is that your JCL does not
have to be modified with respect to the data set names that you wish to use. What you must
consider is the system on which you run the job. Selecting the system on which you run the
job can be accomplished by the use of WLM Scheduling Environments. For more information,
see Chapter 4, WLM considerations on page 51.
Some customers that use system symbols extensively have come up against the limit on the
number of system symbols: prior to z/OS 1.4, you could have about 98 system symbols. In
z/OS 1.4, the number of system symbols has been increased to at least 800 (the actual
number depends on the size of the symbol names and the size of their values).
Table 10-1 (columns: Consideration, Note, Type, Done). Among the considerations it lists are:
Check that the incoming system and target sysplex have the same relative data set
placement on their sysres volumes (Type: G, P)
Ensure there are available logical paths to the sysres for the incoming system (Type: G, P)
The Type specified in Table 10-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 10-1 are described below:
1. If the sysres has a high access rate, performance is an important point to consider.
FICON attachment is preferable as this provides support for higher I/O rates and more
logical paths. The performance through one FICON attachment should provide
approximately 4 to 5 times the throughput received with an ESCON connection.
Also, you should consider the use of parallel access volumes (PAV) and control units with
large caches. For more information about PAV, refer to 4.3, Considerations for merging
WLMplexes on page 54.
You should also aggressively use facilities such as LLA and VLF to minimize accesses to
libraries on the sysres.
Contention, and therefore performance, is most likely to be an issue during a sysplex IPL,
when many systems are IPLing, and therefore accessing the sysres, at the same time.
However, sysplex IPLs should be a very rare event if you follow our recommendations for
rolling IPLs.
2. The sysres should contain only data sets that meet the following criteria:
All the data sets must be read only.
The data sets should all be maintained only by SMP/E. Any data sets that are updated
manually, for example SYS1.PARMLIB, should be placed on a different volume.
Note: If you will be using SYS1.PARMLIB to contain your own customized members,
make sure you remember to update the SMP/E PARMLIB DDDEF to point at
SYS1.IBM.PARMLIB (which should remain on the sysres).
There should not be anything on the volume that is specific to one system or, indeed,
one sysplex (so the same logical volume can be cloned across multiple operating
environments). If you follow our naming conventions, all system-specific data sets
would have the system name as part of the data set name, and be placed on a volume
other than the sysres.
The sysres should only contain data sets that will be replaced by your next ServerPac.
Otherwise, you are faced with the chore of moving those data sets manually when you
migrate to the next operating system release.
See the section entitled Recommended data set placement in z/OS Planning for
Installation, GA22-7504. This information helps you decide which data sets should be
placed on each of your sysres volumes.
3. Your objective should be to be able to move to the shared sysres and back again (if
necessary) transparently. In order to do this (without having to change Master Catalog
entries), you must ensure that all sysres data sets are placed on the same relative
volumes, and that they are all cataloged indirectly. If the Master Catalog contains an entry
saying that PRODUCT.V1R1M1.LOAD is on the second sysres volume, that entry will
point at either the second sysres volume of your incoming system sysres or the second
sysres volume of your target sysplex sysres, depending on which sysres you IPLed from.
If this is not the case, you will get a data set not found message on some of the systems.
You can check this by searching the output from an IDCAMS LISTCAT ALL for any
occurrences of the sysres volume serial number. If you find any, either change them to be
indirect, or move the data set off the sysres if it does not belong there. You can compare
the contents of each sysres volume easily using ISPF 3.4 and SUPERC to compare the
contents.
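For example, a job similar to the following lists the Master Catalog so that the output can be
scanned for hardcoded sysres volsers; the catalog name is a placeholder and should be
replaced with your own Master Catalog name:
//LISTCAT  EXEC PGM=IDCAMS
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  LISTCAT CATALOG(MCAT.V#@$#M1) ALL
/*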
Refer to 10.4, Sharing Master Catalogs on page 150 for more information about indirect
volume serial support.
4. If you are going to share a sysres between a number of system images, you must ensure
that the sysres you plan to use contains all the products that will be required on each
system. Presumably you will be changing the incoming system to use the sysres of the
target sysplex. In this case, you must check that that sysres has the products and releases
that are used on the incoming system.
5. You should also think ahead to the next ServerPac that you will install. If the same Master
Catalog will be used to access both the old and new sysres, you must make sure that the
new ServerPac places data sets on the same relative volumes. For this reason, you
should make sure you allow enough free space on the sysres volumes for normal
release-to-release growth of the product libraries.
6. All systems that are sharing the sysres should have the same maintenance and release
policy. For example, if you have a system that gets updated with every release of the
operating system, that would not be a suitable candidate for sharing a sysres with a
system that gets updated less frequently.
7. The number of images that can share a sysres is limited predominately by the number of
logical paths to the DASD device on which it resides.
8. We recommend that you have an alternate sysres on which you apply the maintenance.
After applying maintenance, IPL one system from the alternate sysres and perform tests
to ensure that everything is functioning correctly. This way, if there is a problem with the
new sysres, only one system will be impacted, and you can easily fall back to the previous
version of the sysres. Once you are satisfied that everything is functioning correctly, move
the remaining systems over to that sysres in as short a time as your schedules permit.
Finally, you could bring the incoming system into the sysplex using its old sysres and then
move to the target sysplex sysres at some later time, hopefully not too long after the cutover.
The disadvantage of this approach is that the incoming system may be at a different service
or release level than the other systems in the sysplex, meaning that you may need to apply
some toleration or compatibility service. Also, you may not have had the opportunity of testing
that mix of release or service levels together in the sysplex, whereas if you move to the new
sysres before the cutover, at least you know that the release and service levels being used by
all the systems in the sysplex have been successfully tested together prior to the cutover.
Extended Alias Support is used to provide the ability to have only one alias for all systems,
but have different libraries used, depending on which system the job runs. A parameter for
the DEFINE ALIAS command, SYMBOLICRELATE, allows system symbols to be used in
the specification of the base data set name.
Suppose you have two systems, SYSA and SYSB, and the IEASYMxx Parmlib member
definition shown in Example 10-1:
Example 10-1 Sample of IEASYMxx Parmlib member
SYSDEF   SYSCLONE(&SYSNAME(3:2))
         SYMDEF(&CLOCK='VM')                /* USE CLOCKVM            */
         SYMDEF(&COMMND='00')               /* USE COMMND00           */
         SYMDEF(&LNKLST='C0,C2')            /* LNKLST                 */
         SYMDEF(&VATLST='00')               /* VATLST                 */
         SYMDEF(&SMFPARM='00')              /* POINT TO SMFPRM00      */
         SYMDEF(&SSNPARM='00')              /* POINT TO IEFSSN00      */
         SYMDEF(&BPXPARM='FS')              /* SYSPLEX FILE SHARING   */
SYSDEF   HWNAME(SCZP702)
         LPARNAME(A01)
         SYSNAME(SYSA)
         SYSPARM(00,01)
         SYMDEF(&SYSNAM='SYSA')
         SYMDEF(&SYSR2='&SYSR1(1:3).RS2')   /* 2nd SYSRES logical ext */
         SYMDEF(&SYSR3='&SYSR1(1:3).RS3')   /* 3rd SYSRES logical ext */
         SYMDEF(&SYSID1='1')
         SYMDEF(&OSZOSREL='ZOSR12')
         SYMDEF(&PRODVR='V1R3M0')
         SYMDEF(&CLOCK='00')                /* USE CLOCK00            */
SYSDEF   HWNAME(SCZP702)
         LPARNAME(A02)
         SYSNAME(SYSB)
         SYSPARM(00,01)
         SYMDEF(&SYSNAM='SYSB')
         SYMDEF(&SYSR2='&SYSR1(1:5).2')     /* 2nd SYSRES logical ext */
         SYMDEF(&SYSR3='&SYSR1(1:5).2')     /* 3rd SYSRES logical ext */
         SYMDEF(&SYSID1='2')
         SYMDEF(&OSZOSREL='ZOSR13')
         SYMDEF(&PRODVR='V1R4M0')
         SYMDEF(&CLOCK='00')                /* USE CLOCK00            */
Note that for SYSA, the symbol &PRODVR. is defined as V1R3M0 and for SYSB,
&PRODVR. is defined as V1R4M0.
There is only one alias in the Master Catalog, created using the SYMBOLICRELATE
parameter:
DEFINE ALIAS (NAME(SYS1.PRODUCT) SYMBOLICRELATE('SYS1.&PRODVR..PRODUCT'))
This single alias definition has the ability to refer to different data sets, depending on which
system the job runs. The result is as follows:
For SYSA: alias SYS1.PRODUCT refers to SYS1.V1R3M0.PRODUCT
For SYSB: alias SYS1.PRODUCT refers to SYS1.V1R4M0.PRODUCT
In this case, the alias name is resolved at the time of use, rather than at the time of
definition. When sharing systems are ready to upgrade to newer versions of the product
(new data sets), they only need to change the definition of the appropriate system symbol
or symbols to access the new data set by the same alias.
For further information, refer to z/OS DFSMS:Managing Catalogs, SC26-7409.
Indirect volume serial support allows the system to dynamically resolve volume and device
type information for non-VSAM data sets that reside on either the system residence
volume (sysres) or one or more logical extensions to the sysres volume. If all the sysres
data sets do not fit on a single DASD volume, you can use additional volumes as logical
extensions to sysres and refer to them indirectly. This allows you to change the volume
serial number or device type of sysres or its logical extension volumes without having to
recatalog the non-VSAM data sets on that volume.
For data sets that reside on the sysres volume, you can specify the indirect volume
serial as either ****** or &SYSR1. The system dynamically resolves ****** or &SYSR1
to the volume serial of the volume from which the system was IPLed (the sysres
volume).
For data sets that reside on the logical extensions to sysres, you must specify a static
system symbol in place of the volume serial. This static symbol must be defined in the
IEASYMxx Parmlib member.
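For illustration, a data set residing on the second sysres volume could be cataloged with a
command similar to the following, assuming the &SYSR2 symbol is defined in IEASYMxx as
shown in Example 10-2 (the data set name is a placeholder):
DEFINE NONVSAM (NAME(PRODUCT.V1R1M1.LOAD) -
       DEVICETYPES(3390) -
       VOLUMES(&SYSR2))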
Using indirect catalog entries, together with the extended alias support, allows you to
share a Master Catalog among multiple images that use different volumes with different
names for the sysres volumes and their extensions. You can also do this using a single
SYMDEF for all images in a shared Parmlib data set. Thus, once set up, no future updates
should be needed to continue using this support.
If your installation IPLs with different sysres volumes and you establish a naming
convention for the sysres and its logical extension volumes, you can create a single
IEASYMxx Parmlib member that can be used regardless of which sysres volume is used
to IPL the system. To do this, use substrings of the sysres volume serial (&SYSR1) in
defining the symbols for the extension volume serials (&SYSR2, &SYSR3, and so on). For
example, assume you have the following sysres volumes and logical extensions listed in
Table 10-2.
Table 10-2 Sysres volumes and logical extensions
System    Sysres volume    Logical extensions
SYS1      S01RES           S01RS2, S01RS3
SYS2      S02RES           S02RS2, S02RS3
SYS3      DEVRES           DEVRS2, DEVRS3
You can refer to them using a single IEASYMxx Parmlib member with the following
statements:
Example 10-2 Sample definition for symbols &SYSR2 and &SYSR3
SYSDEF   SYSCLONE(&SYSNAME(3:2))
         SYMDEF(&CLOCK='VM')                /* USE CLOCKVM             */
         SYMDEF(&COMMND='00')               /* USE COMMND00            */
         SYMDEF(&LNKLST='C0,C2')            /* LNKLST                  */
         SYMDEF(&VATLST='00')               /* VATLST                  */
         SYMDEF(&SMFPARM='00')              /* POINT TO SMFPRM00       */
         SYMDEF(&SSNPARM='00')              /* POINT TO IEFSSN00       */
         SYMDEF(&BPXPARM='FS')              /* SYSPLEX FILE SHARING    */
         SYMDEF(&SYSR2='&SYSR1(1:3).RS2')   /* 2nd SYSRES logical ext  */
         SYMDEF(&SYSR3='&SYSR1(1:3).RS3')   /* 3rd SYSRES logical ext  */
Note: The system automatically sets &SYSR1 to the volume serial of the IPL volume.
You cannot define &SYSR1.
For details about how to use indirect volume serial support, see z/OS MVS Initialization
and Tuning Reference, SA22-7592.
Global Resource Serialization (GRS)
The SYSIGGV2 reserve is used to serialize the entire catalog BCS component across all
I/O, as well as to serialize access to specific catalog entries. The SYSZVVDS reserve is
used to serialize access to associated VVDS records.
The SYSZVVDS reserve along with the SYSIGGV2 reserve provide an essential
mechanism to facilitate cross-system sharing of catalogs, and it is critical that these
reserves are handled correctly in the GRS RNL. See item 10 on page 156 for more
information about this.
Table 10-3 (columns: Consideration, Note, Type, Done). Among the considerations it lists are:
If using ECS, ensure that the structure is large enough for the target number of catalogs
using that facility (Type: G, P)
Confirm that the multi-level alias value is set the same in both systems (Type: G, P)
The Type specified in Table 10-3 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 10-3 are described below:
1. Ideally a Master Catalog should contain as few entries as possible. It should contain:
All the data sets on the sysres volumes, plus other required system data sets (SMF,
Parmlib, Proclib, page data sets, and so on). This should basically be just the data sets
that were delivered on the ServerPac.
User catalogs
Aliases pointing to the user catalogs
Possibly the distribution libraries that were delivered with the ServerPac
Having a clean Master Catalog makes the merge easier, simply because there are fewer
entries to be merged. It also makes catalog recovery easier, should the catalog ever get
damaged. And it provides better performance than a catalog with thousands of entries.
2. If the incoming system is at the same DFSMS release and service level as the
target sysplex, there are no considerations. However, if one of the systems is at a different
release of DFSMS, you must ensure that any required toleration service has been applied.
Information about the requirements for each release of DFSMS can be found in z/OS
DFSMS Migration, GC26-7398.
3. You can issue the MODIFY CATALOG,ALLOCATED operator command to verify the
characteristics of each of the Master Catalogs. See Example 10-3.
Example 10-3 MODIFY CATALOG,ALLOCATED operator command
F CATALOG,ALLOCATED
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC348I ALLOCATED CATALOGS 770
*CAS***************************************************************
* FLAGS -VOLSER-USER-CATALOG NAME                                 *
* Y---R- #@$#M2 0001 MCAT.V#@$#M2                                 *
* Y-I-R- #@$#I1 0001 UCAT.V#@$#I1                                 *
* YSI-R- #@$#A1 0001 UCAT.V#@$#A1                                 *
* Y-I-R- #@$#Q1 0001 UCAT.V#@$#Q1                                 *
* Y-I-R- #@$#C1 0001 UCAT.V#@$#C1                                 *
* Y-I-R- #@$#D1 0001 UCAT.V#@$#D1                                 *
* Y-I-R- #@$#M1 0001 UCAT.V#@$#M1                                 *
* Y-I-R- #@$#M1 0001 MCAT.V#@$#M1                                 *
*******************************************************************
* Y/N-ALLOCATED TO CAS, S-SMS, V-VLF, I-ISC, C-CLOSED, D-DELETED, *
* R-SHARED, A-ATL, E-ECS SHARED, K-LOCKED                         *
*CAS***************************************************************
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
4. You can share catalogs among systems which are running different levels of DFSMS, but
you must be sure that the incoming system has the minimum DFSMS level required to
support the features implemented on the target sysplex. For information on sharing
catalogs between different levels of DFSMS, see z/OS DFSMS Migration, GC26-7398.
5. The Enhanced Catalog Sharing (ECS) structure name is SYSIGGCAS_ECS, and the
structure size is related to the number of catalogs that will be enabled for ECS.
You should use the CFSizer tool to determine how large the structure should be. The
CFSizer is available at the following Web site:
http://www.ibm.com/servers/eserver/zseries/cfsizer/
6. ECS cannot be used with catalogs that are shared outside the Parallel Sysplex. If the
incoming system has catalogs that were previously shared with the systems in the target
sysplex, then those catalogs were previously ineligible for use with ECS. If the only
systems using those catalogs after the merge will all be in the new expanded sysplex, then
you might consider enabling those catalogs for ECS after the merge.
Note: Although you can only have one ECS structure per sysplex (because the name is
fixed), it is still possible to use ECS in a GoldPlex or even a BronzePlex, where half the
DASD are offline to half the systems, and the remaining DASD are offline to the other
half of the systems. It is acceptable to use ECS with a catalog that is unavailable to
some of the systems in the sysplex, as long as you do not try to use the catalog from a
system that does not have access to the volume containing the catalog and do not
access the catalog from a system outside the sysplex.
7. A multilevel catalog alias is an alias of two or more high-level qualifiers. You can define
aliases of up to four high-level qualifiers. Using multilevel aliases, you can have data sets
with the same high-level qualifier cataloged in different catalogs, without using JOBCAT or
STEPCAT DD statements. For example:
a. Alias PROJECT1.TEST points to catalog SYS1.ICFCAT.PRO1TEST,
b. Alias PROJECT1.PROD points to catalog SYS1.ICFCAT.PRO1PROD, and
c. Alias PROJECT1 points to catalog SYS1.ICFCAT.PROJECT1.
If the alias search level is 2, then data sets are cataloged as shown in Table 10-4.
Table 10-4 Multilevel Alias Facility
Data set              Catalog                Reason
PROJECT1.UTIL.CNTRL   SYS1.ICFCAT.PROJECT1   Second qualifier matches no two-level alias, so alias PROJECT1 is used
PROJECT1.PROD.DATA    SYS1.ICFCAT.PRO1PROD   First two qualifiers match alias PROJECT1.PROD
PROJECT1.PROD         SYS1.ICFCAT.PROJECT1   Name has only two qualifiers, so the two-level aliases do not apply and alias PROJECT1 is used
PROJECT1.TEST.CNTRL   SYS1.ICFCAT.PRO1TEST   First two qualifiers match alias PROJECT1.TEST
PROJECT1.TEST.A.B     SYS1.ICFCAT.PRO1TEST   First two qualifiers match alias PROJECT1.TEST
In this example, data being used for tests (TEST) is isolated from production programs
and data (PROD) and other miscellaneous files. This isolation is desirable for data
protection and availability. Backup and recovery of one catalog would not affect projects
using the other catalogs.
The alias search level is specified in the SYSCATxx member of SYS1.NUCLEUS or
LOADxx member of SYS1.PARMLIB. It can also be changed without an IPL using the
MODIFY CATALOG,ALIASLEVEL operator command. For more information, see z/OS
DFSMS: Managing Catalogs, SC26-7409.
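To illustrate, the current setting can be displayed, and changed without an IPL, with the Catalog Address Space MODIFY command; the alias level shown here is only an example:
F CATALOG,REPORT
F CATALOG,ALIASLEVEL(2)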
For example, if the target sysplex has aliaslevel=2 and the incoming system has
aliaslevel=3, you need to adapt the data set naming conventions of the incoming system to
match the target sysplex's rules. In this case, you may need to merge some user catalogs,
because you are now limited to only two levels in the aliases.
Note: While the aliaslevel can be set to a different value on each system, it should be set
to the same value on every system that is sharing the affected catalogs. Using a different
aliaslevel value on each system can lead to errors and inconsistent behavior, depending
on where a job is run.
8. Make sure everyone has the correct RACF access to the new Master Catalog. One way to
do this is to check the access list for the Master Catalog on the incoming system and
ensure that those people also have the required access level to the new Master Catalog.
We strongly recommend having a separate RACF profile for the Master Catalog, with a
very restricted list of people that have UPDATE or higher access.
To open a catalog as a data set, you must have ALTER authority. When defining an
SMS-managed data set, the system only checks to make sure the user has authority to
the data set name and SMS classes and groups. The system selects the appropriate
catalog, without checking the user's authority to the catalog. You can define a data set if
you have ALTER or OPERATIONS authority to the applicable data set profile.
For more information, see the chapter "Protecting Catalogs" in z/OS DFSMS: Managing
Catalogs, SC26-7409.
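As a simple sketch (the group name SYSPROG is a placeholder, and the catalog name is the one shown in Example 10-3), such a restricted profile might be set up with commands along these lines:
ADDSD  'MCAT.V#@$#M2' UACC(READ)
PERMIT 'MCAT.V#@$#M2' ID(SYSPROG) ACCESS(UPDATE)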
9. Caching catalogs in storage is the simplest and most effective method of improving
catalog performance. This reduces the I/O required to read records from catalogs on
DASD.
Two kinds of cache are available exclusively for catalogs:
a. The in-storage cache (ISC) cache is contained within the Catalog Address Space
(CAS). Its objective is to cache only those records that are read directly. It is the default
and ideally would only be used for the Master Catalog. Since ISC is the default catalog
cache, catalogs are cached in the ISC unless you specify that the catalog is to use
CDSC, or unless you use the MODIFY CATALOG operator command to remove the
catalog from the ISC.
b. The catalog data space cache (CDSC) is separate from CAS and uses the MVS VLF
component, which stores the cached records in a data space. You can add catalogs to
the CDSC only by editing the COFVLFxx member to specify the catalogs, stopping
VLF, then starting VLF. A sample COFVLFxx statement to add a catalog to CDSC is
shown in Example 10-4.
Example 10-4 Defining a catalog to CDSC
CLASS NAME(IGGCAS) EMAJ(SYS1.SAMPCAT)
Although you can use both types of catalog cache, you cannot cache a single catalog in
both types of cache simultaneously. We recommend using CDSC for all user catalogs, and
reserving the ISC for the Master Catalog.
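Remember that VLF reads COFVLFxx only when it starts, so after changing the member you have to recycle VLF. Assuming the updated member is COFVLF00 (the suffix is only an example), the operator commands would be along these lines:
P VLF
S VLF,SUB=MSTR,NN=00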
Example 10-3 on page 154 shows how to use the MODIFY CATALOG,ALLOCATED operator
command to verify how a catalog is being cached.
For more information, see the "Caching Catalogs" section in z/OS DFSMS: Managing Catalogs,
SC26-7409.
10.Verify that SYSZVVDS is defined in the GRS RNL EXCLUSION list. There should be a
generic entry in the CONVERSION list for SYSIGGV2. In addition, if any catalogs are
shared with systems outside the sysplex, you must include an entry in the EXCLUSION
list with a QNAME of SYSIGGV2 and an RNAME of the catalog name. For example:
RNLDEF RNL(EXCL) TYPE(SPECIFIC) QNAME(SYSIGGV2) RNAME(ucat.fred)
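To illustrate, the corresponding generic GRSRNLxx entries described above might look like this (shown only as a sketch; verify them against your own RNLs):
RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(SYSZVVDS)
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)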
For further information concerning catalog serialization, see the section entitled VSAM
and ICF Catalogs in z/OS MVS Planning: Global Resource Serialization, SA22-7600.
11.To protect yourself from failures, we recommend that you have a procedure for recovering
a Master Catalog from a backup, and also have a minimal Master Catalog that can be
used to bring a system up that can be used to run the restore. The minimal Master Catalog
should contain entries for all the data sets you need to IPL your system, log on to your TSO
ID, and run the catalog recovery job. You should create a SYSCATxx member in
SYS1.NUCLEUS containing the name of the backup catalog, or maintain an alternate
LOADxx member specifying the backup catalog name.
To use the alternate SYSCATxx member, during the IPL, specify an initialization message
suppression indicator (IMSI) on the LOAD parameter that tells the system to prompt for
the Master Catalog response. You will then be prompted with a message requesting the
two-character suffix of the SYSCAT member you wish the system to use.
You should also review z/OS DFSMS: Managing Catalogs, SC26-7409 for a discussion
about maintaining a backup of the Master Catalog.
Whichever method you choose, you also need to keep the catalogs in sync once you get them
there. Fortunately the Master Catalog should not be updated frequently, and there should only
be a small number of people with update access. Therefore, it should be possible to
implement procedure changes to ensure that every time one Master Catalog is updated, the
other one is also updated at the same time. You should also implement a regularly scheduled
job to check the two catalogs to ensure they remain in sync, and identify any differences that
may appear.
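One crude way to do such a check, sketched here using the Master Catalog names shown in Example 10-3, is to list both Master Catalogs with IDCAMS and compare the two listings with whatever compare utility you normally use:
//LISTMCAT EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LISTCAT CATALOG(MCAT.V#@$#M1) NAMES
  LISTCAT CATALOG(MCAT.V#@$#M2) NAMES
/*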
Having synchronized the Master Catalogs, the next thing to decide is when to change the
incoming system to use the target sysplex Master Catalog. While it is possible to share a
Master Catalog (or any catalog) between systems that are not in the same sysplex, this relies
on using Reserve/Release to serialize accesses to the catalog. Even though the Master
Catalog typically has a very low rate of updates, we still feel it is best to completely avoid
Reserves against the Master Catalog, if at all possible. Therefore, we strongly recommend
waiting until after the cutover to switch the incoming system to the target sysplex Master
Catalog.
Parmlib member and function:
ADYSETxx
ALLOCxx
APPCPMxx
ASCHPMxx
BPXPRMxx: Parameters that control the z/OS UNIX System Services environment and the hierarchical file system (HFS). The system uses these values when initializing the z/OS UNIX System Services kernel.
CLOCKxx: Parameters that control operator prompting to set the TOD clock, specifying the difference between the local time and GMT, and ETR usage.
CNGRPxx
COFDLFxx: Allows a program to store DLF objects that can be shared by many jobs in virtual storage managed by Hiperbatch.
COFVLFxx
COMMNDxx
CONFIGxx
CONSOLxx
COUPLExx
CSVLLAxx: Allows an installation to list the entry point name or LNKLST libraries that can be refreshed by the MODIFY LLA,UPDATE=xx command.
CSVRTLxx
CTncccxx
DIAGxx
EXITxx
EXSPATxx
GRSCNFxx
GRSRNLxx: Resource name lists (RNLs) that the system uses when a global resource serialization complex is active.
IEAAPP00
IEACMD00
IEADMCxx
IEAFIXxx: Names of modules to be fixed in central storage for the duration of the IPL.
IEAICSxx
IEAIPSxx
IEALPAxx
IEAOPTxx
IEAPAKxx
IEASLPxx
IEASVCxx
IEASYMxx
IEASYSxx: System parameters. Multiple system parameter lists are valid. The list is chosen by the operator SYSP parameter or through the SYSPARM statement of the LOADxx Parmlib member (see Chapter 58, LOADxx (System Configuration Data Sets), in z/OS MVS Initialization and Tuning Reference, SA22-7592, for more information).
IECIOSxx: Parameters that control missing interrupt handler (MIH) intervals and update hot I/O detection table (HIDT) values.
IEFSSNxx
IFAPRDxx
IGDSMSxx: Initializes the Storage Management Subsystem (SMS) and specifies the names of the active control data set (ACDS), the communications data set (COMMDS), and other SMS parameters.
IKJTSOxx
IVTPRM00
LNKLSTxx
LOADxx: Specifies data sets MVS uses to configure your system. Supports &SYSR1 as a volser. No other system symbol support.
LPALSTxx
MMSLSTxx: Specifies information that the MVS message service (MMS) uses to control the languages that are available in your installation.
MPFLSTxx
MSTJCLxx: Contains the master scheduler job control language (JCL) that controls system initialization and processing. For more information, see 10.5.1, MSTJCLxx Parmlib member on page 161.
PFKTABxx: Contains the definitions for program function key tables (PFK tables).
PROGxx
SCHEDxx: Provides centralized control over the size of the master trace table, the completion codes eligible for automatic restart, and programs to be included in the PPT.
SMFPRMxx
TSOKEYxx
VATLSTxx: Volume attribute list that defines the mount and use attributes of direct access volumes.
XCFPOLxx: Specifies the actions to be taken if a system fails to update its system status in the sysplex CDS. This member is ignored if SFM is active.
//MSTRJCL  JOB  MSGLEVEL=(1,1),TIME=1440
//         EXEC PGM=IEEMB860,DPRTY=(15,15)
//STCINRDR DD   SYSOUT=(A,INTRDR)
//TSOINRDR DD   SYSOUT=(A,INTRDR)
//IEFPDSI  DD   DSN=SYS1.PROCLIB,DISP=SHR
//         DD   DSN=CPAC.PROCLIB,DISP=SHR
//         DD   DSN=SYS1.IBM.PROCLIB,DISP=SHR
//         DD   DSN=SYS1.&SYSNAME..PROCLIB,DISP=SHR
//SYSUADS  DD   DSN=SYS1.UADS,DISP=SHR
//SYSLBC   DD   DSN=SYS1.BRODCAST,DISP=SHR
The system searches the libraries for procedure MYPROC in the following order:
1. SYSPLEX.PROCS.JCL
2. PLEX1.PROCS.JCL
3. PLEX2.PROCS.JCL
4. PLEX3.PROCS.JCL
5. SYS1.PROCLIB
In the JES2 started task JCL, you can have multiple PROCxx statements, each potentially
pointing to a different set of Proclibs. In the JES2PARM, you can say which PROCxx
statement you want each job class to use. For example:
Example 10-8 JES2 started task JCL
//JES2     PROC
//IEFPROC  EXEC PGM=HASJES20,TIME=1440,DPRTY=(15,14)
//HASPLIST DD   DDNAME=IEFRDER
//HASPPARM DD   DSN=SYS1.PARMLIB(J2USECF),DISP=SHR
//PROC00   DD   DSN=SYS1.PROCLIB,DISP=SHR
//         DD   DSN=CPAC.PROCLIB,DISP=SHR
//PROC01   DD   DSN=SYS1.&SYSNAME..PROCLIB,DISP=SHR
//         DD   DSN=SYS1.PROCLIB,DISP=SHR
You can use system symbols in the MSTJCL member, so you can have a mix of shared
and system-specific Proclibs. The IEFPDSI DD statement defines the data set that
contains procedure source JCL for started tasks. Normally this data set is
SYS1.PROCLIB, but it can be any library in the concatenation. For useful work to be
performed, the data set must at least contain the procedure for the primary JES. For an
example, refer to Example 10-5 on page 161.
Another option is to use INCLUDE JCL statements. Example 10-9 contains an example.
Example 10-9 INCLUDE JCL
The following INCLUDE group is defined in member SYSOUT2 of private library
CAMPBELL.SYSOUT.JCL.
//* THIS INCLUDE GROUP IS CATALOGED AS...
//* CAMPBELL.SYSOUT.JCL(SYSOUT2)
//SYSOUT2  DD      SYSOUT=A
//OUT1     OUTPUT  DEST=POK,COPIES=3
//OUT2     OUTPUT  DEST=KINGSTON,COPIES=30
//OUT3     OUTPUT  DEST=MCL,COPIES=10
//* END OF INCLUDE GROUP...
//* CAMPBELL.SYSOUT.JCL(SYSOUT2)
The system executes the following program:
//TESTJOB JOB ...
//LIBSRCH JCLLIB ORDER=CAMPBELL.SYSOUT.JCL
//STEP1    EXEC    PGM=OUTRTN
//OUTPUT1  INCLUDE MEMBER=SYSOUT2
//STEP2    EXEC    PGM=IEFBR14
The JCLLIB statement specifies that the system is to search private library
CAMPBELL.SYSOUT.JCL for the INCLUDE group SYSOUT2 before it searches any system libraries.
After the system processes the INCLUDE statement, the JCL stream appears as:
//TESTJOB JOB ...
//LIBSRCH JCLLIB ORDER=CAMPBELL.SYSOUT.JCL
//STEP1    EXEC    PGM=OUTRTN
//* THIS INCLUDE GROUP IS CATALOGED AS...
//* CAMPBELL.SYSOUT.JCL(SYSOUT2)
//SYSOUT2  DD      SYSOUT=A
//OUT1     OUTPUT  DEST=POK,COPIES=3
//OUT2     OUTPUT  DEST=KINGSTON,COPIES=30
//OUT3     OUTPUT  DEST=MCL,COPIES=10
//* END OF INCLUDE GROUP...
//* CAMPBELL.SYSOUT.JCL(SYSOUT2)
//STEP2    EXEC    PGM=IEFBR14
Note: Both JCLLIB and INCLUDE are somewhat restrictive because you cannot use system
symbols in JCL.
The considerations for the various target environments are as follows:
- In a BronzePlex, you will only be sharing the sysplex CDSs, so there is no need to share
  the Proclib or TSO-related files.
- In a GoldPlex, you would probably share the Proclib.
  Sharing Proclibs and CLIST libraries is a little more complex. There are many system
  tasks that have their JCL in SYS1.PROCLIB, and must have a certain name. If the
  incoming system must have different JCL for some of those members, you may wish to
  retain separate Proclibs. Similarly, if you have similarly named TSO CLISTs on the
  different systems that do completely different things, you will probably keep them
  separate.
- In a PlatinumPlex, as mentioned before, you would share everything: Parmlib, Proclib,
  and TSO LOGON and CLIST libraries.
10.7.1 Tools
Table 10-6 on page 166 contains a list of tools that may help, the function of each tool, and
where you can get further information about the tool.
SYMUPDTE utility
  Function: Permits symbols to be changed without an IPL. Note: This is an unsupported utility.
  Reference: ftp://www.redbooks.ibm.com/redbooks/SG245451/
IEASYMCK
  Function: SYS1.SAMPLIB member.
  Reference: http://www.ibm.com/servers/eserver/zseries/pso/
10.7.2 Documentation
The following documentation may be helpful as you merge the entities discussed in this
chapter:
MVS System Commands, SA22-7627
z/OS DFSMS: Managing Catalogs, SC26-7409
z/OS DFSMS Migration, GC26-7398
z/OS MVS Initialization and Tuning Reference, SA22-7592
z/OS MVS Planning: Global Resource Serialization, SA22-7600
z/OS Planning for Installation, GA22-7504
11
Chapter 11.
VTAM considerations
This chapter discusses the following aspects of moving a system that is using VTAM into a
sysplex where one or more of the systems are also using VTAM:
Is it necessary to merge VTAM environments, or is it possible to have more than one
VTAM environment in a single sysplex?
A checklist of things to consider should you decide to merge the VTAMs.
A list of documentation that can assist with this exercise.
[Figure: mixed subarea and APPN configuration. SSCP1A and SSCP2A are in the subarea network, SSCP2A is the Interchange Node (ICN), and SSCP3A (NN) and SSCP4A (EN) are the APPN nodes inside the sysplex.]
In this configuration, Nodes 1A and 2A are subarea-capable, and Nodes 2A, 3A and 4A are
all APPN-capable. Node 2A acts as the Interchange Node (ICN) between the subarea and
APPN networks. Within the APPN network, only Nodes 3A and 4A (and other ENs like 4A)
are in the sysplex. Node 3A is the NN that is acting as the Network Node Server (NNS) for all
of the ENs in the sysplex. Also, assume that Nodes 3A and 4A (and the other ENs) are all
participating in the same GRplex and/or MNPSplex.
LUs which are owned by (and attached to) Node 1A (in the subarea network) can still
establish sessions to GR and/or MNPS APPLs in the sysplex. When one of these LUs
initiates a session to a GR-name or MNPS APPL, the session request is routed through
subarea to 2A, then into the APPN network using the ICN function of 2A. During the APPN
session setup process, the GR and/or MNPS functions are available to those LUs.
For GR, this means that when Node 3A receives the request, it will use the sysplex/XCF GR
information to choose a GR-instance on one of the ENs, and the session will be established
with that instance. If that instance should fail, and the LU attempts a new session to the
GR-name, Node 3A will resolve this new session request to another still-working instance of
that GR and a new session will be established.
For MNPS, this means that the session will be established to the MNPS APPL that was
requested. If HPR was used for the APPN part of the session path and the host (EN) that the
MNPS APPL is on fails, then the MNPS APPL can be restarted on one of the other ENs and
the existing session can be PATH SWITCHED to the new owning host.
To summarize, the session path does not have to be entirely APPN in order to be able to
establish sessions with GR or MNPS APPLs; only the sysplex portion of the session path
must be APPN.
Another option is to display who is connected to the ISTXCF XCF group, as shown in
Example 11-2 on page 171.
All VTAM systems in the sysplex that wish to use VTAM's sysplex features must use the
APPN protocols to communicate with each other; therefore, APPN must be implemented
within the sysplex in this case.
#@$1M$$$USIBMSC
#@$2M$$$USIBMSC
#@$3M$$$USIBMSC
This shows the names of the three VTAMs in the sysplex and their network names. As you
can see, #@$1M, the VTAM that is running on this image, is a member of the XCF group and
therefore VTAM is currently using XCF services.
Note: If you wish the TCP/IP stack on a system to be able to exchange packets with other
TCP/IP stacks on other systems in the sysplex via XCF links, VTAM must be connected to this
group.
If the incoming system is actually a set of systems that currently use dynamic XCF links to
communicate with each other, and your target environment is a BronzePlex (meaning that you
don't want all the VTAMs in the sysplex to communicate with each other), then you must
change the incoming systems to use some method other than dynamic XCF links (such as
statically-defined links) to communicate with each other.
If the incoming system is actually a set of systems that currently use VTAM GR, and the
existing systems in the target sysplex are also using VTAM GR, and your target environment
is a BronzePlex (meaning that you don't want all the VTAMs in the sysplex to communicate
with each other), then you have a problem. While it is technically possible to have two different
VTAM GRplexes in the sysplex, this is not recommended for production, meaning that one
of the VTAMplexes is going to have to discontinue its use of VTAM GR.
While this is a valid environment, you should be aware that if you do not want to connect the
VTAMs, then you must do one of the following:
Stop VTAM on the incoming system(s) from joining the ISTXCF group
Stop all VTAMs on the target sysplex systems from joining that group
Remember, as well, that by stopping a system from joining ISTXCF, you are limiting the
sysplex-related benefits that TCP on that system can use. This is discussed in more detail in
Chapter 12, TCP/IP considerations on page 175.
In a GoldPlex or a PlatinumPlex, it is more likely that you would want to provide the capability
for LUs on any system in the sysplex to be able to log on to VTAM applications on any system
in the sysplex. In this case, you probably would allow VTAM to connect to the ISTXCF group.
Be aware, however, that simply placing all the systems in the same VTAMplex will not, by
itself, provide you with improved application availability or workload balancing. Being in a
single VTAMplex provides the enabling infrastructure; however, you still have to design and
deploy your applications in a manner that allows you to exploit the capabilities.
Note
Type
B, G, P
B, G, P
G, P
Done?
The Type specified in Table 11-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 11-1 refer to the following points:
1. Neither VTAM nor TCP/IP currently support sub-clustering. That is, you cannot have all
VTAMs using dynamic XCF links, but say that systems A, B, and C can only communicate
with each other, and systems D and E can only communicate with each other.
If a VTAM connects to the ISTXCF group, you have no way of stopping it from
communicating with all the other VTAMs in the sysplex that are connected to the same
group. And because you have no control over the name of the group, all VTAMs that
connect to XCF will connect to the same group and therefore be able to communicate with
each other.
This is of particular note if the incoming system (which may actually comprise a sysplex
environment already) and the target sysplex network environments both already use
dynamic XCF links, as there is currently no way of letting this continue unless you are
willing to let all the VTAMs connect to each other.
2. By default, all the APPN-capable VTAMs in a sysplex will communicate with each other. If
you want to maintain separation between the incoming system(s) and the target sysplex
networks, then you must specify XCFINIT=NO for all the VTAMs in one of the subplexes.
For more information on how VTAM uses XCF, and how you can control this, refer to
11.1.4, How VTAM uses XCF on page 170.
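To keep one subplex's VTAMs out of the ISTXCF group, the start option could be coded in the ATCSTRxx member used by each of those VTAMs, along the following lines (shown only as a sketch; the rest of the start options are unchanged):
XCFINIT=NO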
An alternative would be to define connectivity using MPC+ via AHHC (available in CS/390
R3 and above), which has the benefits of better throughput and less CPU overhead.
However, VTAM definitions are required in order to set up and maintain VTAM-to-VTAM
connectivity via MPC+.
The reason you might do this is for granularity. With XCFINIT=YES, dynamic XCF links are
established to every other system in the sysplex that has also specified (or defaulted to)
XCFINIT=YES, regardless of NETID affiliation. With MPCs, you only get MPC links to the
nodes that you define them to.
Therefore, you could set up two subplexes in the same sysplex and allow each subplex to
create meshed MPCs between their own hosts without forcing them to also establish
links to the hosts that belong to the other subplex. However, note that this granularity is at
the host level, not at the APPL level. That is, you cannot use this capability to provide
access to just a subset of applications in a host: either all the applications on the host are
available to all connected systems, or none of them are.
3. In order for TCP/IP to be able to use the full range of its sysplex-specific functions, the
corresponding VTAMs must be connected to the ISTXCF group.
4. Although we typically recommend that all of the VTAMs in a sysplex share the same
network-id, this is not a requirement. Dynamic XCF links can be used between Network
Nodes (NNs) and End Nodes (ENs) that have different NETIDs. You can also use Dynamic
XCF links between NNs with different NETIDs (but with BN=YES specified to enable
Border Node). In this case, you can establish Dynamic XCF links between these EBNs.
However, VTAM MNPS and VTAM GR will not work unless all nodes that participate in the
function (such as application owners or, for GR, Resource Selector Nodes) have the
same NETID.
11.4 Documentation
The following documents contain information that may be useful to you when deciding how to
handle your VTAMplexes:
SNA in a Parallel Sysplex Environment, SG24-2113
TCP/IP in a Sysplex, SG24-5235
z/OS Communications Server: SNA Network Implementation Guide, SC31-8777
12
Chapter 12.
TCP/IP considerations
This chapter discusses the following aspects of moving a system that is using TCP/IP into a
sysplex where one or more of the systems are also using TCP/IP:
Is it necessary to merge TCP/IP environments, or is it possible to have more than one
TCP/IP environment in a single sysplex?
A checklist of things to consider should you decide to merge the TCPs.
A list of documentation that can assist with this exercise.
So for example, if the sysplex consists of systems SYSA, SYSB, SYSC, and SYSD, and
you want to use Dynamic XCF to provide TCP/IP communication between SYSA and
SYSB, and between SYSC and SYSD, but you don't want SYSA or SYSB to be able to
communicate with SYSC or SYSD, that is not possible.
Because specifying DYNAMICXCF does make a material difference to users on systems
that use this facility, this is the definition of TCPplex that we will use in the remainder of this
chapter.
When operating in a sysplex environment, TCP/IP can provide the ability for a collection of
z/OS systems to cooperate with each other to provide a single-system image for clustered IP
server applications. The location of the IP server applications within the TCPplex will be
transparent to the end user. To them, it appears there is a single instance of the application.
They are unaware that many instances of the same application are running on multiple
systems in the sysplex. This single-system image allows you to dynamically balance
workload across the sysplex and minimize the effects of hardware, software, and application
failures on your end users.
If you fully exploit the functionality of a TCPplex, you will be able to provide your clustered IP
server applications with:
A single IP address for connections to applications in the sysplex cluster
Workload balancing across multiple systems in the sysplex cluster
Improved availability
Improved scalability
In this case, there should be no changes to TCP/IP in either environment (because, from a
TCP/IP perspective, nothing will have changed as a result of the merge).
Note: This assumes that the incoming system does not already use Dynamic XCF.
If you are implementing a PlatinumPlex, and will be exploiting the TCP/IP sysplex features as
described above, then you must have a TCP/IP connection between the incoming system and
the target sysplex. The normal way to do this would be to specify DYNAMICXCF on every
system. If you want to exploit features like Sysplex Distributor, you must specify
DYNAMICXCF on all systems that will be in the TCPplex. If TCP/IP on the target sysplex and
the incoming system will be connected, then you must review the TCP/IP network
definitions; this is discussed in greater detail in 12.2.2, Routing in a sysplex on page 179.
If you are implementing a GoldPlex, the TCPplex considerations depend on whether you wish
to connect the TCP/IP stacks on the incoming system and target sysplex systems:
If you do not wish to make this connection, then the considerations are exactly the same
as for a BronzePlex.
If you do wish to make this connection, then the considerations are identical to those for
implementing a PlatinumPlex.
In summary, from a TCP/IP perspective, there are really only two target environments: one
where all the TCPs are connected and using Dynamic XCF, and one where the incoming
system TCP/IP is not connected to the target sysplex TCPs; note the following:
If all the TCPs are connected and using Dynamic XCF, the incoming system can exploit all
the sysplex TCP/IP capabilities.
If all the TCPs are not connected, then either the target sysplex or the incoming system
can use all the TCP/IP sysplex features, but not both of them.
12.2.1 DYNAMICXCF
When you specify DYNAMICXCF, TCP/IP uses MVS XCF messaging to dynamically discover
the other TCP/IP stacks in the sysplex. It will then automatically define the XCF links that are
used to communicate between the TCP/IP stacks in the sysplex. This feature simplifies
configuring the communications links between TCP/IP stacks in the sysplex, because it
removes the need to define the links manually.
These links are used to direct IP packets between the TCP/IP stacks, and to exchange
information about the IP addresses that are supported by each TCP/IP stack. Information is
exchanged whenever:
A new TCP/IP stack is added.
A TCP/IP stack is halted or fails.
An IP address is added or deleted.
This information exchange ensures that each TCP/IP stack in the sysplex is aware of all the
active IP addresses in stacks in the sysplex. As changes occur to the IP configuration, this
dynamic messaging maintains the currency of the sysplex IP address table in each TCP/IP
stack. This information exchange contributes to the TCPplex being able to provide a
single-system image for clustered IP applications.
In a TCPplex, each TCP/IP stack knows about all the active IP addresses in the sysplex. This
raises the following issues when merging the network of the incoming system into the target
sysplex:
Differences in the IP network addressing scheme used in both the incoming system and
the target sysplex must be resolved. If creating a PlatinumPlex, it is probable that the IP
addressing of the incoming system will have to be changed, with associated changes for
the DNS or client configurations.
User applications on the incoming and target systems may have been written to use the
same port numbers, which may restrict use of Dynamic VIPAs in the sysplex prior to the
installation of z/OS 1.4.
All the active IP addresses are known to all the systems in the TCPplex. This may not be a
desirable situation.
For example, you may be an outsourcer, where you have two competing customers
sharing a sysplex, but you don't want them to see each other's IP applications. You may
need to provide some form of network isolation, in which case you will probably not use
Dynamic XCF and not connect the TCPs.
Dynamic XCF uses an XCF group called ISTXCF for communication between TCP/IP stacks.
This group is also used by VTAM for dynamically defining VTAM-to-VTAM connections. The
group name is fixed at ISTXCF and cannot be customized. This is the reason there cannot be
more than one TCPplex in a sysplex: every TCP/IP stack that uses Dynamic XCF connects
to the same group and can therefore communicate with every other TCP/IP stack that is also
using Dynamic XCF.
Dynamic XCF is implemented by specifying the DYNAMICXCF statement in the IPCONFIG.
This automatic definition of IP links removes the need to manually define or update links
whenever a new TCP/IP stack is added to the TCPplex.
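A minimal PROFILE.TCPIP sketch of this statement is shown below; the IP address, subnet mask, and cost metric are placeholders that must match your own XCF addressing scheme:
IPCONFIG DYNAMICXCF 10.1.1.1 255.255.255.0 1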
The pre-requisites for Dynamic XCF are:
There must be a common IP addressing scheme across all TCP/IP stacks that will use this
support.
Dynamic definition of VTAM-to-VTAM connections must be enabled:
VTAM must be executing in an APPN environment.
The APPN node must support HPR using RTP.
As long as these conditions are met, VTAM will connect to XCF by default.
It is well beyond the scope of this chapter to give an overview of TCP/IP routing, but it is
important to discuss briefly the differences between static routing and dynamic routing:
With static routing, paths to different networks are hardcoded in tables held on each
TCP/IP host. Static routing is sufficient for small stable networks, but as network size and
complexity increases, it becomes impossible to administer the required entries across all
routers in the network.
Note: Static routing should not be used if you wish to take advantage of any of the advanced
TCP/IP availability functions discussed in this chapter.
With dynamic routing, paths to different networks are learned and updated dynamically by
routers on the network. While requiring greater effort to implement initially, dynamic
routing removes administrative overhead and allows much better recovery from hardware
failures provided there is redundancy in the network design. Most of the functions we will
discuss which provide improved availability or workload balancing in a sysplex rely on a
dynamic routing protocol being enabled in the sysplex and attached networks.
There are two dynamic routing protocols that are likely to be used within organizations:
RIP and OSPF.
RIP, or Routing Information Protocol, is a long-established internal gateway protocol
(IGP) designed to manage relatively small networks. It has many limitations, some of
which are removed by a later version of RIP called RIP Version 2 (often referred to as
RIP 2 or RIP V2).
RIP is the most widely supported protocol because it has been around the longest.
Communications Server for z/OS supports RIP V2, and it can be implemented using
the OMPROUTE server.
OSPF, or Open Shortest Path First, is a newer IGP that was designed for large networks.
It removes many of the limitations found in RIP and tends to be the routing protocol of
choice within large organizations. Among the benefits of OSPF is the improved time to
route around network failures, and the reduced network traffic required to maintain
routing tables when compared to RIP.
OSPF is not as widely supported as RIP, but this is changing rapidly. The current
version of OSPF (V2) is described in RFC 2328 and is implemented by
Communications Server for z/OS by the OMPROUTE server.
When merging TCP/IP networks, a choice has to be made as to which routing protocols to use.
This decision cannot be taken by the z/OS network system programmers in isolation. The
routing protocols implemented by Communications Server in the sysplex must match those
used by the rest of the TCP/IP network to which the sysplex is attached. If TCP/IP networks
are being merged, then there will usually have to be a TCP/IP network redesign and one
routing protocol will be chosen, probably OSPF.
Important: It is possible to run both RIP and OSPF at the same time, but Communications
Server will always prefer the OSPF route to the RIP route to the same destination.
When merging networks, it may be required to run OSPF over some network interfaces
and RIP over others.
For a full discussion of TCP/IP routing, refer to the IBM Redbooks TCP/IP Tutorial and
Technical Overview, GG24-3376, and TCP/IP in a Sysplex, SG24-5235.
stack and new connections are directed to the original stack. This provides the least
disruption to clients. This behavior will vary according to how the Application DVIPA was
activated. For more information, refer to the IBM Redbook TCP/IP in a Sysplex,
SG24-5235.
System-Managed DVIPAs can be defined as MOVEABLE IMMEDIATE or MOVEABLE
WHENIDLE. If defined as MOVEABLE IMMEDIATE (introduced with OS/390 2.10), then
the DVIPAs owned by a failing stack will be taken back immediately by the original stack
when it is restarted, even if there are existing connections to the backup stack. Existing
connections will be automatically routed to the backup stack, so this process is
non-disruptive.
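As a rough sketch (the mask and address are placeholders), such a DVIPA is coded in the owning stack's profile within a VIPADYNAMIC block, with the backup stacks coding a matching VIPABACKUP statement:
VIPADYNAMIC
  VIPADEFINE MOVEABLE IMMEDIATE 255.255.255.0 10.1.2.1
ENDVIPADYNAMIC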
In summary, Dynamic VIPAs, in particular System-Managed DVIPAs, are only likely to be
useful as a part of a PlatinumPlex where there is full data sharing and where applications can
be run on any system in the sysplex. Dynamic VIPAs do not provide any workload balancing
function; they are specifically a high-availability feature.
If the incoming system is going to be part of this domain and use the DNS/WLM to provide
workload balancing, do the following:
Update the forward domain file with hostnames-to-IP addresses mapping.
Update the reverse domain or in-addr.arpa file with IP addresses-to-hostnames mapping.
Note that there may be other config files to update, too!
DNS/WLM is an availability and load balancing tool. This implies a number of cloned
applications sharing data executing across multiple servers. This may not apply to the
incoming system, particularly in a BronzePlex or a GoldPlex, as there is no data sharing.
Note
Type
B, G, P
B, G, P
B, G, P
G, P
G, P
G, P
G, P
G, P
Done?
G, P
Done?
The Type specified in Table 12-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
The notes indicated in Table 12-1 refer to the following points:
1. All TCP/IP stacks in the sysplex will be aware of each other's existence, even if you don't
use Dynamic XCF. However, IP packets cannot be transferred between stacks unless you
provide connectivity via static connections or Dynamic XCF.
Note: Having all the TCP/IPs being aware of each other should not normally cause any
problems or concerns; however, some installations may not like the IP addresses being
known across the systems.
2. If TCP/IP is using XCF to transfer packets between stacks in the sysplex, regardless of
whether Dynamic XCF or static XCF links are being used, TCP/IP uses the XCF group
ISTXCF for communication within the sysplex; this name is fixed and cannot be changed.
You might not want a single TCPplex because it is not possible to separate one IP network
from another. All the IP addresses in the TCPplex will be available to all the systems in the
TCPplex. This may not be a desirable situation.
A single TCPplex demands unique IP addresses across the entire TCPplex.
3. You should ensure that your clients are using a DNS to access their IP server applications,
rather than explicit IP addresses. The use of DNS makes it easier to change IP addresses,
should this be required.
4. IP addresses must be unique when they reach the server. Duplicate IP addresses could
occur when merging intranets as a result of using private IP addressing schemes. A
solution would be to re-number the addresses, or deploy a NAT function between the
clients and servers.
5. A server application will usually listen on a specific port. Most servers have agreed or
default port numbers (for example, 23 for TN3270 or 446 for DB2). A client application
connects to a socket, which is identified by a combination of the IP address and port
number.
It is acceptable for duplicate port numbers to exist on different TCP/IP stacks, but a
problem will arise if it is required to run more than one application that uses the same port
connected to the same stack. For example, if a decision is made to run two DB2s on one
system, then they cannot both listen on port 446 if they bind to the same IP address.
Application DVIPAs may provide a workaround in this situation, if port numbers cannot be
changed. Sysplex Distributor, which presents one sysplex IP address to client
applications, also requires unique port numbers for different applications.
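For example (the jobnames shown are only placeholders), the PORT statement in each stack's profile reserves a port for a particular server address space, and this is where duplicate port assignments become visible:
PORT
  23  TCP TN3270A
  446 TCP DB2ADIST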
6. You can run both RIP and OSPF, but this makes administration confusing and is not
recommended.
7. You could use IP filtering rules to prevent users from one stack accessing server
applications that belong to another stack.
8. TCP/IP potentially uses two XCF groups. TCP/IP will automatically connect to the
EZBTCPCS group for discovering other stacks in the sysplex, and subsequently for
exchanging IP addresses and Dynamic VIPA configuration information. The VTAM
interface to XCF does not have to be active for this.
However, if you wish to use XCF for exchanging IP packets between stacks (whether you
are using Dynamic or static XCF connections), VTAM must be started and must be set up
to start its interface to XCF. This capability requires that VTAM is running in APPN mode.
9. The combination of system symbols and dynamic discovery of TCP/IP stacks makes it
easier to add more systems to the sysplex.
13
Chapter 13.
RACF considerations
This chapter discusses the following aspects of moving a system that is using its own RACF
database into a sysplex that is currently using a shared RACF database:
Is it necessary to merge RACF databases, or is it possible to have more than one
RACFplex in a single sysplex?
A checklist of things to consider should you decide to merge the databases.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Note: This chapter does not provide a step-by-step guide to merging two RACFplexes.
This is a complex subject, and the steps and considerations are different in every
installation.
However, we do provide a list of considerations to keep in mind when deciding whether to
merge RACFplexes, as well as hints and tips based on actual experiences. We strongly
recommend working with someone that has worked on at least two such merges in the
past should you decide to proceed with the merge.
RVARY SWITCH
RVARY ACTIVE
RVARY INACTIVE
RVARY DATASHARE
RVARY NODATASHARE
SETROPTS RACLIST (classname)
SETROPTS RACLIST (classname) REFRESH
SETROPTS NORACLIST (classname)
SETROPTS GLOBAL (classname)
SETROPTS GLOBAL (classname) REFRESH
SETROPTS GENERIC (classname) REFRESH
SETROPTS WHEN(PROGRAM)
SETROPTS WHEN(PROGRAM) REFRESH
Another benefit of having all systems in the same RACFplex is that all systems can benefit
from placing RACF buffers in the RACF structures in the CF. There can be only one set of
RACF structures in a sysplex, and all systems using those structures must be sharing the
same RACF database. Using the RACF structures in the CF can provide significant
performance benefits for a number of reasons:
RACF now has a far larger number of buffers accessible with very fast access times. While
access to the buffers in the CF is not as fast as the buffers in MVS storage, it is still many
times faster than reading from DASD.
When a buffer gets updated, only that buffer gets invalidated on any other systems that
also have a copy of the buffer. Previously, when a buffer was updated, other systems would
invalidate all their in-storage buffers.
If RACF use of the structures in the CF has been enabled, RACF will use a completely
different serialization design, which uses the CF data to eliminate the need for
cross-system serialization for most accesses to the database. This provides a further
performance benefit.
There is also a security benefit from having just one RACFplex per sysplex. Assuming that all
the resources in the sysplex (programs, DASD data sets, tapes, and so on), are potentially
accessible from any system in the sysplex, having a single RACFplex ensures that the
resources are protected in the same way, regardless of from which system they are
accessed. If there is more than one RACFplex (and therefore, more than one set of RACF
databases), it is possible, even likely, that some resources will be protected by different RACF
profiles, with different access lists.
A different aspect of this problem is that the sysplex architecture encourages the design and
implementation of applications that have multiple instances running on separate systems for
availability and scalability. However, when you have multiple RACFplexes in the sysplex:
You limit your flexibility in deploying those multiple instances, since any that share a
workload must run within the same RACFplex.
You therefore end up (potentially) needing to deploy separate clusters of the same
application to handle the different security scopes.
You end up increasing the chance that you will deploy the applications incorrectly, such
that an application running in RACFplex#1 will route work incorrectly, sending it to another
instance running in a different RACFplex, say RACFplex#2. In that case, several results
may occur:
The work element may fail because the user is not defined there, or it may fail because
the user is defined but the security definitions are different, or
It may not fail because, although the user is defined, the security definitions are
incorrect, or
It may not fail because a different user with the same ID is defined there, and that
userid (different person, remember) legitimately has the authority to perform that work,
even though the original user would not have that authority if the work ran in the proper
RACFplex.
Note that one such application is system console support. In a sysplex, it is common for an
operator on one system to route commands to another system for execution. If the user
definitions or security rules are not common between the systems, such commands may
either fail unexpectedly or work (when they should have failed).
Despite all of this, it is possible to have more than one RACFplex in a sysplex, should you
decide to do so. The main benefit of such an approach is that you avoid the one-time cost of
having to merge the RACF databases.
If your target environment is a BronzePlex, it is not necessary to merge the RACFplexes. Only
the minimum set of shared resources is available to both RACFplexes. Relatively few RACF
profiles would need to be maintained to provide a consistent security environment, for those
shared resources, across the two RACFplexes.
Similarly, in a GoldPlex, where user DASD is not shared, there is still not a lot of benefit in
merging the two RACFplexes. However, if you are moving towards a PlatinumPlex, then the
merging of RACFplexes will become an issue. Merging RACFplexes can often take a long
time and should be completed before large scale DASD sharing is attempted. So it may be
worthwhile starting the RACFplex merge project at this stage.
In the case of a PlatinumPlex, it makes little sense to maintain more than one RACFplex. In
this environment most, or all, of the sysplex DASD will be online to all systems. There will
probably be a unified catalog structure, and thus single SMS and HSMplexes. Trying to
maintain two security environments, and yet ensure that all resources are protected in the
same manner is, at best, problematic, and at worst, a potential security exposure. For a
PlatinumPlex, it is strongly recommended to merge into a single RACFplex.
Note
Type
Done?
P
1
P
P
Check that the RACF database is large enough to contain all profiles.
Replace the RACF started task table (ICHRIN03) with STARTED class
profiles.
The RACF database templates must match the highest level of RACF.
The Type specified in Table 13-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. Notes indicated in
Table 13-1 are described below:
1. All systems must have access to the CFs containing the RACF structures if you plan on
placing the RACF buffers in CF structures. This should not be an issue as in a
PlatinumPlex, all systems in the sysplex would normally have access to all CFs anyway.
2. Review the size of the RACF database to be used for the single RACFplex. You can
determine how much free space there is in your databases using the RACF database
verification utility program (IRRUT200). If there is insufficient free space within the RACF
database, then the database can be extended using the Split/Merge/Extend utility program
(IRRUT400).
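A skeleton IRRUT200 job for this check might look like the following sketch; the database name is an example only, and only a basic MAP request is shown:
//VERIFY   EXEC PGM=IRRUT200
//SYSRACF  DD   DISP=SHR,DSN=SYS1.RACFPRIM
//SYSUT1   DD   UNIT=SYSALLDA,SPACE=(CYL,(5,1))
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
 MAP
 END
/*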
3. If RACF sysplex communication is enabled, when RACF is being initialized, it will compare
its data set name table with the data set name table in use on other systems in the
sysplex. If any values other than the number of resident data blocks are different, RACF
will override those values with the values already in use on the other systems in the
sysplex. RACF does not check to ensure that the class descriptor table or the router table
are the same on every system, however, for operational simplicity, we strongly recommend
using the same version of these three tables on all systems in the sysplex.
4. Though not a requirement, you should replace the ICHRIN03 table with STARTED class
profiles. This removes the need to synchronize and maintain ICHRIN03, and the STARTED
class profiles can be updated without an IPL.
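For instance, the equivalent STARTED class definitions are created and activated with RACF commands along these lines (the started task name JES2 and group STCGROUP are only examples):
RDEFINE STARTED JES2.* STDATA(USER(JES2) GROUP(STCGROUP) TRUSTED(YES))
SETROPTS CLASSACT(STARTED) RACLIST(STARTED)
SETROPTS RACLIST(STARTED) REFRESH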
5. The same RACF classes should be enabled on all systems in the RACFplex, and the
RACF options and exits should be identical on all systems. This should be done to ensure
resources on all sharing systems are protected in the same manner. To check the options,
issue a SETR LIST command on both the target sysplex and the incoming
system; before you move the incoming system into the target sysplex, the response to
this command should be identical on all systems.
After you have synchronized the options, check that the RACF Global Access Table is the
same on all systems. In particular, you need to be sure Enhanced Generic Naming (EGN)
is either enabled on all systems or disabled on all systems before you check the Global
Access Table.
6. The RACF database template level must be equal to, or higher than, the highest RACF
software level sharing the database. The IRRMIN00 utility is used to update the database
templates when a new release or service level is available. The templates are downwardly
compatible, meaning that you can run old levels of RACF with a database that has been
formatted for a newer release.
7. Security categories, security levels, and security labels require special handling. This is
discussed in 13.3.3, Synchronize RACF options on page 198.
manually. This tends to be an iterative process. With large RACF databases, the
discrepancies may be many and take considerable time to resolve. There is an IBM-supplied
tool, DBSYNC, that can help you synchronize profiles; this tool is discussed in more detail in
13.3.7, RACF database merge on page 199.
Because of the gradual, iterative, approach to synchronizing the two databases, fallback from
any change along the way should be quick and simple.
There is no single utility to do the bulk of the work involved. Rather, there are a few utilities
which can help, and these will need to be supplemented by additional user-developed utilities.
As each installation is unique, different issues will need to be addressed and various utilities
developed to aid the merge process.
Important: Do not use the RACF split/merge utility (IRRUT400) to merge two RACF
databases. It is not a merge utility for disparate databases. It is used only to merge
databases that have previously been split by the same utility under the control of a range
table (for example, recombining a RACF database that has previously been split into key
ranges).
Performing a RACF merge requires access to RACF data in a form that is easy to analyze
and process. This can be done using an IBM-provided utility such as the RACF database
unload utility, IRRDBU00. Data unloaded by IRRDBU00 can be loaded into DB2 tables for
easier analysis; this process is described in z/OS Security Server RACF Security
Administrator's Guide, SA22-7683.
The RACF command set is rich enough to allow the manipulation of most of the RACF profile
fields. Together with the RACF database unload utility, these two features of RACF allow the
speedy development of home grown utilities to automate large parts of the merge process.
What follows is a step-by-step approach to merging two RACF databases detailing the
obstacles that need to be overcome to achieve a single RACF database.
Tip: You cannot necessarily deactivate a class just because there are no profiles defined in
that class. You must decide on a class-by-class basis whether to leave it active, as some
classes act as switches rather than as containers for profiles. It may be safe to deactivate
any class that doesn't have profiles and that is not listed as PROFDEF=NO in the customer
or IBM CDT.
In SYS1.SAMPLIB, member IRRICE contains some very useful sample utilities that can be
used to locate problem areas. One sample report, for example, will identify those high-level
qualifiers that have more than a specified number of generic data set profiles defined. These
can serve as excellent templates to develop your own reports.
For example, the sample job in Example 13-2 uses ICETOOL and SORT to list all users by
their last access date. Inactive userids will have old last access dates, or, if the userid has
never been used, no last access date.
Example 13-2 ICETOOL sample to list all users by last access date
//UNLOAD   EXEC PGM=IRRDBU00,PARM=NOLOCKINPUT
//SYSPRINT DD   SYSOUT=*
//INDD1    DD   DISP=SHR,DSN=SYS1.RACFPRIM
//OUTDD    DD   DISP=(,CATLG),DSN=&UNL,SPACE=(CYL,(5,5),RLSE),
//              LRECL=4096,BLKSIZE=0,RECFM=VB
//EXTRACT  EXEC PGM=SORT
//SORTIN   DD   DISP=SHR,DSN=&UNL
//SYSOUT   DD   SYSOUT=*
//SORTOUT  DD   DSN=&TEMP,DISP=(NEW,PASS),
//              SPACE=(CYL,(20,5,0)),UNIT=SYSALLDA
//SYSIN    DD   *
  SORT FIELDS=(118,10,CH,A,109,10,CH,A)
  INCLUDE COND=(5,4,CH,EQ,C'0200')
  OPTION VLSHRT
/*
//REPORT   EXEC PGM=ICETOOL
//TOOLMSG  DD   SYSOUT=*
//PRINT    DD   SYSOUT=*
//DFSMSG   DD   SYSOUT=*
//DBUDATA  DD   DISP=(OLD,DELETE),DSN=&TEMP
//TOOLIN   DD   *
  DISPLAY FROM(DBUDATA) LIST(PRINT) PAGE TITLE('USER LAST ACCESS REPORT') -
    DATE(YMD/) TIME(12:) BLANK -
    ON(10,8,CH) HEADER('USER ID') ON(30,8,CH) HEADER('OWNER') -
    ON(19,10,CH) HEADER('CREATE DATE') ON(118,10,CH) HEADER('LAST USED DATE') -
    ON(109,8,CH) HEADER('LAST USED TIME')
/*
The JCL contained in Example 13-2 on page 196 will produce the report shown in
Example 13-3.
Example 13-3 Report created from ICETOOL example above
USER LAST ACCESS REPORT   02/04/23   07:00:26 pm

USER ID    CREATE DATE   LAST USED DATE   LAST USED TIME
MSO$1SIR   2001-09-19    2001-09-24       21:46:47
MS2$1PPT   2001-10-10    2001-10-10       11:49:23
MS2$1SIR   2001-10-10    2001-10-10       11:49:28
WATS       2001-09-26    2001-11-05       01:52:08
RAFTEN     2001-03-27    2001-11-29       09:48:55
MAIDA      2000-11-15    2001-12-05       15:06:48
HSHA       2000-08-16    2001-12-10       15:26:32
MS2$2PPT   2001-10-10    2002-01-10       15:34:27
MS2$2SIR   2001-10-10    2002-01-10       15:34:32
VDPUTTE    2001-02-15    2002-01-17       09:20:00
SHEEHAN    2001-02-15    2002-01-17       09:56:17
I#$#IRLM   2000-12-05    2002-02-11       11:11:13
LAWSOD     2000-08-16    2002-02-11       11:44:17
#@OPR1     2001-01-04    2002-02-13       16:31:12
#@OPR10    2001-01-04    2002-02-13       16:31:13
logical duplicates, that is, that there are not two entries specifying SYS1.** and SYS1.*.**.
If there are, remove one of the entries. You also need to check to ensure that the same
access is being granted. For example, if one system has A.*/READ and the other has
A.*/UPDATE, then you need to decide which profile you wish to use in the merged RACFplex.
A useful tool for merging RACF databases is the DBSYNC tool. DBSYNC and other RACF
tools can be downloaded from the RACF Web site at:
http://www.ibm.com/servers/eserver/zseries/zos/racf/goodies.html
DBSYNC is a REXX exec that compares two RACF databases that have been unloaded with
the RACF database unload utility (IRRDBU00). DBSYNC generates the RACF commands
required to convert database "a" into database "b" and vice versa. Thus, if a profile exists in
database "a" and not in database "b", DBSYNC will generate the RACF commands to delete
the profile from database "a" and add the profile to database "b". Any profiles found in both
RACF databases are matched, and any discrepancies reported.
Though not specifically designed to merge RACF databases, DBSYNC is very useful in
generating the RACF commands to bring two RACF databases into line. This can be done by
selectively using pieces of the command files produced, to add profiles from one RACF
database to the other. Being written in REXX, it is easy to modify and debug, should the need
arise, to meet the needs of the specific merge project.
Important: DBSYNC was not developed as a tool to synchronize two RACF
databases; rather, it helps you make one database the same as the other. In the example
above, DBSYNC generates two sets of commands: one set will make database "a" the
same as database "b", adding anything that is in "b" and not in "a" and deleting anything
that is in "a" that is not in "b". The other set does the reverse; this is why you have to
selectively use just parts of the output from DBSYNC.
The general approach is to run DBSYNC repeatedly over time, making incremental changes
to the RACF databases between each run until all the differences have been resolved. This
may take many weeks or months, and of course, as the RACF database is a moving target,
profiles are being added and deleted all the time.
Note: The RACF remote support facility (RRSF) can be used during the synchronization
process to propagate updates to RACF databases. However, the z/OS Security Server
RACF Security Administrator's Guide, SA22-7683, under the heading "Synchronizing
Database Profiles", states: "You can use automatic direction to maintain synchronization
of RACF database profiles that are already synchronized, but you must synchronize the
profiles before you activate RRSF functions."
Thus RRSF can be used to maintain synchronization of already synchronized profiles, but
will not actively synchronize them.
Once the differences have been resolved, an outage is required to IPL the incoming system to
get it to use the target sysplex RACF database. During this outage a final run of DBSYNC
should be performed and any remaining differences corrected. There are some other tasks
required during this outage; we will discuss those later in this chapter.
The first step is to run DBSYNC to generate the list of duplicate, but conflicting, profiles.
These profiles will need to be examined and a decision made as to how to resolve the conflict.
As the conflicts are being resolved, DBSYNC can be rerun to verify that all is going to plan.
Note: If either or both RACF databases contain profiles in the SECDATA class, the
database unloads must be pre-processed with the DBSECLV exec, before using the
DBSYNC exec.
While resolving conflicting profiles, you can use the commands generated by DBSYNC to
merge the two databases. RACF profiles often point to other profiles. For example, a user
profile can contain references to RACF groups. These groups must exist when the ADDUSER
command is executed, otherwise the command will fail. DBSYNC takes this into account
when generating its command files. Given the large number of RACF commands that will
need to be executed, it is wise to break the task up into manageable pieces. This reduces the
complexity of the task and the amount of change made at any one time.
While DBSYNC is used to generate the RACF commands, you can also use small REXX
execs to identify which bits of the DBSYNC command files you should use. For example,
before you add the RACF group profiles, compare the two RACF database unload files and
list all the RACF groups in one file but not the other. This provides a count of how many
ADDGROUP commands you will need to extract and gives you their group names.
The following steps can be used to merge two RACF databases:
Group profiles
Group profiles can be added first, as they have the fewest dependencies. Although each group profile references a superior group, which may not yet exist in the target database, DBSYNC creates a dummy group and uses it as the superior group on the ADDGROUP commands. After all the ADDGROUPs have been generated, DBSYNC generates ALTGROUP commands to set the correct superior group. Once these commands have been extracted and run, DBSYNC can be run again. There shouldn't be any ADDGROUP or ALTGROUP commands generated in the second set of DBSYNC command files.
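The commands generated at this stage look something like the following; the group names here are invented for illustration, and the actual keywords generated depend on the contents of each group profile:
  ADDGROUP NEWGRP1 SUPGROUP(DUMMYGRP)
  ADDGROUP NEWGRP2 SUPGROUP(DUMMYGRP)
  ALTGROUP NEWGRP1 SUPGROUP(PRODGRP)
  ALTGROUP NEWGRP2 SUPGROUP(NEWGRP1)
Extracting just the ADDGROUP and ALTGROUP commands, running them, and then rerunning DBSYNC confirms that the group structures now match.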
User profiles
The same procedure can be done for user profiles.
When DBSYNC compares user profiles, it ignores the password fields. The ADDUSER commands generated do not specify a password, so the password defaults to the user's default group name and is marked as expired. We will correct the passwords in 13.3.9, "Password synchronization" on page 207.
Connect profiles
Now that the user and group profiles have been processed, group connection profiles can be
attempted. There may be data set or general resource profiles that contain multiple groups and access levels in their access lists. Connecting a user who is already a member of one of these groups to another of these groups may increase their access to resources covered by such a profile.
For example, consider a data set profile of SYS1.** with an access list containing GROUP1
with UPDATE access and GROUP2 with READ access. If a user is already connected to
group GROUP2 and as part of the merge we were to connect him to GROUP1, his access
would be upgraded from READ to UPDATE. When a user is connected to multiple groups, the highest access is assigned. This may not be a problem, but it could give some users unintended access to resources.
The sample REXX exec shown in Example 13-4 will list all data set profiles that contain more
than one group with differing levels of access. This could be used to provide a basis for
ensuring that no unwanted access changes occur while adding connect profiles.
Example 13-4 Sample REXX to list data set profiles with a mix of access levels
/* REXX */
/**********************************************************************/
/* List any data set profile that contains an access list with        */
/* multiple groups and access levels.                                  */
/**********************************************************************/
"alloc dd(indd) da(racf.unl13) shr reuse"  /* alloc IRRDBU00 unload DS */
eof = 0                                    /* clear end-of-file flag   */
access_list_count = 0                      /* length of access list cntr */
space = copies(' ',44)                     /* indent for continuation lines */
lastprof = ' '
group_count = 0
say left('Profile',44) 'Group    Access'
do until eof                               /* loop thru IRRDBU00 unload ds */
  "execio 100 diskr indd (stem indd."      /* get some records         */
  if rc = 2 then eof = 1                   /* if EOF then set EOF flag */
  do i = 1 to indd.0                       /* loop thru records read   */
    rtype = substr(indd.i,1,4)             /* get record type          */
    if rtype = '0100' then                 /* RACF group record ?      */
      call group                           /* yes, then remember it    */
    if rtype = '0404' then                 /* data set access list ?   */
      do
        prof = substr(indd.i,6,44)         /* get data set profile     */
        if lastprof = prof then            /* same dsname as last prof?*/
          call stack                       /* yes, then remember it    */
        else
          call print                       /* no, print previous prof  */
      end
  end
end
return
/**********************************************************************/
/* List a profile if it has a mix of groups and accesses              */
/**********************************************************************/
print:
if access_list_count > 1 then
  do
    access_mix = 0
    do j = 2 to access_list_count
      if acc.j \= acc.1 then access_mix = 1  /* differing access found */
    end
    if access_mix then
      do
        say lastprof grp.1 acc.1
        do j = 2 to access_list_count
          say space grp.j acc.j
        end
        say space
      end
  end
lastprof = prof
access_list_count = 0
call stack
return
/**********************************************************************/
/* Build an array of all groups in access list for a profile          */
/**********************************************************************/
stack:
this_grp = substr(indd.i,58,8)
do k = 1 to group_count
  if this_grp = group_name.k then
    do
      access_list_count = access_list_count + 1
      grp.access_list_count = this_grp
      acc.access_list_count = substr(indd.i,67,8)
    end
end
return
/**********************************************************************/
/* Build an array of all RACF groups                                   */
/**********************************************************************/
group:
group_count = group_count + 1
group_name.group_count = substr(indd.i,6,8)
return
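The exec allocates its own input (the IRRDBU00 unload, allocated to DD INDD as racf.unl13 in the sample). Assuming it has been saved as member LISTMIX in a REXX library, both names being placeholders, it could be run in batch under TSO along these lines:
//LISTMIX  EXEC PGM=IKJEFT01
//SYSEXEC  DD  DISP=SHR,DSN=YOUR.REXX.EXEC
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 %LISTMIX
/*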
A field indicating the first volume serial of the data set for a discrete profile, or blanks if
the profile was generic.
Fields indicating the UACC, WARNING, and ERASE attribute(s) of the profile in each
database
Note: The internal-format profile names were used so that profiles not necessarily
existing in the same RACF database could be treated as if they all existed in a single
database. Once profiles from both databases were sorted in ascending order of
internal-format profile name, a program could simulate the results of a RACF search
for the best-matching profile in the merged database before the actual merge.
This logic was only used for profiles in the DATASET class, since DATASET class
profile names cannot contain RACF variables. The rules for RACF variables and
resource grouping classes add complexity to RACF processing, prohibiting all but
the most determined attempts to simulate RACF's search for the best-matching
profile in general resource classes.
If you have access to a sandbox (test) RACF database, in which all profiles can be
defined prior to the production merge, then there is no need to simulate RACF's
search for the best-matching profile. In the project described here, however, a
sandbox RACF was not used because the risk assessment required access to the
ICF catalogs of the production systems, and no existing sandbox system had such
access. RACF uses an internal name format to simplify processing of generic and wildcard characters in profiles. The internal format of RACF profile names is partially described in APAR OY31113.
There is one important difference between the internal format of DATASET class profile names and all other general resource profile names: it is related to the fact
that, for data sets, an ending single asterisk is more specific than internal double
asterisks, while for general resource profiles, the opposite is true. A sample REXX
exec containing subroutines for the translation of RACF profile names between
internal and external formats is available in the Additional Materials for this book
from the Redbooks Web site at:
http://www.redbooks.ibm.com
2. The data set produced in step 1 was sorted in ascending order of the internal format
profile names.
3. Using this data set as input, a program was run twice (once on each RACFplex) to evaluate the risk of adding new DATASET class profiles to that RACFplex. The evaluation logic of the program is:
Read in the entire sorted file created by step 2, associating each external profile name
with its relative position (sequence number) in the input file.
Loop through the list of profile names, considering each name individually as follows:
If the profile exists only on the current (execution) RACFplex, no further analysis is
needed. The impact of adding this profile to the other RACFplex will be determined
by the execution of this program on the other RACFplex.
If the profile exists in both RACFplexes, compare the UACC, WARNING and
ERASE attributes for incompatibilities and report if any are found.
If the profile exists only on the other RACFplex, then invoke the Catalog Search
Interface, using the external profile name as the search filter, to find all cataloged
data sets that could be protected by the new profile. The Catalog Search Interface
is described in z/OS DFSMS:Managing Catalogs, SC26-7409.
For each data set returned by the Catalog Search Interface, determine the profile
that protects it on the current RACFplex, then compare the sequence number of the
profile currently protecting the data set with the sequence number of the profile to
be added (which was used as the catalog search filter). If the profile currently
protecting the data set has a higher number (because its internal-format name
collates higher than the new profile to be added), then this data set will be protected
by the new profile in the merged environment.
Note: The Catalog Search Interface filter uses the same masking rules as RACF
does for Enhanced Generic Naming (EGN).
Determining the profile that protects a data set on the current RACFplex can be
done by parsing the output from the LISTDSD command.
Tip: The LISTDSD command always refreshes generic profile information from
the RACF database on each call. This method can be quite slow when many
data sets are involved.
An alternative that provides better performance was found by writing a command
processor to determine the protecting profile using the RACROUTE
REQUEST=AUTH macro with the PRIVATE option (which returns the protecting
profile name in private storage). Because the DATASET class profiles are
processed in collating sequence, generic profiles for any high-level qualifier are
only loaded into the private region of the job's address space once. This can
save significant amounts of CPU, I/O and time when determining the protecting
profile for thousands of data sets.
The source code for this program, called PROPROF, can be found in the Additional Materials for this book on the Redbooks Web site at:
http://www.redbooks.ibm.com
Associate each existing profile that will have data sets stolen from it with the new profile that steals those data sets. This information is written to a data set and used as input to the profile attribute merge program used in step 4. Also, evaluate the risk of adding the overlapping profile by comparing the UACC, WARNING, and ERASE attributes of the stealing profile with those of any profile(s) from which it would steal protection of existing data sets.
Assign a risk level to the addition of the new profile. The risk assessment can be
determined by factors such as the number of data sets whose protecting profile
would change, the compatibility of the attributes of the stealing profile with all
profiles it steals from, and any other criteria determined by the installation.
4. The output of the program executed in step 3 was manually reviewed. In addition, an
attribute merge program was executed to automatically merge access lists and attributes
for overlapping profiles whose risk assessment (as determined automatically in step 3)
was below a certain level. In other words, there were many cases where a one-way merge
of access list(s) into the stealing profile was deemed safe and was performed
automatically.
Important: The issue of overlapping generic profiles is probably the most problematic part
of the merge process. Great care is needed to ensure unwanted changes to access levels
do not occur.
consider these fields. In each unloaded profile, prior to the matching process, DBSYNC
replaces these fields with question marks and thus removes their effect from the
comparison process.
6. Changing the Group Tree Structure. The addition of a RACF group may alter the RACF
group tree structure. If your business processes rely on group level privileges, then you
should investigate the impact of changes to the group tree structure. For example, if a
group named NEW is added with a superior group of OLD, and OLD is an existing group, then users with group privileges in OLD may gain authority over group NEW. If this will be a
problem, then the RACF Data Security Monitor program (DSMON) can produce a group
tree report to allow you to examine the group structure in effect at your installation.
You should determine what the group structure will look like in the merged environment
before actually adding new groups to either database. Groups defined above the insertion
point of a new group should be investigated to understand the effect of adding the new
group at this point.
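The group tree report can be produced with a small batch job along the following lines. This is a sketch only: it assumes the GROUP function is the DSMON function that produces the group tree report, so check the Security Administrator's Guide for the exact DSMON control statements available at your level.
//DSMON    EXEC PGM=ICHDSM00
//SYSPRINT DD  SYSOUT=*
//SYSUT2   DD  SYSOUT=*
//SYSIN    DD  *
  LINECOUNT 55
  FUNCTION GROUP
/*
Running the report against both RACFplexes before and after adding the new groups makes it easier to spot unintended changes to the group tree.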
Use of the PWDCOPY program is recommended to perform the actual updates to the RACF
database. User-written programs can be used to manipulate the output from the extract and
generate the input to PWDCOPY.
If the PWDCOPY programs do not provide enough functionality, you can develop your own
password synchronizing programs. The encrypted RACF password field can be retrieved
using the RACROUTE REQUEST=EXTRACT macro with TYPE=EXTRACT, which is
described in z/OS Security Server RACROUTE Macro Reference, SA22-7692. However, the
RACROUTE REQUEST=EXTRACT macro with TYPE=REPLACE cannot be used to update
the RACF password using only the encrypted value as input, because RACROUTE assumes
that a replacement value for an encrypted field is provided in clear text and would therefore
re-encrypt the value. Instead, use the ICHEINTY ALTER macro in conjunction with an
ICHEACTN macro that specifies FIELD=PASSWORD,ENCRYPT=NO to update a RACF
password with the encrypted password extracted by the first program.
Note: ICHEINTY and ICHEACTN are documented in z/OS Security Server RACF Macros
and Interfaces, SA22-7682.
If the user exists in both RACF databases, then the second program will need a method to
decide which of the passwords to use.
Some criteria to consider are:
1. If a user has never logged on to a RACFplex, the last access date field of the user record
will not be set. In this case, use the password from the other database. User records that
were added with the DBSYNC utility will not have the last access date and time set.
2. If one of the passwords has been changed more recently than the other, then consider
using that password.
3. If the last access date for one password is significantly more recent than the other, then
consider using the newer password.
4. If the last access date is older than the password change interval, then consider using the
other password.
This method only works if the source password field was encrypted using a technique that the
target system supports for decryption.
A suggested method is that, if a user has a valid and current password in both databases, you notify the user as to which password will be selected; that is, the password from system X will be used when you cannot determine which password is the best one to use.
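The selection criteria above lend themselves to a small REXX routine. The following sketch assumes the last access dates and last password change dates have already been extracted for each user from both databases in a sortable format such as yyyyddd; the routine and variable names are invented for the example, and criterion 4 (comparison against the password change interval) is omitted for brevity.
/* REXX - choose which password to keep for one user ID             */
choose_pwd: procedure
arg userid, lastacc_a, pwdchg_a, lastacc_b, pwdchg_b
pick = 'A'                                 /* default to system A    */
select
  when lastacc_a = '' & lastacc_b \= '' then pick = 'B' /* never used on A */
  when lastacc_b = '' & lastacc_a \= '' then pick = 'A' /* never used on B */
  when pwdchg_b > pwdchg_a then pick = 'B' /* more recent change wins */
  when pwdchg_a > pwdchg_b then pick = 'A'
  otherwise nop                            /* cannot tell: keep A and notify the user */
end
say userid': password will be taken from system' pick
return pick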
The password extraction and update should be done during the outage to re-IPL the
necessary systems onto the shared RACF database.
13.3.10 Cutover
Having synchronized the two RACF databases, you can now implement a single RACF
database for the entire sysplex. Assuming that you will be moving the incoming system to use
the target sysplex RACF database, only the incoming system will need to be IPLed. An
outage is not required on the other systems.
The following steps need to be performed:
1. Shut down the incoming system to a minimal state, leaving only those resources active that will be needed to run the final synchronizing tasks.
2. Run a final DBSYNC.
3. Update any remaining discrepancies.
4. Run the password unload utility and extract all passwords.
5. Shut down the incoming system.
6. Update the passwords in the target sysplex RACF database.
7. Update the RACF Data Set Names Table (ICHRDSNT) on the incoming system, specifying the new RACF database.
8. IPL the incoming system.
9. Verify the incoming system is using the correct RACF database using RVARY LIST.
10. Monitor syslog and SMF for RACF violations.
13.3.11 Summary
A RACF database merge requires careful analysis of the customization, entities, and
resource profiles in both environments. Thorough comparison and reporting of mismatching
profile attributes can lead to a methodical approach in which many of the decisions can be
made programmatically. By carefully synchronizing the databases using controlled, reversible
changes, the merge can be accomplished with minimal risk or disruption, and a high degree
of user satisfaction.
Having said that, the process is complex, and mistakes are very visible. For this reason, and especially if you have a large or complex security setup, you may find it advisable to involve someone who has been through a similar process previously. The experience, and the tools, that they contribute may more than offset the financial cost of their involvement.
13.4.1 Tools
The following tools are available to perform many of the merge tasks.
Compares these IDs to the user IDs and group names contained in RACF data fields
such as:
OWNER fields
NOTIFY fields
Commands to delete profiles, for profile names that contain the ID value
See z/OS Security Server RACF Security Administrator's Guide, SA22-7683 for information
on how to use this program.
DBSYNC
DBSYNC is a REXX exec that compares two RACF databases, unloaded by IRRDBU00, and
creates the RACF commands to make them similar.
To obtain DBSYNC, visit the RACF Web site at:
http://www.ibm.com/servers/eserver/zseries/zos/racf/dbsync.html
DBSECLV
DBSECLV is a REXX exec that will read a RACF database unloaded by IRRDBU00, and
create a copy of the database with external names added wherever a profile had an internal
SECLEVEL or category number.
DBSECLV is intended to be used as a pre-processing step for the DBSYNC exec which must
use the external names of SECLEVELs and categories when comparing RACF databases,
not the internal SECLEVEL and category numbers.
To obtain DBSECLV, visit the RACF Web site at:
http://www.ibm.com/servers/eserver/zseries/zos/racf/dbsync.html
PWDCOPY
The PWDCOPY utility allows you to copy passwords from user IDs in one RACF database to
another RACF database. The utility consists of two parts: an export utility (ICHPSOUT) and
an import utility (ICHPSIN). The exported file can be post-processed, prior to importing into
another RACF database. The exported records contain the userid, encrypted password, last
password change date and profile creation date.
To obtain PWDCOPY, visit the RACF Web site at:
http://www.ibm.com/servers/eserver/zseries/zos/racf/pwdcopy.html
RACEX2IN
RACEX2IN is a sample REXX exec containing two functions to convert RACF profile names
to and from RACF internal name format. If internal-format profiles are sorted in ascending
order, then the first (lowest) profile found that matches a resource name is the profile RACF
will use to protect that resource. This information can be used to perform RACF analysis on
profiles that have not yet been added to a system.
The source code for this exec is located in the Additional Materials for this book on the
Redbooks Web site at:
http://www.redbooks.ibm.com
PROPROF
PROPROF is a sample TSO/E command processor that displays the name of a profile
protecting a resource. This utility performs better than the LISTDSD or RLIST RACF commands, because the profile name is retrieved via RACROUTE REQUEST=AUTH, so that RACF
database I/O may be avoided. This utility is useful if large numbers of data sets will need to be
processed as part of the merge.
The source code for this program is located in the Additional Materials for this book on the
Redbooks Web site at:
http://www.redbooks.ibm.com
ALTPASS
ALTPASS is a sample TSO/E command processor that allows a user's password to be
transferred from one RACF database to another when only the encrypted form of the
password is available; that is, when the password itself is not known. ALTPASS also allows a
user's revoke count and/or password change date to be updated without changing the user's
password. This sample is available as assembler source code and can be used as is, or as a
base to develop your own password synchronization process.
The source code for this program is located in the Additional Materials for this book on the
Redbooks Web site at:
http://www.redbooks.ibm.com
ICETOOL
ICETOOL is a multi-purpose DFSORT utility. ICETOOL uses the capabilities of DFSORT to
perform multiple operations on one or more data sets in a single job step.
These operations include the following:
Creating multiple copies of sorted, edited, or unedited input data sets
Creating output data sets containing subsets of input data sets based on various
criteria for character and numeric field values, or the number of times unique values
occur
Creating output data sets containing different field arrangements of input data sets
Creating list data sets showing character and numeric fields in a variety of simple,
tailored, and sectioned report formats, allowing control of title, date, time, page
numbers, headings, lines per page, field formats, and total, maximum, minimum and
average values for the columns of numeric data
Printing messages that give statistical information for selected numeric fields such as
minimum, maximum, average, total, count of values, and count of unique values
Tip: SYS1.SAMPLIB member IRRICE contains a lot of useful RACF report generators
using ICETOOL.
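The IRRICE member is the place to start for real reports. Purely to illustrate the mechanics, the following job counts how many records of each type are in the IRRDBU00 unload; the data set name matches the one used in Example 13-4, and, because the unload is a variable-length data set, the DFSORT field position includes the 4-byte record descriptor word, so the record type in column 1 of the record is referenced as position 5.
//RACFRPT  EXEC PGM=ICETOOL
//TOOLMSG  DD  SYSOUT=*
//DFSMSG   DD  SYSOUT=*
//DBUDATA  DD  DISP=SHR,DSN=RACF.UNL13
//TYPES    DD  SYSOUT=*
//TOOLIN   DD  *
* Count how many records of each type are in the IRRDBU00 unload.
  OCCUR FROM(DBUDATA) LIST(TYPES) ON(5,4,CH) ON(VALCNT)
/*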
13.4.2 Documentation
The following manuals contain a wealth of information that can be used during the merge:
z/OS Security Server RACF Security Administrator's Guide, SA22-7683
z/OS Security Server RACF Command Language Reference, SA22-7687
z/OS Security Server RACF System Programmer's Guide, SA22-7681
z/OS Security Server RACROUTE Macro Reference, SA22-7692
z/OS Security Server RACF Macros and Interfaces, SA22-7682
The RACF home page contains useful information and can be found at:
http://www.ibm.com/servers/eserver/zseries/zos/racf/
The IBM Redbooks Web site contains a number of RACF-related books and can be found at:
http://www.redbooks.ibm.com
Chapter 14. SMS considerations
This chapter discusses the following aspects of moving a system that is using its own SMS
environment into a sysplex where all the existing systems share all or most DASD and have a
common SMS environment:
Is it necessary to merge SMS environments, or is it possible to have more than one SMS
environment in a single sysplex?
A checklist of things to consider should you decide to merge the SMS environments.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
Table 14-1 Considerations for merging SMS environments (columns: Consideration, Note, Type, Done)
Check that any required compatibility PTFs are applied if all systems are not using the same DFSMS level
The Type specified in Table 14-1 on page 214 relates to the sysplex target environment: B represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes in Table 14-1 on page 214 are described below:
1. Review Control Data Set sizes, and attributes.
Review if control data sets need to be re-sized, and check that the VSAM Shareoptions
are specified correctly for shared VSAM data sets. Refer to DFSMSdfp Storage
Administration Reference, SC26-7331 for more information.
2. Define all systems to the base configuration, and review defaults.
All systems in the SMSplex must be defined in the base configuration. There is no harm in
defining a system in the base configuration before it actually joins the SMSplex.
Additionally, it is possible, if you wish, to define all the systems in the sysplex using a single sysgrp definition. If you currently use sysgrp, the incoming system will automatically be included when it joins the sysplex, so no further action is required.
You should also review the default values for Management Class, Unit, Device Geometry,
and (if running z/OS 1.3 or later) the data set separation profile for both SMSplexes. If the
values are different, institute whatever changes are required to bring them into
synchronization. Remember that changing the Device Geometry value can cause either space abends or over-allocation.
3. Review all data class, management class, and storage class definitions.
Review all the SMS construct definitions, as they will need to be consistent across all
systems. This is a critical part of the merge process, and often the most difficult to
perform. A simple change to a parameter can have a huge impact on the way data sets
and DASD volumes are managed. This, along with reviewing and merging the ACS
routines, may be the most time-consuming part of the merge process.
Compare all storage, management, and data classes with duplicate names to see if they
really are identical. If they are not identical, you must address this, especially to make sure
the wrong data sets don't get deleted.
You also need to check for storage, management, and data classes that are defined with
different names, but are in fact identical. You should make a note of any such classes; after the merge is complete, you may wish to change your ACS routines so that only one of the classes gets assigned to new data sets.
The merged SMSplex must contain the superset of storage classes, management
classes, and data classes from both SMS environments.
The construct information can be copied from one SCDS to another (assuming both are
online to the system you are using) by using the ISMF line operator command COPY. This
will bring up a panel that allows you to specify the source and target SCDS. This will copy
the constructs, but will not copy the volumes associated with the storage groups. This
procedure is documented in DFSMSdfp Storage Administration Reference, SC26-7331.
4. Review all storage group definitions.
Assuming that you are only merging SMSplexes if the target environment is a
PlatinumPlex, you need to decide how you want to handle your storage groups. Will you
retain the separate storage groups that you have prior to the merge, or will you merge
some storage groups, resulting in fewer, but larger, storage groups?
If you decide to merge storage groups, you need to review the attributes of the storage
groups being merged. If the groups being merged have different attributes (for example,
one may have AUTOBACKUP set to YES and the other may have it set to NO), you need
to ensure the attributes of the merged group meet the requirements of all data sets that
will reside within that group. Once you have resolved any differences, change the
definitions in the target sysplex SMSplex. For simplicity, we do not recommend actually
merging the storage groups until after you have brought the incoming system up using the
target sysplex ACDS. This means that you must define the incoming systems storage
groups in the target sysplex SMSplex.
In order to allow both SMSplexes to share the one set of SMS constructs while still
controlling which volumes each can use, we recommend defining the storage groups so that the incoming system's storage groups have a status of DISNEW for the target sysplex systems, and the target sysplex's storage groups have a status of DISNEW for the incoming system. This will permit you to have a single SMSplex, but still restrict access to the various storage groups to specific systems. Once you are comfortable that everything is working correctly, you can start changing the storage group statuses to ENABLE.
You also need to review which systems you have defined to carry out the various storage
management tasks (migration, backup, dump) for each storage group. Many installations
limit which systems can do each of these tasks for a given storage group. Given that there
are more systems in the sysplex now, and potentially larger storage groups, you should
review each storage group definition and decide whether you wish to change the system
or system group assigned for each storage group. You obviously need to coordinate this
activity with the merge of the HSMplex, to ensure that HSM is enabled for the tasks you
wish carried out on the systems you specify in the storage group definition.
5. Review and merge SMS ACS routines.
This can be the most time-consuming part of the SMS merge. Fortunately, most of the
work can be performed prior to the merge day process.
You need to create a set of SMS ACS routines that will work with both SMS environments. This means creating superset routines containing the filter lists, class names, and group names from both environments; a small illustrative fragment is shown after these notes. You will need to be satisfied that all data sets are being assigned the appropriate Classes and Groups. Consider the use of NaviQuest to assist with this task (ISMF option 11).
Avoid deleting the definitions for any data classes, management classes, or storage
classes, unless you are sure that they are currently no longer in use. SMS Class names
are contained in many places, including catalog entries, HSM CDSs, and on DSS dump
tapes. Just because a class is not being assigned to new data sets in the ACS routines
does not guarantee that there are no existing data sets with these classes assigned to
them. If a management class is deleted, and a data set on Level 0 is assigned to that
management class, backup and space management will not process that data set and
error messages will be produced saying that the management class could not be found.
If you do eliminate class definitions, consider coding the ACS routines to catch the old
names, if data sets are recalled or recovered. Data sets on Level 0 volumes need to have
class names defined, but data sets that have been migrated or backed up could have their
class names changed by the ACS routines, when they are recalled/recovered to Level 0.
If any systems are using Tape Mount Management, make sure your routines cater for this.
If any systems are using information from the DFP segments, make sure RACF and your
ACS routines cater for this.
6. Review all tape library definitions.
If you have any tape libraries defined to (either) SMSplex, you must ensure that those
definitions are carried forward to the merged environment.
7. Review Virtual I/O (VIO) definitions.
Check to see how VIO is defined in the storage group definitions in both environments. If
the definitions are different, make sure you understand why they have different values, and
adjust accordingly.
8. Review SMS subsystem definition.
Review the IEFSSNxx definitions for the SMS subsystem and resolve any differences. The options for the IEFSSN entry are described in the section entitled "Starting the SMS Address Space" in DFSMSdfp Storage Administration Reference, SC26-7331.
9. Review and merge Parmlib member IGDSMSxx.
Review all IGDSMSxx statements for differences, and resolve. The IGDSMSxx statements
are described in the section entitled Initializing SMS Through the IGDSMSxx Member in
DFSMSdfp Storage Administration Reference, SC26-7331.
10.Review any automatic or automated commands.
Review if there is any automation executing SMS-related commands at specific times, or in response to specific messages; for example, SETSMS SAVEACDS(.......). Because all systems will be in the same SMSplex, such commands should probably only be issued on one system in the sysplex.
11.Review any housekeeping jobs.
Because there will only be one SMSplex, you may be able to eliminate some
housekeeping jobs (for example, the jobs that previously ran on the incoming system to back up that system's SCDS and ACDS).
12.Check for SMS exits.
You will need to determine what SMS exits are in use, and if they are still actually required. There have been many enhancements to SMS over recent DFSMS releases, meaning that the function provided by your exit may now be available as a standard part of SMS. Or maybe the business requirement no longer exists, so the exit can be removed. If you determine that the exit is still required, you need to ensure that the exit provides the functions required by both the incoming system and the target sysplex systems, and that modifying the exit in preparation for the merge does not have a negative impact on either environment. The SMS exits normally reside in SYS1.LPALIB and are called:
IGDACSXT
IGDACSDC
IGDACSSC
IGDACSMC
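As referenced in note 5 above, the following fragment gives the flavor of a superset storage class routine. It is a sketch only: the class names, filter lists, and data set masks are invented for illustration and would be replaced by the definitions from your two environments.
PROC STORCLAS
  FILTLIST INCOMING INCLUDE(PRODA.**,TESTA.**)  /* HLQs from the incoming system */
  FILTLIST TARGET   INCLUDE(PRODB.**,TESTB.**)  /* HLQs from the target sysplex  */
  SELECT
    WHEN (&DSN = &INCOMING)
      SET &STORCLAS = 'SCINCOM'
    WHEN (&DSN = &TARGET)
      SET &STORCLAS = 'SCTARG'
    OTHERWISE
      SET &STORCLAS = 'SCSTD'
  END
END
NaviQuest (ISMF option 11) can then be used to run test cases against the merged routines and confirm which classes real data set names would be assigned.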
Once you have successfully tested that the incoming system functions correctly and that data sets get allocated where you would expect them to, you can then start changing the storage group statuses to ENABLE. Remember that the user catalogs should be available to all systems at this point, meaning that someone on one of the target sysplex systems could potentially try to recall a data set that had been migrated by HSM on the incoming system, and vice versa. If the incoming system's storage groups are not accessible (status ENABLE) to the target sysplex systems, the recall will probably fail.
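When you are ready, the storage group status can be changed dynamically with the VARY SMS operator command; the storage group and system names below are invented for the example, and you should check MVS System Commands for the full syntax:
  V SMS,STORGRP(INCSG01),ENABLE
  V SMS,STORGRP(INCSG01,SYSA),ENABLE
Note that a change made this way affects the active configuration only; to make it permanent, you also update the storage group definition in the SCDS with ISMF.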
Chapter 15. DFSMShsm considerations
This chapter discusses the following aspects of moving a system that is using DFSMShsm
into a sysplex where all the existing systems use DFSMShsm and share a set of DFSMShsm
CDSs:
Is it necessary to merge DFSMShsm environments, or is it possible to have more than one
DFSMShsm environment in a single sysplex?
A checklist of things to consider should you decide to merge the DFSMShsm environments.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
CDSs. There are also tasks relating to moving to consistent space and availability
management practices, setting up new data sets, changing procs and parms, and so on. The
actual merge process takes place after all this work has been completed, and is described in
15.3, Methodology for merging HSMplexes on page 235.
This section contains, in checklist format, a list of things that must be considered should you
decide to merge two or more HSMplexes.
Due to the interrelationship between them, it is strongly recommended that you merge the
SMS environment, tape management system (for example, RMM), security environment (for
example, RACF), catalog environment, and DASD environments at the same time as merging
the HSMplexes.
Table 15-1 Considerations for merging HSMplexes (columns: Consideration, Note, Type, Done)
Check that the CDSs and CDS Backup data sets are large enough and have the correct attributes
If the HOSTID changes, check that the SMS ACS routines will manage the new Activity Log data sets (which include the HOSTID as part of the name) as expected
The Type specified in Table 15-1 on page 223 relates to the sysplex target environment: B represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 15-1 on page 223 are described below:
1. If two HSMplexes exist within a sysplex environment, one HSMplex interferes with the
other HSMplex whenever DFSMShsm tries to update CDSs in non-RLS mode or when it is
performing other functions, such as level 1-to-level 2 migration. This interference occurs
because each HSMplex, although having unique resources, uses the same resource
names for global serialization.
To eliminate this constraint, exploit the Single GRSplex Support feature in HSM by
specifying RNAMEDSN=YES in the startup JCL of every HSM address space, and by
specifying the HSMplex name on the PLEXNAME keyword in the ARCCMDxx member.
Obviously the two HSMplexes should have different PLEXNAME values.
For more information on this support, see the section entitled Single GRSplex
Serialization in a Sysplex Environment in z/OS DFSMShsm Implementation and
Customization Guide, SC35-0418 and the section entitled PLEXNAME: Specifying a
Name for an HSMplex in z/OS DFSMShsm Storage Administration Reference,
SC35-0422.
2. Before you start doing any work with any of the HSM CDSs (on any of the systems), run a
HSM AUDIT against all CDSs in both HSMplexes. We recommend running the following
audits:
AUDIT DATASETCONTROLS(MIGRATION)
AUDIT DATASETCONTROLS(BACKUP)
AUDIT ABARSCONTROLS
Running the audit will identify any records that are in error that should be removed or
repaired before you start processing the CDSs in preparation for the merge. Running the
audit at this time will potentially save you time by avoiding working with records that are
irrelevant.
It would probably also be a good idea at this point to issue a RECYCLE ALL EXECUTE
PERCENTVALID(0) HSM command to free up any empty HSM tapes. The fewer entries
you have in the HSM CDSs, the easier the merge process should be.
3. Identify and eliminate duplicate records within the HSM CDSs.
Important: There are likely to be quite a number of duplicate records as you review the
HSM CDSs. Before you make any changes, you should decide which set of CDSs you
want to use as the basis for the merged HSMplex, and which ones you will delete the
duplicates from. It may be that you will be forced to make some changes in one HSMplex
(for example, relabelling a ML1 volume), and other changes (for example, renaming some
data sets) in the other HSMplex, but if you can select one HSMplex for all changes, it will
make the merge process significantly easier and reduce the opportunity for mistakes to be
made.
The merge process that we provide will automatically discard duplicate entries based on
the order in which the CDSs were passed to the merge step. If you know that all of the
changes you are going to make to remove duplicates will be to HSMPLEX2, then you
simply have to include the HSMPLEX2 CDSs as the second data set in the input to the
merge. On the other hand, if some of the changes will be made to HSMPLEX1 and some
to HSMPLEX2, then you cannot depend on the merge job dropping all the duplicates so
you must delete them manually (usually, using the FIXCDS command) prior to the merge.
When merging CDSs, it is important to understand the contents of the data sets. For
example, if you are merging the CDSs belonging to two HSMplexes, you cannot do the
merge if the same resource is defined in each CDS. You must first analyze the contents of
the data sets to ensure that there are no duplicate resources, and remove any that you
discover.
The merging of the HSM CDSs will be quite easy provided there are no unexpected
duplicate records within them. Some duplicate records can be easily ignored in the merge
process, while others need to be reviewed, and the appropriate action taken, before the
merge process can proceed.
Job PREMERGE in HSM.SAMPLE.TOOL, which is created by running the job in member ARCTOOLS in SYS1.SAMPLIB, can assist with determining which records are duplicates. You will need to run this against all CDSs; note that the job only runs against one CDS type at a time, so you will need to run the job three times (once for each set of BCDS, MCDS, and OCDS CDSs).
The output from the PREMERGE job identifies the duplicates by the record number, along
with the record key; for example, data set name, tape volser, and so on. There is some
documentation within the PREMERGE member that advises on the appropriate action for
record types that have duplicates.
An alternative to the PREMERGE job is to use the Merge Jobs for the Migration Control Data Set (MCDS), Backup Control Data Set (BCDS), and Offline Control Data Set (OCDS) (Example 15-1 on page 237, Example 15-2 on page 238, and Example 15-3 on page 239 in 15.4, Tools and documentation on page 237).
Just like the PREMERGE job, these jobs can be continually rerun until all duplicates (or at
least those that require action) have been resolved. Duplicates will be copied into the data
set identified on the DUPS DD statement in Step S002ICET. You may find it more
convenient to use the Merge Jobs rather than the PREMERGE job to identify the
duplicates, because you will need to set up and run these jobs for the actual merge
process anyway.
The record types, the CDS that they reside in, and an indicator of how duplicates should
be handled, are displayed in Table 15-2 on page 226, Table 15-3 on page 226, and
Table 15-4 on page 226.
DSR RECORDS: The DSR record is a statistical summary of daily activity. Duplicates can
be safely ignored, as they will be discarded by the merge job.
L2CR RECORDS: The L2CR record defines the structure of migration level 2 volumes
and their associated key ranges. If you are using tape for your ML2 and you have no ML2
DASD volumes, you should not have any of this record type. If there are duplicates, you
need to resolve these prior to the merge by emptying and relabelling the DASD ML2
volumes. There are specific considerations for draining ML2 volumes defined with
keyranges; see the description of the ADDVOL command in z/OS DFSMShsm Storage
Administration Reference, SC35-0422 for more information.
MCR RECORDS: The MCR record contains host control information that must be
maintained between HSM starts. The key of the record is the characters MCR followed by a single digit; the digit is the HSM host identification as specified on the HOST keyword in the HSM startup JCL. You do not have to do anything with these records. If the
host identification of HSM on the incoming system will change when it moves into the
sysplex, the duplicate MCR record will be discarded during the merge job. On the other
hand, if the host identification will remain the same (remember that every HSM in the
HSMplex must have a different host identification number), then it will be copied correctly
to the new merged CDS.
MHCR RECORDS: The MHCR record contains space usage information about the HSM
CDSs. You do not have to do anything with these records; duplicate records will automatically be discarded by the merge job. When HSM is started after the merge, using
a new set of CDSs, the MHCR record will be updated with information about those data
sets.
VSR RECORDS: The VSR record contains information about volume activity for one day
for each volume under HSM control. Duplicate VSR records indicate that some volumes
under HSM control have duplicate volsers. Obviously you cannot have duplicate volsers
after the merge, so all duplicate volumes need to be deleted or relabeled prior to the
merge. Removing the duplicate volser ensures that no new duplicate VSR records will be
created, and existing ones will be discarded during the merge job.
MCA RECORDS: The MCA record contains information about a migrated data set,
including pointers back to the original data set name. You must eliminate any duplicate
MCA records before the merge. However, because the key contains the first two qualifiers
of the data set and the time and date that it was migrated, it is very unlikely that you will
find duplicates. If you do happen to find duplicates, all you have to do is recall the data set
and that will cause the MCA record to be deleted.
Note: If you have specified a MIGRATIONCLEANUPDAYS reconnectdays value
greater than 0, the MCA record will not get deleted until that number of days after the
data set is recalled. It might be wise to temporarily set this value to 0 in the days leading
up to the merge. Once the merge has been successfully completed, reset
reconnectdays back to its original value.
MCO RECORDS: The MCO record contains information about migrated VSAM data sets.
The key of the record is the migrated data set name. Just as for the MCA records, there
cannot be any duplicate MCO records. However, again like the MCA records, the key
contains the date and time and the first two qualifiers of the original data set name, so it is
very unlikely that you will actually find any duplicates. If you do, simply recall the data set
(with the same reconnectdays consideration as the MCA records).
MCB RECORDS: The MCB record contains control information for an individual data set
that has been backed up, and identifies backup versions. The key is the original data set
name, so duplicate records are an indicator that there are duplicate data set names. You
cannot have duplicate data sets in the merged HSMplex, so all duplicate data sets need to be deleted or renamed (and re-backed up) prior to the merge. Once you delete or
rename the data sets, the duplicate MCB records should disappear.
DVL RECORDS: The DVL record contains information about HSM dump volumes. The
key is the dump volume volser, so duplicate DVL records indicate that there are duplicate
dump volsers. You must eliminate any duplicate volsers before the merge (and before you
merge tape management systems), so all duplicate volumes need to be deleted or
renamed prior to the merge. Once the volumes are DELVOLed, the duplicate DVL records
will be deleted.
DCL RECORDS: The DCL record describes the attributes of a HSM dump class.
Duplicates can be safely ignored (the duplicates will be discarded during the merge job),
provided the related dump classes are defined the same on each system. If the duplicates
are different, then you should review and consolidate the dump class definitions in
ARCCMDxx.
MCC RECORDS: The MCC record describes a backup version of a data set. The key
contains the backup data set name, and, like the MCA records, it contains the first two
qualifiers of the original data set and the date and time that it was backed up. It is
theoretically possible, although unlikely, that you will have duplicate MCC records. If you
do, you must delete the related backup data set using the BDELETE HSM command.
Before you do this, however, you need to be sure that that backup is no longer required.
After you issue the BDELETE commands, the duplicate MCC records should no longer
exist.
MCM RECORDS: The MCM record describes a data set that has been backed up by the
BACKDS or HBACKDS command, and is currently residing on a level 1 migration volume.
It is rare that any will exist, and it is very unlikely that any duplicates will also exist. You
should issue a FREEVOL ML1BACKUPVERSIONS command prior shutdown to move
all these backup data sets from ML1 to tape. When this command completes, all the MCM
records should have been deleted.
MCL RECORDS: The MCL record describes a changed data set that has migrated from a
primary volume, and needs to be backed up. It is rare that any will exist, and it is very
unlikely that any duplicates will also exist. The key is the data set name, so duplicate keys
are an indicator of duplicate data set names. If you eliminate all duplicate data set names,
there should be no duplicate MCL records.
MCP RECORDS: The MCP record contains information about volumes that have been
backed up or dumped by DFSMShsm. The key is the volume serial number, so if you have
duplicate MCP records, it is an indicator that you have, or had, duplicate volsers. While
you should resolve the duplicate volser situation, that will not remove the duplicate MCP
records. The records will only be removed when all backups and dumps have been
deleted. Because the backups or dumps may contain information that is needed, you must
decide how you want to handle each one. We do not recommend proceeding with the
merge if there are still duplicate MCP records, as the merge process could discard
information about backups that you still need.
DGN RECORDS: The DGN record contains information about the dump generation of a
given volume when this volume has been processed by the full-volume dump function.
The key is the DASD volser followed by the date and time the dump was created. If you
have duplicate records, it is an indication that you have duplicate DASD volsers. While you
should resolve this before the merge, that action on its own will not remove the duplicate
DGN records. The DGN records will only be deleted when the related dump is deleted. As
for the MCP records, care must be taken when deciding to delete dump volumes in case
they contain information that is still required. However, you cannot eliminate the DGN
records until you delete the related dumps. We do not recommend proceeding with the
merge until all the duplicate DGN records have been addressed.
ABR RECORDS: The ABR record contains information related to a specific version and
copy created during an aggregate backup or processed during an aggregate recovery. It is
unlikely there will be any duplicate ABR records. However, if there are:
Duplicate AGGREGATE names should be identified as part of the preparation for the
merge of the DFSMS SCDS constructs. If duplicates are found, they need to be
resolved at this point.
If no duplicate aggregate names are found in the SCDSs, but there are duplicates in
the BCDSs, then it is likely that one site has some dormant ABR records that can probably be omitted from the merge; however, you must verify this before proceeding.
If the aggregates are required, and the duplicate records are not addressed, you will
end up losing versions of the aggregate you want to keep.
For any duplicates that are identified, you need to determine:
If the applications are being merged, you need to decide which ABR records to
keep.
If applications are not being merged, you have to decide which application will keep the aggregate name and decide on a new aggregate name for the other application.
In the latter case, you must set up definitions and establish an ABR history for the
new aggregate (by using ABACKUP). Once you have done this, the duplicate ABR
records can probably be dropped prior to or during the merge.
MCT RECORDS: The MCT record describes a volume used for containing backup
versions. The key of these records is the volser of the backup volume, so duplicate MCT
records indicate duplicate backup tape volsers. The duplicate volsers must be addressed,
probably by doing a RECYCLE followed by a DELVOL of the tapes in question. However,
to get the best value from the RECYCLE, you should do an EXPIREBV prior to running the RECYCLE; this will ensure that all expired data is deleted before you clean up the tapes.
Once the DELVOL has been issued, there should no longer be any duplicate MCT
records.
BCR RECORDS: The BCR record contains status information about backup processing.
Each HSM host that is allowed to run backup (that is, SETSYS BACKUP) has a defined
host identification (specified in the HOST keyword in the startup JCL) and a unique BCR
record. The duplicate BCR records will automatically be discarded during the merge job, so you don't have to do anything with these records.
BVR RECORDS:
Important: Unlike the other CDS records, where duplicate records should be addressed
well in advance of the merge, the process for fixing up the BVR records must be carried out
at the time the CDSs are merged. We include the information here because it logically fits
in with the discussion in this section; however, the actions listed here should not be carried
out until the time of the merge.
The BVR record describes both tape and DASD backup volumes that are: (1) assigned for
use on a particular day of the backup cycle; (2) to be used for spill processing; and (3)
unassigned.
Figure 15-1 shows how the BVR records are represented in the BCDS.
Figure 15-1 HSM BVR records (keys BVR-dd-0000 through BVR-dd-nnnn for each day of the backup cycle, plus BVR-UN-0000 through BVR-UN-nnnn for unassigned volumes and BVR-SP-0000 through BVR-SP-nnnn for spill volumes)
Each of the BVRs shown in Figure 15-1 constitutes a day in the daily backup cycle (where
dd is 01 through the number of days defined by the DEFINE BACKUPCYCLE), or SP
for spill volumes, or UN for unassigned volumes. Each BVR record holds up to 164
entries. So, for example, if you had 180 tapes assigned to DAY 1, you would have a record
with a key of BVR-01-0000 containing 164 entries, and another one with a key of
BVR-01-0001 containing 16 entries.
Because of this naming convention for the BVR records, you will almost definitely have
duplicate BVR records across the two HSMplexes. Fortunately, APARs OW37110 and
OW35561 have made it significantly easier to merge CDSs containing these records; so much easier, in fact, that if you do not have those APARs applied, we recommend waiting until these APARs are applied to both HSMplexes before you start the merge.
Once these APARs are applied, the process you use is as follows:
a. DELVOL ... BACKUP(MARKFULL) all partial backup tapes on the incoming system
prior to the merge. If you are already using MARKFULL, this can be ignored.
b. Stop all HSM address spaces.
c. Back up the CDSs of all systems.
d. Merge the BCDSs. Duplicate BVR records do not need to be removed prior to the
merge; the duplicates can be safely ignored, provided you keep the records that
correspond to the system you are merging into.
e. Start one of the HSMs using the new merged databases.
f. Issue a FIXCDS R BVR REFRESH(ON) HSM command from that HSM. HSM will automatically rebuild the BVR records to include information for all partially filled backup volumes, regardless of which BCDS they originated in.
g. Issue the BACKVOL CDS command to get another backup of the CDSs.
Note: If you don't have APARs OW37110 and OW35561 applied to your systems, and you can't apply them for some reason, you should use the following process.
The objective of this step is to move volumes from one BVR to another so that each
BVR structure is unique to each configuration.
The way to move volumes from one BVR to another is to use the ADDVOL and
DELVOL commands. DELVOL UNASSIGN moves a volume from a daily backup or
spill BVR into the unassigned BVR. ADDVOL can move a volume from the
unassigned BVR into any of the daily backup or spill BVRs.
Thus, if both of the HSMplexes are running with seven-day backup cycles, the
storage administrator should DELVOL UNASSIGN all the daily backup volumes from
one of the HSMplexes and ADDVOL them to BVR01. The spill volumes would be
handled similarly, moving them from BVRSP to BVR02. The volumes that were
originally unassigned can be ADDVOLed to BVR03. The storage administrator
would repeat the process in the other HSMplex, using BVR04, BVR05, and BVR06.
When the backup volumes are moved, the residual empty BVR records on each of
the configurations should be deleted with the FIXCDS command.
DCR RECORDS: The DCR record contains control information for dump processing.
Each HSM host that is allowed to run dump has a defined host identification and a unique
DCR record. The duplicate DCR records will automatically be deleted by the merge job, so you don't have to do anything with these records.
4. Are there duplicate DASD or tape volsers?
There may be DASD and/or tape volumes that exist in both HSMplexes. If so, it is likely
that you will have discovered them as you reviewed the duplicate CDS records. However, it
is possible that there are duplicate volumes that HSM was not aware of at the time you
reviewed the CDSs.
To check for duplicate DASD volsers, get a list of all the DASD from both HSMplexes and
use a tool such as SUPERC to check for any matches.
If you find a duplicate DASD volume, you can use something like DFDSS COPY to move
Level 0-resident data sets to another volume with a unique volser. If the duplicate volume
is an ML1 volume, you should use FREEVOL to move ML1 and ML2 DASD-resident data
sets to another volume with a unique volser.
To check for duplicate tape volumes, check the tape management systems from both
HSMplexes. There should be a tape management system merge project going on in
tandem with the HSMplex merge, and identifying duplicate volsers will be one of the first
tasks of that project. It is unlikely that you will find HSM-owned duplicate tape volumes that
were not already discovered during the CDS cleanup process; if you do find any, go back
and review the steps for the related record type. Duplicate tapes that are not owned by
HSM will be addressed by the tape management system project. If you do find a duplicate
HSM tape volume, RECYCLE that tape to move the contents to another volume with a
unique volser.
After you have freed up the duplicate volumes, issue a DELVOL PURGE command to
remove the volumes from HSM. If the duplicate volume is a Level 0 volume, delete the HSM VTOCCOPY data sets (there should be two of these for each Level 0 volume with the AUTOBACKUP attribute), then delete the MCP records for these volumes.
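The HSM commands involved in freeing up and removing a duplicate HSM-owned volume are summarized below. The volsers are invented for the example, and the exact keywords you need depend on how each volume is defined, so check the Storage Administration Reference for the full syntax:
  FREEVOL MIGRATIONVOLUME(MIG001) TARGETLEVEL(MIGRATIONLEVEL2)
  RECYCLE VOLUME(T00001) EXECUTE
  DELVOL MIG001 MIGRATION(PURGE)
  DELVOL T00001 BACKUP(PURGE)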
If you change the MIGRATEPREFIX and BACKUPPREFIX values as part of the merge, after
the merge you will still be able to access the migrated and backup data sets that were
created before the merge; however, newly-created migrated and backup versions will
use the new HLQs.
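The prefixes themselves are SETSYS parameters in ARCCMDxx; a minimal sketch, assuming a new shared high-level qualifier of HSMPLX:
SETSYS MIGRATEPREFIX(HSMPLX) BACKUPPREFIX(HSMPLX)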
Note: Recalls from the SDSP data set will not work because for SDSP the system
doing the recall reconstructs the SDSP file name using the current MIGRATEPREFIX
value. HSM does not record the actual name of the SDSP anywhere.
In addition, recalls or recovers from tape may be an issue if you are running an OEM
tape package that compares the full 44-byte name on the allocation request to what it
has recorded in its records. Some tape packages specifically do this, although they
provide an option to turn it off. This is not an issue with RMM.
You should also review the HSM startup JCL:
Decide whether you are going to use separate HSM Procs for each system, or one
version that utilizes symbols defined in IEASYMxx.
Update the HOST parameters as appropriate. You will more than likely have to change
the host identification of the incoming system. Review if this parameter should be
specified via a symbol defined in IEASYMxx.
Make sure each HSM system has unique data sets for ARCPDOX/Y and ARCLOGX/Y.
We suggest that you include the HSM address space name and the system name
(&SYSNAME) in the data set name, as in the sketch following this list.
Review the CDSR and CDSQ values; they may need to be updated to support sharing
of the CDSs. Refer to the chapter entitled "DFSMShsm in a Multiple-Image
Environment" in z/OS DFSMShsm Implementation and Customization Guide,
SC35-0418. This may require updates to the GRS RNLs, which you will need to
coordinate to take effect at the same time as you merge the CDSs.
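A sketch of how the log and PDO DD statements might be coded in a shared HSM procedure (the data set names are illustrative, and DFHSM is assumed to be the HSM address space name):
//ARCLOGX  DD DISP=OLD,DSN=HSM.DFHSM.&SYSNAME..LOGX
//ARCLOGY  DD DISP=OLD,DSN=HSM.DFHSM.&SYSNAME..LOGY
//ARCPDOX  DD DISP=OLD,DSN=HSM.DFHSM.&SYSNAME..PDOX
//ARCPDOY  DD DISP=OLD,DSN=HSM.DFHSM.&SYSNAME..PDOY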
8. Are the CDSs and the CDS backup data sets large enough for the merged configuration
and do they have the correct attributes?
To allow for a simplified backout process, we recommend that you create new versions of
the CDSs and Journal. On the actual merge day, you simply rename the HSM procs so
that you use the new procs that refer to the new names.
You should calculate the required sizes for the new, combined CDSs based on information
from the CDSs that will be merged, and allocate the CDSs. At the same time, calculate the
size for the new Journal data set, and allocate the new Journal data set.
Resize the CDS backup data sets to accommodate the new CDS and Journal sizes.
Remember that HSM will reuse and rename existing DASD-resident backup data sets
when it backs up the CDSs or Journal, so if you want to switch to larger backup data sets,
you will have to allocate the new backup data sets in advance. You must define as many
backup data sets for each CDS as are specified on the BACKUPCOPIES statement in
ARCCMDxx.
Review the HSM-provided job to allocate the CDSs (HSM.SAMPLE.CNTL(STARTER)
which is created by member ARCSTRST in SYS1.SAMPLIB), to check that you are
defining the new CDSs with the correct record size, bufferspace, CIsize, Shareoptions,
and so on.
9. Is ABARS used?
If both HSMplexes use ABARS, check for duplicate Aggregate names. If duplicates exist,
only one system will be able to keep the original name. It may be a complex task to
remove the duplicates if it is determined that you must maintain past data from both
aggregate groups.
The method we provide in this book presumes that a planned HSM outage is possible.
14.Consider running an AUDIT command with the NOFIX option, to display any errors.
15.3.2 Backout
If backout is required at any stage, because you created new versions of the CDSs, Journal,
ARCCMDxx Parmlib member and startup JCL, you can simply stop all the HSMs, rename the
Parmlib and Proclib members back to their original names, and restart all the HSMs.
16
Chapter 16.
DFSMSrmm considerations
This chapter discusses the following aspects of moving a system that is using Removable
Media Manager (DFSMSrmm) into a sysplex where all the existing systems share a single
DFSMSrmm database:
Is it necessary to merge DFSMSrmm environments, or is it possible to have more than
one DFSMSrmm environment in a single sysplex?
A checklist of things to consider should you decide to merge the DFSMSrmms.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
While some of the concepts in this chapter probably apply to tape management systems
other than DFSMSrmm, this chapter has been written specifically for DFSMSrmm, by people
familiar with that product. Correspondingly, the term we use in this chapter and in the index to
describe the systems sharing a single tape management system database is RMMplex (as
opposed to TAPEplex or TMSplex or something similar).
Table 16-1 DFSMSrmm considerations checklist (each entry has a Note reference, a Type, and a Done column; entries include reviewing the DFSMSrmm CDS and Journal data set sizes and attributes, and security considerations)
The Type specified in Table 16-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. The notes indicated in
Table 16-1 are described below:
1. Ensure toleration service is installed if necessary
Before you can start a DFSMSrmm merge process, check the DFSMSrmm software levels.
If the levels are different, check for compatibility PTFs and confirm that they are installed
and applied.
To get a list of toleration and compatibility APARs for DFSMSrmm, refer to the DFSMS
program directory, the DFSMSrmm PSP bucket, and Information APAR II08878.
2. Identify and eliminate duplicate records within the DFSMSrmm CDSs
Note: There are numerous references in this section to duplicate records and how to
handle them. However, the first step has to be to identify if you have duplicate records.
The easiest way to do this is to run the jobs in Example 16-1 on page 255 and
Example 16-2 on page 255. The description of the first step of the job, STEP S001ICET
on page 252, discusses how the merge job handles the various record types, and
specifically, what it does with any duplicate records that it finds. You should look in the
data sets containing the duplicate records to see which record types you need to
address.
Figure 16-1 contains a graphical summary of the process of merging multiple RMM
CDSs. In this example, each LPAR has its own CDS, whereas in our sysplex merge we will
hopefully only have two: one from the incoming system, and one from the target sysplex.
(Figure 16-1 flow: extract the CDS from each LPAR; remove the control records; split the records by type and remove duplicates: Volume (key V), Rack (keys E, F, U), Bin (keys R, S), Product (key P), Data set (key D), Owner (key O), VRS (key K); merge the remaining records; create new control records; check the new CDS; start the new RMM.)
When merging CDSs, it is important to understand the contents of the CDS. For example,
if you are merging two CDSs, you cannot merge them if the same resource is defined in
each CDS. You must analyze the contents of the CDS to ensure that there are no
duplicate resources (or at least, no conflicting duplicates).
If you find that there are a large number of duplicate tape volsers, then you should
consider whether you really want to merge your DFSMSrmm CDSs. If you have a large
number of duplicate volumes, and you would still like to merge the systems, then before
you perform the merge, you should consider using one of the many TAPECOPY utility
programs available, to copy the duplicate volumes to volumes with unique volsers.
There is a sample merge job shown in Example 16-2 on page 255. As well as performing
the merge, the job can also be used to identify duplicate records in advance of the actual
merge. The job can be continually rerun until all duplicates (or at least those that require
action) have been resolved. The merge job is described in more detail in 16.3.1, Merging
the CDSs on page 252.
The merge job REPROs the merging systems' DFSMSrmm CDSs into a flat file, sorts the
file (discarding duplicate records in the process), and creates a new merged DFSMSrmm
CDS. Duplicate records are written to sysout files for you to investigate and act on.
Note that all the duplicates that are deleted must be from the same system. If you plan to use
the merge process to remove duplicate records, all of the records that you want to discard
must come from the same system; for example, you can't use the merge to discard VRS
records from RMMPLEX1 and Volume records from RMMPLEX2.
In this section we describe each resource type in the CDS and explain what you can do to
handle duplicate records.
The record types contained in the DFSMSrmm CDS are:
Key type   Record type
C          Control
CA         Action
CM         Move
D          Data set
E          Empty rack
F          Scratch rack
K          VRS
O          Owner
P          Product
R          Empty bin
S          In-use bin
U          In-use rack
V          Volume
Volume record - The volume record contains detailed information about a volume and its
contents. It also identifies the data sets that are on the volume, and volume chaining
information. Some fields in the records identify the existence of other records in the CDS.
When moving volume records from one CDS to another, you must be sure to move the
related records as well. These include the data set records, owner record, rack record, bin
records, and product record. If you do not move them, the EDGUTIL utility may not be able
to identify all the missing records and therefore may not be able to create all the missing
records.
When you are merging CDSs, you should not have any duplicate volume serials. In fact,
from an integrity point of view, you should not have duplicate volume serials in the same
sysplex anyway, regardless of whether it causes a problem in DFSMSrmm or not. If you
do, you must remove the duplicate volumes before starting the merge process. You should
understand why there are duplicate volumes and, based on your understanding, select the
best approach for dealing with them. While the merge job will fix some duplicate record
problems by just discarding one of the duplicates, you must ensure that there are no
duplicate volume records before you start the merge. When you run the merge job,
duplicate volume records are written to the file specified on the DUPSMAIN DD statement.
If you have duplicate volumes, there could also be duplicate data set records for those
volumes. If you delete the volumes, the related duplicate data set records will get deleted
automatically as part of the process of removing the duplicate volume.
Control record - Each DFSMSrmm CDS contains a control record. The control record
contains the dates and times of when the inventory management functions last ran, the
counts of rack numbers and bin numbers for built-in storage locations, and the settings for
some DFSMSrmm options.
When merging databases, only one control record is required. You must use the EDGUTIL
program with PARM=MEND to correct the record counts after the merge job has
completed, and before you start DFSMSrmm using the new CDS.
The control record also contains the CDSID value. When merging the databases, retain
the control record from the system that contained the CDSID you wish to continue to use
(probably the one from DFSMSrmm on the target sysplex). All systems that use the
DFSMSrmm CDSs must have a matching CDSID specified in the EDGRMMxx member of
Parmlib before being allowed to use the CDS.
As an alternative, you can bypass copying all control records, and after you have loaded
the new DFSMSrmm CDS, you can create a new control record by using EDGUTIL with
PARM=CREATE. After this has completed, use the same program with PARM=MEND to
update the control record to correct the rack and bin counts.
You should use the RMM LISTCONTROL CNTL subcommand to check the options which
are in use, such as EXTENDEDBIN and STACKEDVOLUME. If the options in use differ
between the CDSs, we recommend that you enable options to bring the CDSs in line with
each other and follow the steps to implement those options before starting the merge.
Action and Move records - Action records contain information about the librarian actions
that are outstanding. They include the release actions and any volume movements that
are required. Duplicate records will be deleted by the merge job, because there is a strong
chance that they exist in each CDS, and the records will be rebuilt by DFSMSrmm during
inventory management after the merge.
Data set record - A data set record exists for each volume on which the related data set
has been written. Each record is uniquely identified by the volser and physical file number
of the data set. Remember that it is perfectly acceptable to have multiple tapes all
containing the same data set (unlike DASD data sets where you generally would try to
avoid having the same data set name on more than one volume).
When merging CDSs, you want to be sure that there are no duplicate records across the
two CDSs. Because the key of the data set record includes the volser, avoiding duplicate
volsers ensures that there will be no duplicate data set records.
VRS record - The VRS records contain the retention and movement policies for your data.
When merging the CDSs, you may find that you have the same policies defined in each
CDS for some types of data. The policies are identified by fully qualified or generic data
set name and job name, volser, and volser prefix.
When you merge CDSs, you must check for and resolve any duplicate policies. To help
you do this, use the RMM SEARCHVRS TSO subcommand to get a list of all policies. If
you have a policy for the same data set name and job name combination in more than one
CDS, you should check that the retention and movement defined are the same. If they are
the same, you can use the policy from any CDS. If they are not the same, you must
resolve the conflict before merging the CDSs - that way duplicates can be ignored,
regardless of which system the records are taken from.
The most important reason for checking the VRS definitions is to check which VRS policy
will apply to a given data set/jobname combination, and what impact that could have after
the merge. If you use generic VRS definitions (like TOTHSM.**), it is possible that the VRS
definition controlling a given data set/job could change even if you don't have duplicate
policy names, potentially resulting in you keeping fewer generations of that data set/job
than you had intended. (This is similar to the discussion in Chapter 13, "RACF
considerations" on page 189, about data sets being protected by different RACF profiles
after a merge.) You should check all the VRS definitions for potential clashes between
VRS definitions; if you find a clash, check and, if necessary, modify the attributes of the
VRS to meet your requirements.
There is a way to check whether the VRS records in the merged CDS will provide the
results you expect. You can run EDGHSKP with PARM='VRSEL,VERIFY' using the test
CDS that is created by the merge job. Specifying this PARM results in a trial run on
inventory management. The resulting ACTIVITY file will contain a list of all changes that
will be made. By using the EDGJACTP sample job and looking at the summary of changes
in matching VRS, you can see if in fact you would have this problem.
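A minimal sketch of such a trial run (the exact DD requirements depend on your release; MESSAGE is assumed to be required, and ACTIVITY receives the VRSEL change records):
//TRIALRUN EXEC PGM=EDGHSKP,PARM='VRSEL,VERIFY'
//SYSPRINT DD SYSOUT=*
//MESSAGE  DD DSN=hlq.HSKP.MESSAGE,DISP=(,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(1,1),RLSE)
//ACTIVITY DD DSN=hlq.HSKP.ACTIVITY,DISP=(,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(5,5),RLSE)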
Owner record - There is one owner record for each identified user that owns a
DFSMSrmm resource such as a volume or VRS. There can also be owner records for
users that own no resources. Perhaps you have defined these so that the information can
be used within your installation.
The owner record contains detailed information about the user and the count of volumes
owned. For users who own volumes, DFSMSrmm uses multiple owner records to provide
directed retrieval of volume and data set information during search commands. You only
need to copy the Base Owner records. When there are multiple owner records, there is
no need to copy the Owner records that contain just volume information. Once the records
are loaded into the new CDS, your merge job will use the EDGUTIL utility with the
PARM=MEND option to set the correct counts of owned volumes and build the owner
volume records for owned volumes. A sample is shown in Step S004MEND in the
DFSMSrmm Merge job in Example 16-2 on page 255.
Duplicate owner records are not a problem if the duplicate owners are the same user. In
this case, duplicates can be safely ignored, regardless of which system the records are
taken from.
If the duplicate owners are different users, you must decide how to handle that situation.
You may have to assign different user IDs or define a new owner ID and transfer the
owned resources to the new owner ID. Your objective is to have no duplicate owners, or to
have any duplicates be for the same user in each case.
To add a new owner, issue the following command on the RMMplex that the owner is being
changed on:
RMM ADDOWNER new_owner_id DEPARTMENT(MERGE)
To delete the duplicate owner and transfer volume ownership to the new owner ID (after
you have added the new owner ID), use the following command on the same RMMplex as
the ADDOWNER command:
RMM DELETEOWNER old_owner_id NEWOWNER(new_owner_id)
Product record - A product record identifies a single product and software level and the
volumes associated with that product. When you are merging CDSs, the existence of
duplicate product records can be resolved by running EDGUTIL MEND after the merge.
As an alternative to relying on MEND to resolve any duplicates, you can resolve the
duplicates before the merge by choosing one of the product names to change. Firstly, list
all the volumes associated with the product name, using a command similar to the
following:
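A hedged sketch of such a listing subcommand (the LISTPRODUCT operands shown are assumptions; check the subcommand syntax for your release):
RMM LISTPRODUCT oldproduct LEVEL(V01R01M00)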
Then create a new product name using an ADDPRODUCT command, using a command
similar to the following:
RMM ADDPRODUCT newproduct LEVEL(V01R01M00) NAME(.....) DESCRIPTION(.....)
OWNER(....)
Then change all the volumes that were associated with the old product name, to the new
product name using a CHANGEVOLUME command, as follows:
RMM CHANGEVOLUME vvvvvv NUMBER(newproduct) FEATCD(A001) LEVEL(V01R03M00)
Finally you should delete the old product name using a DELETEPRODUCT command, as
follows:
RMM DELETEPRODUCT oldproduct LEVEL(V01R01M00)
If you do not resolve the duplicates before the merge, there will be duplicate product
records during the merge, and all the ones in the second CDS specified on the EXTRACT
DD statement will be discarded during the merge; the discarded records are written to
the file specified on the DUPSMAIN DD statement. Therefore, if you decide to leave some
duplicate records in the CDS so they can be deleted during the merge, you must make
sure that all those records are in the same CDS. After the merge, running EDGUTIL
MEND will clean up the associations between the Product and Volume records.
Rack numbers record - There is a single rack record for each shelf location you have
defined to DFSMSrmm. Rack records are optional when the rack number and the volume
serial number are the same value. There may be one for each volume defined, but there
can be more if you also defined empty rack numbers for use when volumes are added to
the CDS. The rack number represents the volume's external volser.
The DFSMSrmm record for each rack number can begin with one of three keys: E for
empty, F for full, and U for in-use. So, it is possible that you could have duplicate rack
numbers in the two CDSs without the merge job spotting them as duplicates. For example,
rack number 200000 in one DFSMSrmm CDS could be Empty (so the key would be
E200000), and rack number 200000 in the other DFSMSrmm CDS could be Full (so the key
would be F200000). However, the merge job ignores the key when identifying duplicate
records, so duplicate rack numbers will be identified.
When you merge CDSs, you must not have any duplicate rack numbers. Any duplicates
must be deleted prior to the merge. Note that if there are duplicate records, the U records
will be selected ahead of the F records, and the E records will be selected last.
Duplicates will be discarded, regardless of which CDS each record may have resided in
prior to the merge. Any duplicates found at the time of the merge will be written to the file
specified on the DUPSEFU DD statement.
Each rack number must have only one record type: either E, F, or U; you cannot have two
record types for an individual rack number. If the duplicate already contains a volume, you
can reassign the volume to another rack number by using the RMM CHANGEVOLUME
subcommand. There are two different ways of reassigning a volume to a new rack
number:
RMM CHANGEVOLUME PP1234 POOL(A*)
or:
RMM CHANGEVOLUME PP1234 RACK(AP1234)
In the first command, DFSMSrmm picks an empty rack in pool A*. In the second example,
you have selected the empty rack to be used. Remember that the rack number is the
external label on the tape, so changing large numbers of rack numbers is not a trivial
exercise. It helps if you can select a new rack number that is similar to the old one (for
example, we changed rack number PP1234 to rack number AP1234). Before cleaning up
any duplicate rack numbers, refer to the description of the Volume record on page 244 for
an explanation of how to handle duplicate volumes.
If you currently have rack numbers for volumes when the rack number and the volume
serial number are the same value, you could choose to stop using rack numbers before
starting the merge. This would reduce the numbers of records in the CDS. To change a
volume to no longer use a rack number you can use the following commands:
RMM CHANGEVOLUME 100200 NORACK
RMM DELETERACK 100200
Bin numbers record - There is a single bin record for each shelf location in a storage
location that you have defined to DFSMSrmm.
As with rack numbers, each bin number record can potentially have one of two different
keys; in this case, R (empty) and S (in-use). Because the same bin can be represented by
two different keys, the merge jobs may not correctly identify all duplicates. For example,
bin number 000029 could be represented by record R000029 in one CDS and by record
S000029 in the other. However, as with the rack numbers, the sample merge job is set up
to ignore the key when searching for duplicate bin records, so duplicates will be identified
(they are written to the file specified on the DUPSRS DD statement). Note that if there are
duplicate records, the S records will be selected and the R records discarded,
regardless of which CDS each record may have resided in prior to the merge.
When you merge CDSs, you must not have any duplicate bin numbers. Any duplicates
must be addressed before you do the merge. If the duplicate already contains a volume,
you can reassign the volume to another bin number by using the RMM CHANGEVOLUME
subcommand. You can reassign a volume to a new bin number by issuing the following
command:
RMM CHANGEVOLUME A00123 BIN(000029)
You can also remove the volume from the storage location, free up the bin number, and
delete the unused bin number. The following commands show how to delete duplicate bin
number X00022 in location VAULT1 of media name CARTS:
RMM CHANGEVOLUME A00123 LOCATION(HOME) CMOVE
RMM DELETEBIN X00022 MEDIANAME(CARTS) LOCATION(VAULT1)
The next time you run inventory management, the volume is reassigned to the storage
location, and another bin number is assigned.
Note: When you need to reassign bin numbers, you should consider how off-site
movements are managed in your installation. If you depend on the volume being in a
specific slot in a storage location, you should ensure that any volumes you change are
physically moved in the storage location.
3. Review DFSMSrmm Parmlib options
To allow for a simplified backout process, it is recommended that you do not change the
current DFSMSrmm Parmlib member, but rather create a new merged version. On the
actual merge day, you can simply rename the members to pick up the merged definitions.
You should review the IEFSSNxx member, and confirm the DFSMSrmm subsystem name
and definitions are the same on all systems that will be sharing the same DFSMSrmm
CDS.
You should review the EDGRMMxx member for differences, and consolidate the members.
Be aware of any changes you make that require supporting changes to RACF. You should
also investigate whether all systems can share the same EDGRMMxx Parmlib member as
opposed to having a separate member for each DFSMSrmm instance.
The EDGRMMxx options that are affected by the merging of CDSs are:
OPTIONS
LOCDEF
VLPOOL
REJECT
OPTIONS - The options identify the installation options for DFSMSrmm. When you are
merging CDSs, consolidate the OPTIONS command operands from all CDSs you are
merging to ensure that you are using the same OPTIONS command operands on all
DFSMSrmm subsystems.
After you merge DFSMSrmm CDSs, the CDSID of at least one DFSMSrmm subsystem
will probably change. You must update the CDSID parameter in EDGRMMxx to match the
new CDSID; alternatively, if you have created a new DFSMSrmm CDS control record and
have not specified a CDSID in it, DFSMSrmm creates the ID in the control record
automatically from the CDSID specified in EDGRMMxx.
If the DFSMSrmm CDSs are kept in synch with the catalogs on the system, and you have
enabled this using the inventory management CATSYNCH parameter, you should review
the CATSYSID values specified for each system.
If all systems sharing the merged DFSMSrmm CDS will also have access to the same
user catalogs for tape data sets (which should be the case), you can use CATSYSID(*).
However, if the catalogs are not fully shared, you must produce a consolidated list of
systems IDs for use with the CATSYSID and set the correct values in EDGRMMxx for
each system.
Before you use the merged CDS, we recommend that you mark the CDS as not in synch
with the catalogs. Use the EDGUTIL program with UPDATE and the SYSIN statements
with CATSYNCH(NO) to do this. Once you have started DFSMSrmm using the newly
merged CDS, you can re-enable catalog synchronization using inventory management as
detailed in the z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405.
If you use the VRSMIN option to specify a minimum number of VRSs, remember to update
the value if you have a different number as a result of the merge.
LOCDEF - These options identify your locations to DFSMSrmm. When you are merging
CDSs, consolidate the LOCDEF options from all CDSs you are merging to ensure that you
have no duplicates and that no locations are left out. If you have duplicates, ensure that
they are identified as the same location type and with the same media names.
VLPOOL - These options identify the ranges of rack numbers and pools of shelf space
you have in your library. If your library is not changing, we recommend that you do not
change your current VLPOOL options.
If you are merging CDSs and your existing VLPOOL definitions are different for each
existing CDS, you must merge the definitions together to cover the merged sets of
volumes and rack numbers. Ensure that the operands specified on each VLPOOL are
correct when the same pool prefix is defined in more than one source environment.
REJECT - These options identify the ranges of volumes to be rejected for use on the
system. The reject specifies the rack number prefix or, if a volume has no library shelf
location, DFSMSrmm checks the volume serial number. The range of volumes you specify
on the REJECT statement should match the VLPOOL definitions that you have defined in
RMM. For example, if you currently specify that the incoming system can only use
volumes beginning with A*, and the target sysplex systems can only use volumes
beginning with B*, you will need to update the REJECT statement at the time of the merge
to indicate that all those systems can use tapes starting with either A* or B*.
Where you have a tape library shared by multiple systems, define the same VLPOOL
definitions to all systems and use the REJECT definitions to prevent systems from using
volumes that are for use on other systems.
The actions you take for the REJECT definitions should match how you handle the
VLPOOL definitions.
The REJECT options can be used, for volumes defined to DFSMSrmm, to control the use
of volumes on the system. And for undefined volumes, it can be used to control the
partitioning of system managed libraries. Consider both of these aspects as you merge
the REJECT definitions.
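As an illustration, using the volser prefixes from the example above (the ANYUSE operand is an assumption; check the EDGRMMxx REJECT syntax for your release), the pre-merge members might contain:
REJECT ANYUSE(B*)    on the incoming system
REJECT ANYUSE(A*)    on the target sysplex systems
At the merge, you would remove or change these statements so that all systems accept both the A* and B* ranges, rejecting only ranges that genuinely belong to other environments.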
4. Review the DFSMSrmm started task JCL
To allow for a simplified backout process, it is recommended that you do not change the
current DFSMSrmm procedure, but rather create a new merged version. On the actual
merge day, you can simply rename the two procedures to perform the merge.
You should review the DFSMSrmm procedure and investigate whether all systems can
share the same DFSMSrmm Procedure. This should be possible with little or no work as
the only system-specific information that is contained in the DFSMSrmm procedure is the
names of the PDO (EDGPDOx) data sets. PDO data sets are optional, but if you do define
them, they should be unique on each system; we suggest that you include the system
name (&SYSNAME) in the data set name.
The DFSMSrmm CDS and Journal names can be specified in either the DFSMSrmm JCL
or in the Parmlib member EDGRMMxx. For flexibility, we recommend that they are
specified in EDGRMMxx; this allows you to easily change the CDS name by switching to a
different Parmlib member.
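A minimal sketch of the relevant operands (the OPTION operand names DSNAME and JRNLNAME are assumptions to be verified against your release; the data set names are illustrative):
OPTION DSNAME(hlq.Merged.RMM.Master) JRNLNAME(hlq.Merged.RMM.Journal)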
You should also check that the REGION size specified for the DFSMSrmm started
procedure is large enough; review the latest recommendations for this in the z/OS
DFSMSrmm Implementation and Customization Guide, SC26-7405.
5. Review the DFSMSrmm CDS and Journal data set sizes and attributes
To allow for a simplified backout process, we recommend that you create new versions of
the DFSMSrmm CDS and Journal data sets. On the actual merge day, you can simply
rename the old ones, and rename the new ones to be ready to start DFSMSrmm after the
merge.
Before you create the new data sets, calculate the new combined CDS size, and update
the merge job with this information so that the CDS allocated in that job is valid.
At the same time, calculate the new Journal data set size, and create a job to allocate the
data set.
Review the sample allocation jobs (for the DFSMSrmm CDS, member EDGJMFAL in
SYS1.SAMPLIB, and for the DFSMSrmm Journal, member EDGJNLAL in
SYS1.SAMPLIB) to ensure that you are defining the new CDS with the latest information
regarding record size, CISize, Shareoptions, and so on. For more information about these
data sets, refer to z/OS DFSMSrmm Implementation and Customization Guide,
SC26-7405.
6. TCDB Tape Configuration data set
If the systems to be merged contain Volume Entries cataloged inside either a General
Tape Volume Catalog (SYS1.VOLCAT.VGENERAL, for example) or a specific Tape
Volume Catalog (SYS1.VOLCAT.Vx, for example), you need to list all the volume entries
and create the volume entries in the target TCDB with the same attributes as in the current
system. You can use a REPRO MERGECAT to copy entries from the old to the new
volume catalogs if required.
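For example, the volume entries could be listed and copied with IDCAMS along these lines (the catalog names and volser filter are illustrative, and the REPRO operands should be verified against your release):
LISTCAT VOLUMEENTRIES(V*) CATALOG(SYS1.VOLCAT.VGENERAL) ALL
REPRO INDATASET(SYS1.VOLCAT.VGENERAL) -
      OUTDATASET(NEWPLEX.VOLCAT.VGENERAL) -
      VOLUMEENTRIES(V*) MERGECAT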
You will also need to confirm that the SMS ACS routines are allocating tape data sets to
the appropriate Storage Group(s) associated with the Tape Library(s).
Review the DEVSUPxx member in SYS1.PARMLIB if the TCDB is being merged and
make sure that all systems are entering the tapes with the same category number.
If the category number changes for any systems, there is no need to make any changes
for private volumes since when the volume returns to scratch, its category will be changed
to one of the new scratch categories. However, you will need to make changes for volumes
already in Scratch status. There is more information about this in the section entitled
Processing Default Categories when using DEVSUPxx in an ATLDS in z/OS DFSMS
Object Access Method Planning, Installation, and Storage Administration Guide for Tape
Libraries, SC35-0427.
You will need to obtain a list of volumes whose storage group name is *SCRTCH* using
ISMF Option 2.3 Mountable Tape. Use the ISMF ALTER command (not the line operator)
to change the volume use attribute for all volumes in the list from scratch to scratch; this
causes the library manager category for each volume to be changed to the new value
established through DEVSUPxx.
7. Review any housekeeping jobs
Review these jobs, confirming that data set names are valid for the new merged
environment. It should be possible to eliminate any DFSMSrmm jobs that were running on
the incoming system previously, except for any report jobs that were not running on the
target sysplex. If you have switched on the catalog synchronization feature, you must
check that the CATSYSID definitions are correct and that the inventory management vital
record processing is running on the correct system. Also, the inventory management
expiration processing must be checked and running on multiple systems if not all user
catalogs are shared (hopefully, in a PlatinumPlex, all catalogs are shared between all
systems).
8. Security considerations
Chapter 8 in the z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
discusses security for DFSMSrmm. If the systems to be merged are also merging the
RACF environment at the same time, and you have set up the Parmlib definitions the
same on all systems, there should be no additional security concerns. If TAPEVOL and/or
TAPEDSN are active, make sure all EDGRMMxx definitions for TPRACF are the same. If
you have not switched on TAPEVOL and/or TAPEDSN on all systems before you have
merged the DFSMSrmm CDS, but now you will use them on all systems sharing the
DFSMSrmm CDS, you must define all needed resources before the merged DFSMSrmm
is started.
9. Review DFSMSrmm exits
You will need to determine what DFSMSrmm exits are in use (for example, EDGUX100,
EDGUX200, CBRUXCUA, CBRUXENT, CBRUXEJC and CBRUXVNL). You should review
the functions of these exits on the incoming system and the target sysplex systems. If they
are not the same, you should see if you can create a single version of each exit so that all
systems will be using identical exits, thus ensuring consistent processing.
An alternative approach is to create DFSMSrmm ADD... commands for all volumes, data sets,
racks, bins, and vital record specifications being merged. This approach can be used to avoid
any outage to DFSMSrmm. However, care must be taken to ensure that the appropriate
security options are already in place; for example, TAPEVOL/TAPEDSN (TVTOC). As there
are no tools available to generate these commands, you would need to create them yourself.
Step S001ICET
The MAINCNTL DD statement contains the ICETOOL statements that will be used to sort
and select most record types (C, K, P, and V). The associated SELECT statement in the
TOOLIN DD specifies that duplicates should not be written to the output file for these
record types (MAIN1S), but should instead be written to DUPSMAIN for you to investigate
and resolve.
The EFUXCNTL DD statement contains the ICETOOL statements that will be used to sort
and select record types E, F, and U. They also ensure that the output file (EFU1S) will
have only one record type for each rack. If the same rack number is discovered in more
than one record type, records will be selected with the U types first, followed by the F
types, and finally the E types. Duplicate records will be written to DUPSEFU for you to
investigate and resolve.
The RSXXCNTL DD statement contains the ICETOOL statements that will be used to sort
and select record types R and S into the data set specified on the RS1S DD statement.
They also make sure you have only one record type for each bin. If the same bin number is
discovered in more than one record type, the S record will be selected, and the R
record will be written to DUPSRS for you to investigate and resolve.
The OOOOCNTL DD statement contains the ICETOOL statements that will be used to
sort and select the type O records into the data set specified on the OO1S DD statement
and to make sure you have only one record for each owner. It will also discard type O
records that just contain volume information (refer to the owner record description on page
246).
Finally, the DDDDCNTL DD statement contains the ICETOOL statements that will be used
to sort and select all the data set records. If you have addressed any duplicate volumes,
there should be no issues with copying all data set records, as the key includes the volser
and the physical file number; therefore, we have not provided a DUPSD DD statement as
a destination for duplicate records.
Step S002ICET
This step also uses the ICETOOL utility of DFSORT. The input data sets are the
temporary data sets created by S001ICET that contain the (non-duplicate) CDS records.
An output data set (DD MERGED) will be created in the same format as the input data
sets for S001ICET.
Step S003AMS
This step uses IDCAMS to define a new DFSMSrmm CDS, and to REPRO the sequential file
created in S002ICET (DD MERGED) into the new CDS; this is the CDS that will be used
in the new merged RMMplex.
Step S004MEND
This step uses the EDGUTIL utility to repair all the record counts, set correct owners, and
create missing records, as described previously. Running EDGUTIL with the MEND
parameter only updates the CDS referred to on the MASTER DD card, so you can safely
run this utility as many times as you like prior to the actual merge, as long as you ensure
that the MASTER DD points at your test CDS.
Note that if any of the volumes defined in the newly merged DFSMSrmm CDS are
contained within an ATL or VTS, then the TCDB will need to already contain correct
entries for these volumes; that is, SYS1.VOLCAT.VGENERAL will need to already have
these volumes defined before the MEND can complete successfully. If all the volumes are
inside an ATL or VTS, you can use the EDGUTIL MEND(SMSTAPE) function to
add/correct the volumes in the TCDB. Note that if you specify SMSTAPE, the TCDB will
be updated, so you should not use this option until you do the actual merge.
Step S005MEND
Note that, if there are many records to be fixed, there can be quite a lot of output from
EDGUTIL. In order to get a clear picture of which records the MEND was unable to fix (and
there should be very few of these), you should run the EDGUTIL program again, this time
with the VERIFY parameter. This will identify any actions that MEND was not able to take,
and therefore the records you need to consider taking action for. The messages you are
most likely to see are "VOLUME xxxxxx NOT FOUND IN VOLUME CATALOG", and these
will be fixed when you run MEND with the SMSTAPE option.
6. Run EDGUTIL with UPDATE to turn off CATSYNCH. It is necessary to turn off catalog
synchronization before the merge is complete because the catalogs will need to be
re-synchronized. Use the sample job in Example 16-3 on page 258.
7. Perform any renames of CDSs, Journals, Parmlib members, started task JCL and so on.
8. Start the DFSMSrmm procedure, using the new Parmlib member.
9. Check that the merged information is valid and correct by using DFSMSrmm commands
and the ISPF dialog to retrieve and display information.
10.Run EDGHSKP with CATSYNCH if catalog synchronization is to be maintained by
DFSMSrmm. We recommend doing this for the performance benefits it can have for
inventory management when use is made of policies on catalog retention. This should be
done before the first housekeeping job (EDGHSKP) is run after the merge.
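A minimal sketch of that re-synchronization run (DD requirements beyond MESSAGE depend on your setup):
//CATSYNC  EXEC PGM=EDGHSKP,PARM='CATSYNCH'
//SYSPRINT DD SYSOUT=*
//MESSAGE  DD DSN=hlq.CATSYNCH.MESSAGE,DISP=(,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(1,1),RLSE)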
11.If re-synchronization between the new merged DFSMSrmm CDS and TCDB is required,
the DFSMSrmm EDGUTIL program should be run with a parm of VERIFY(VOLCAT). In
addition, if the systems in the RMMplex are all running OS/390 2.10 or later, you should
also specify VERIFY(SMSTAPE).
12.Run DFSMSrmm inventory management on the new CDS. During this run, DFSMSrmm
VRSEL and DSTORE processing identifies volumes to be moved to the correct location. If
any moves are identified, use EDGRPTD to produce the picking lists, and action any
moves that you know are still required.
The inventory management run should execute all inventory management functions,
including backup.
16.3.2 Backout
If backout is required at any stage, because you created new versions of the CDS, Journal,
Parmlib member and started task JCL, you can simply rename the old data sets back to the
original names. There should be no problems as a result of the additional Volume Entries you
may have created in the TCDB; however, they will need to be cleaned up before re-attempting
the merge process.
16.4.1 Tools
Example 16-1 contains sample JCL that can be used to back up the DFSMSrmm CDSs and
Journal data sets after the DFSMSrmm subsystems have been stopped. Note that this job will
not reset the contents of the Journal data sets. Even though we will not be making any
changes to any of these data sets during the merge process, it is prudent to take a backup of
them before any changes are made, simply to protect yourself from any mistakes that might
arise during the merge.
You can also use this JCL to create the input to the merge job for dummy runs, in order to
identify any duplicate records that may exist. Note, however, that if you run this JCL while the
DFSMSrmm subsystems are running, then you must comment out the //MASTER and
//JOURNAL DD cards. If DFSMSrmm is not running when you run the EDGBKUP job, you will
need to uncomment the MASTER and JOURNAL DD statements and fill in the correct names
for those data sets.
Note: During the dummy runs, while you are testing the merge jobs and identifying
duplicate records, the jobs can be run while the RMM subsystem is up and running.
However, when you are ready to do the actual merge, the RMM subsystem should be
stopped before the job is run.
Example 16-1 JCL to Back up the DFSMSrmm CDSs and Journals
//S001BKUP EXEC PGM=EDGBKUP,PARM='BACKUP(NREORG)'
//SYSPRINT DD SYSOUT=*
//*MASTER  DD DISP=SHR,DSN=RMMPLEX1.RMM.CDS
//*JOURNAL DD DISP=SHR,DSN=RMMPLEX1.RMM.JOURNAL
//BACKUP   DD DSN=hlq.RMMPLEX1.CDS.BACKUP,DISP=(,CATLG,DELETE),
//            DCB=(DSORG=PS,RECFM=VB,LRECL=9216),
//            UNIT=SYSALLDA,SPACE=(CYL,(xxx,xx),RLSE)
//JRNLBKUP DD DSN=hlq.RMMPLEX1.JRNL.BACKUP,DISP=(,CATLG,DELETE),
//            DCB=(DSORG=PS,RECFM=VB,LRECL=32756,BLKSIZE=32760),
//            UNIT=SYSALLDA,SPACE=(CYL,(xxx,xx),RLSE)
//S002BKUP EXEC PGM=EDGBKUP,PARM='BACKUP(NREORG)'
//SYSPRINT DD SYSOUT=*
//*MASTER  DD DISP=SHR,DSN=RMMPLEX2.RMM.CDS
//*JOURNAL DD DISP=SHR,DSN=RMMPLEX2.RMM.JOURNAL
//BACKUP   DD DSN=hlq.RMMPLEX2.CDS.BACKUP,DISP=(,CATLG,DELETE),
//            DCB=(DSORG=PS,RECFM=VB,LRECL=9216),
//            UNIT=SYSALLDA,SPACE=(CYL,(xxx,xx),RLSE)
//JRNLBKUP DD DSN=hlq.RMMPLEX2.JRNL.BACKUP,DISP=(,CATLG,DELETE),
//            DCB=(DSORG=PS,RECFM=VB,LRECL=32756,BLKSIZE=32760),
//            UNIT=SYSALLDA,SPACE=(CYL,(xxx,xx),RLSE)
Example 16-2 contains the merge job that we have referred to and discussed earlier in this
chapter. This job can also be used to identify duplicate records during the preparation for the
merge.
Example 16-2 DFSMSrmm Merge Job
//CLEANUP  EXEC PGM=IEFBR14
//DD1      DD DSN=hlq.MERGED.RMM.MASTER.SEQ,DISP=(MOD,DELETE),
//            SPACE=(CYL,0),UNIT=SYSDA
//DD2      DD DSN=hlq.MERGED.RMM.MASTER,DISP=(MOD,DELETE),
//            SPACE=(CYL,0),UNIT=SYSDA
/*
//S001ICET EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//DFSPARM  DD *
  VLSHRT
/*
//DUPSMAIN DD SYSOUT=*
//DUPSEFU  DD SYSOUT=*
//DUPSRS   DD SYSOUT=*
//DUPSO    DD SYSOUT=*
//EXTRACT  DD DISP=SHR,DSN=hlq.RMMPLEX1.CDS.BACKUP
//         DD DISP=SHR,DSN=hlq.RMMPLEX2.CDS.BACKUP
//MAIN     DD DSN=&&MAIN,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//EFU      DD DSN=&&EFU,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//RS       DD DSN=&&RS,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//OOOO     DD DSN=&&OOOO,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//DDDD     DD DSN=&&DDDD,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//MAIN1S   DD DSN=&&MAIN1S,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//EFU1S    DD DSN=&&EFU1S,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//RS1S     DD DSN=&&RS1S,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//OOO1S    DD DSN=&&OOO1S,REFDD=*.EXTRACT,SPACE=(CYL,(1,10),RLSE),
//            UNIT=3390,DISP=(,PASS)
//TOOLIN   DD *
  SORT FROM(EXTRACT) TO(MAIN) USING(MAIN)
  SELECT FROM(MAIN) TO(MAIN1S) ON(5,44,CH) FIRST DISCARD(DUPSMAIN)
  SORT FROM(EXTRACT) TO(EFU) USING(EFUX)
  SELECT FROM(EFU) TO(EFU1S) ON(15,6,CH) FIRST DISCARD(DUPSEFU)
  SORT FROM(EXTRACT) TO(RS) USING(RSXX)
  SELECT FROM(RS) TO(RS1S) ON(6,23,CH) FIRST DISCARD(DUPSRS)
  SORT FROM(EXTRACT) TO(OOOO) USING(OOOO)
  SELECT FROM(OOOO) TO(OOO1S) ON(5,44,CH) FIRST DISCARD(DUPSO)
  SORT FROM(EXTRACT) TO(DDDD) USING(DDDD)
//MAINCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(5,44,CH,A)
  INCLUDE COND=(5,2,CH,NE,C'CA',AND,5,2,CH,NE,C'CM',AND,
                5,1,CH,NE,C'O',AND,
                5,1,CH,NE,C'D',AND,
                5,1,CH,NE,C'R',AND,5,1,CH,NE,C'S',AND,
                5,1,CH,NE,C'E',AND,5,1,CH,NE,C'F',AND,5,1,CH,NE,C'U')
/*
//EFUXCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(5,1,CH,D,15,6,CH,A)
  INCLUDE COND=(5,1,CH,EQ,C'E',OR,5,1,CH,EQ,C'F',OR,5,1,CH,EQ,C'U')
/*
//RSXXCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(6,23,CH,A,5,1,CH,D)
  INCLUDE COND=(5,1,CH,EQ,C'R',OR,5,1,CH,EQ,C'S')
/*
//OOOOCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(5,1,CH,A)
  INCLUDE COND=(5,1,CH,EQ,C'O',AND,14,1,CH,EQ,X'00')
/*
//DDDDCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(5,1,CH,A)
  INCLUDE COND=(5,1,CH,EQ,C'D')
/*
//S002ICET EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//DFSPARM  DD *
  VLSHRT
/*
//EXTRACT  DD DISP=SHR,DSN=&&MAIN1S
//         DD DISP=SHR,DSN=&&EFU1S
//         DD DISP=SHR,DSN=&&RS1S
//         DD DISP=SHR,DSN=&&OOO1S
//         DD DISP=SHR,DSN=&&DDDD
//MERGED   DD DSN=hlq.Merged.RMM.Master.Seq,
//            REFDD=*.EXTRACT,SPACE=(CYL,(xxx,yy),RLSE),
//            UNIT=SYSALLDA,DISP=(,CATLG)
//TOOLIN   DD *
  SORT FROM(EXTRACT) TO(MERGED) USING(MOST)
//MOSTCNTL DD *
  OPTION EQUALS
  SORT FIELDS=(5,56,CH,A)
/*
//S003AMS  EXEC PGM=IDCAMS,REGION=0M
//SYSPRINT DD SYSOUT=*,OUTLIM=1000000
//RMM      DD DISP=SHR,DSN=hlq.Merged.RMM.Master.Seq
//SYSIN    DD *
  DEFINE CLUSTER(NAME(hlq.Merged.RMM.Master) -
         FREESPACE(20 20) KEYS(56 0) REUSE -
         RECSZ(512 9216) SHR(3 3) -
         CYLINDERS(xxx yy) VOLUMES(vvvvvv)) -
         DATA(NAME(hlq.Merged.RMM.Master.DATA) CISZ(10240)) -
         INDEX(NAME(hlq.Merged.RMM.Master.INDEX) CISZ(2048) -
         NOIMBED NOREPLICATE)
  REPRO IFILE(RMM) ODS(hlq.Merged.RMM.Master)
/*
//*--------------------------------------------------------------------
//S004MEND EXEC PGM=EDGUTIL,PARM='MEND'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=hlq.Merged.RMM.Master
//SYSIN    DD DUMMY
//*--------------------------------------------------------------------
//S005MEND EXEC PGM=EDGUTIL,PARM='VERIFY'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=hlq.Merged.RMM.Master
//SYSIN    DD DUMMY
Example 16-3 contains sample JCL that can be used to turn the CATSYNCH feature off after
you have merged the CDSs and before you start the DFSMSrmm subsystems using the new
CDS.
Example 16-3 CATSYNCH Job
//EDGUTIL EXEC PGM=EDGUTIL,PARM='UPDATE'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=hlq.Merged.RMM.Master
//SYSIN    DD *
CONTROL CDSID(Merged) CATSYNCH(NO)
/*
16.4.2 Documentation
Finally, the following documents contain information that will be helpful as you proceed
through this project:
z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS DFSMSrmm Reporting, SC26-7406
Converting to RMM - A Practical Guide, SG24-4998
DFSMSrmm Primer, SG24-5983
17
Chapter 17.
OPC considerations
Nearly all OS/390 and z/OS customers use some batch scheduling product to control their
production batch work. IBM's offering in this area is called Operations Planning and Control
(OPC). This chapter discusses the following aspects of moving a system that has its own
OPC controller into a sysplex that is currently using its own OPC controller:
Is it necessary to merge OPC environments, or is it possible to have more than one OPC
environment in a single sysplex?
A checklist of things to consider should you decide to merge the OPCs.
A methodology for doing the merge.
A list of tools and documentation that can assist with this exercise.
While some aspects of the contents of this chapter may also apply to other scheduling
products, this chapter was written specifically for OPC, by specialists with extensive OPC
experience, and therefore cannot be considered a generic guide for merging job scheduling
products.
Note: OPC has been superseded by a product called Tivoli Workload Scheduler (TWS)
V8. However, the discussion and examples provided in this chapter apply equally to OPC
and TWS, so we use the term OPC to refer to both OPC and TWS.
If your target environment is a BronzePlex, the incoming system will not share any user DASD
with the systems in the target sysplex, and it will also not be in the same MAS as any of those
systems. Therefore, the OPC controller will send the job JCL to the tracker to submit. So, if
you want to merge OPCs, there should not be a problem as long as you are able to store the
JCL for the incoming system's OPC-controlled jobs on a volume that is accessible to the
target sysplex systems (the incoming system does not need access to the JCL library, so the
volume does not necessarily need to be shared between systems).
If you do decide not to merge the OPCs, there should be no reason you cannot move to the
BronzePlex environment using two completely independent OPC subsystems. If you decide
to go with this configuration, you will need to ensure that the XCF group and member names
specified for OPC on the incoming system are different from the names used by OPC in the
target sysplex.
If your target environment is a GoldPlex, the incoming system would again not be sharing
user DASD with the other systems in the target sysplex, and would also normally not be in the
same MAS as those systems. Once again, as long as you can make the JCL for the incoming
system's jobs accessible to the OPC controller, there should not be a problem if you wish to merge
OPCs.
Finally, if your target environment is a PlatinumPlex, the incoming system will be sharing the
user DASD with the other systems in the target sysplex. In this case, there is no impediment
to merging OPC, at least from an accessibility point of view.
Regardless of which target environment you select, in most cases there is no obvious good
technical reason not to merge the OPCplexes. The manageability benefits are significant, and
you do not lose any flexibility by having a single OPCplex.
Review the database in OPC in the incoming system and the target sysplex, looking for
definitions with duplicate names.
Unload the database definitions to a flat file using a sample exec that we provide.
Manually edit the flat file to remove any duplicate definitions.
In addition, if you want to change some of the definitions, you could make the change by
editing the flat file using information about the record layouts that we provide in Record
Layout of REPORT1 File on page 289. If you use this approach, the changes will not
affect the OPC-managed work in the incoming system until you merge the OPC
controllers.
An alternative is to make any changes you need in OPC on the incoming system. The benefit
of this approach is that you can see the impact of the change prior to the merge. An
additional benefit is that it avoids having to edit the records in the flat file, an exercise that
can be error-prone for records with complicated layouts.
After you have made any required changes to the flat file, load the records into the target
database using another sample exec we have provided. At this point, the target database
should contain all the required definitions from both sets of databases.
The databases that can be merged in advance of the final merge are:
WS - Work Station
CL - Calendars
PR - Periods
SR - Special Resources
JCLVAR - JCL Variables
This is because the definitions in these databases are not used until the Application
Description (AD) database has been merged. The data in these databases is only used by
the definitions in the AD database.
The final merge is when the target OPCplex takes over scheduling and running the work that
was previously controlled by OPC on the incoming system.
The final merge would consist of:
Merging the Application Description (AD) database.
Merging the Operator Instructions (OI) database.
Merging the Event Triggered Tracking (ETT) database.
Adding Special Resources from the Current Plan on the incoming system to the target
OPCplex.
Stopping and restarting the tracker on the incoming system using the new parameters
which will connect this tracker to the target OPCplex controller via an XCF connection.
Stopping and restarting the controller on the target OPCplex using the additional
parameters to connect to the incoming system's tracker via an XCF connection.
Running a Long Term Plan (LTP) Modify All batch job to update the LTP with the work from
the incoming system.
Extending the Current Plan (CP) to include the work from the incoming system.
We will look at the merge from two aspects:
1. Merging OPC system-related definitions. This consists of:
the OPC exits, the initialization statements and optional functions, and the XCF
connections between the trackers and the controller.
The OPC user exits and the functions they provide are:
Exit        Function
EQQUX000    OPC start/stop
EQQUX001    Job submit
EQQUX002    Job-library read
EQQUX003    AD feedback
EQQUX004    Event filtering
EQQUX005    JCC SYSOUT archiving
EQQUX006    JCC incident-record create
EQQUX007    Operation status change
EQQUX008    Pre-catalog management
EQQUX009    Operation initiation
EQQUX010    Job-log retrieval
EQQUX011    Job-tracking log write
There are also a number of other exits that are called during various stages of OPC
processing; these must also be considered before merging the OPCplexes. They include
three user-defined exits and the EQQDPUE1 exit.
All of these exits are documented in Tivoli Workload Scheduler for z/OS Customization and
Tuning, SH19-4544.
Event Triggered Tracking (ETT)
JTOPTS - ETT
Data Store
OPCOPTS - DSTTASK
DSTOPTS - All parameters
DSTUTIL - All parameters
FLOPTS - All parameters
History Function
OPCOPTS - DB2SYSTEM
Resource Object Data Manager (RODM)
OPCOPTS - RODMTASK, RODMPARM
RODMOPTS - All parameters
If these functions are already active in one or both of the OPCplexes, then the parameters will
need to be reviewed during the merging of the initialization parameters. First, you should
check if the same functions are being used in the target OPCplex. If not, then these
initialization statements will need to be added to that system.
If the functions are in use by both OPCplexes, you will need to check the target OPCplex's
initialization statements to ensure they are compatible with those from the incoming system.
Tivoli OPC Customization and Tuning, SH19-4380, contains a description of all these
parameters and can be used to determine how to code these initialization statements, or, if
they are already in use, to determine the meaning of each statement.
Initialization statements
We have now addressed nearly all the OPC initialization statements that are likely to be
affected by the merge. However, before proceeding, all the remaining OPC initialization
statements in both environments should be compared to ensure there are no incompatible
differences.
Each tracker connects to an XCF group and identifies itself to XCF using a member name.
The group name must be the same for all trackers and controllers in the OPCplex. The tracker
member name must be unique in the OPCplex. The XCF groupname can be found in the
initialization parameters of the controllers and trackers on the XCFOPTS statement under the
GROUP parameter. The member name used for each tracker or controller is the name
specified on the MEMBER parameter of the XCFOPTS statement when joining the XCF
group. For example:
XCFOPTS GROUP(OPCPLEX) MEMBER(&SYSNAME.OPCC)
This statement could be used by the controller and standby controllers (the last character in
the member name indicating that this is a controller).
The following statement could be used by all the trackers:
XCFOPTS GROUP(OPCPLEX) MEMBER(&SYSNAME.OPCT)
A good naming convention for the member name is to use the system name variable
&SYSNAME (or the SMF ID, if the SYSNAME is longer than 4 characters) concatenated to
the name of the started task. This structure allows you to share the OPC parmlib between all
systems in the OPCplex.
These statements define each tracker and controller to XCF. Next, you need to define a path
from the controller to each tracker, and from each tracker to the controller. A controller uses
the XCF parameter of the ROUTEOPTS initialization statement to connect to each tracker.
The name identified on the XCF parameter is the XCF member name that each tracker has
specified as their member name on their XCFOPTS initialization statement. For example:
ROUTEOPTS XCF(SYS1OPCT,SYS2OPCT,SYS3OPCT)
This allows the controller to connect to three trackers; one on SYS1, one on SYS2, and one
on SYS3. The above statement specifies that XCF is to be used to communicate from the
controller to each of these three trackers.
Then you need to define a path from the tracker to the controller. This is done using the
trackers HOSTCON parameter of the TRROPTS initialization statement. All you need to
define here is that the connection to the controller is an XCF connection. For example:
TRROPTS HOSTCON(XCF)
This is all that is required to connect the trackers and controllers that reside in the same
sysplex.
Important: Obviously, you would not activate these changes until the point at which you
are ready to move to a single controller. However, the members could be set up in
advance, and simply renamed at the time of the OPC controller merge.
OPC performance
OPC is capable of running over 100,000 jobs a day, so performance in most situations should
not be an issue. If you are concerned about the impact on performance, review the IBM
Redbook Maximizing Your OPC/ESA Throughput, SG24-2130, for tuning recommendations.
Database                         Unload exec              Reload exec             Merge necessary?  Data set
Workstations (WS)                UNLOAD                   RELOAD                  Yes               EQQWSDS
Calendar (CL)                    UNLOAD                   RELOAD                  Yes               EQQWSDS
Special Resources (SR)           UNLOAD and SRCPUNLD      RELOAD and SRSTAT job   Yes               EQQRDDS, EQQCPxDS
Operator Instructions (OI)       Use BCIT                 Use BATCHL              Yes               EQQOIDS
Current Plan (CP)                N/A                      N/A                     No                EQQCPxDS
Long Term Plan (LTP)             N/A                      N/A                     No                EQQLTDS
Period (PR)                      UNLOAD                   RELOAD                  Yes               EQQWSDS
Event Triggered Tracking (ETT)   UNLOAD                   RELOAD                  Yes               EQQSIDS
JCL Variables (JCLVAR)           JCLCLREC, then UNLOAD    RELOAD                  Yes               EQQADDS
Application Description (AD)     Use BCIT                 Use BATCHL              Yes               EQQADDS
We start by checking the definitions in the workstation, calendar, period, and special
resources databases. The reason we address these databases first is because the definitions
in these databases are referenced by entries in the application description database, and by
the OPSTAT, SRSTAT, and WSSTAT TSO commands. If the names of any of the entries in the
workstation, calendar, period, or special resource databases need to change, then any
references to these names in the application description database or in batch jobs will also
need to change.
As we will see, this should not cause too many problems as the process used to merge the
application description database allows the use of TSO edit commands to change all
references to each name with one command. Where batch jobs use OPSTAT, SRSTAT, or
WSSTAT commands, edit macros can be used to make global changes to the affected jobs.
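As a minimal sketch of such an edit macro (the macro is illustrative only; the old and new names shown would be replaced by your own renamed entries), a REXX ISPF edit macro could look like this:
/* REXX - illustrative edit macro to apply a global rename */
Address ISREDIT "MACRO"                /* establish the edit macro environment */
/* change every reference to the renamed special resource  */
Address ISREDIT "CHANGE 'TAPE.DRIVES' 'TAPE.DRIVES.PLEX1' ALL"
Address ISREDIT "END"                  /* end the edit session                 */
The macro would then be run against each affected batch job member, so that the same change is applied consistently everywhere.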
Tip: Where there are duplicate records and we suggest editing the file created by the
UNLOAD exec, you should only ever change the NAME of the record. Do not try to edit any
of the definitions in the unloaded file, as this would be prone to error. If you wish to change
the definitions, do this using the OPC dialog before unloading the databases.
The first step is to look through the workstation definitions in both OPCplexes and determine
if there are any duplicate workstation names. This can be done using the standard OPC ISPF
front-end, or by using the Workstation Description Report that you can generate through the
OPC panels, or through a printout that can be obtained using the Batch Command Interface
Tool (BCIT).
Figure 17-1 on page 272 contains an extract from a sample Workstation Description Report.
[Report extract: workstation CPU1, type Computer Automatic; the report shows the defaults (transport time and duration), the SYSPRINT and control-on-parallel-servers settings, the Resource 1 and Resource 2 names with their planning/control usage, and the open intervals (day/date and open time); printed by user U104208 on 03/10/02 at 16.46.]
Figure 17-1 Sample Workstation report generated from OPC ISPF panels
Figure 17-2 contains an extract from a sample workstation report from BCIT.
[Report extract: workstation CPU1, Computer Automatic; the BCIT report lists fields such as WORKSTATION ID, DESCRIPTION, INTERVALS NUMBER, WORKSTATION TYPE, REPORTING ATTRIBUTE, CONTROL ON PARALLEL SERVERS, R1 NAME, R1 USED AT CONTROL, R2 NAME, R2 USED AT CONTROL, SETUP ABILITY, RECORD VERSION NUMBER, STARTED TASK OPTION, WTO MESSAGE OPTION, DEFAULT DURATION in SEC*100, and FAULT-TOLERANT WS.]
Figure 17-2 Sample workstation report generated by the BCIT
If there are any duplicate workstation names, you need to determine if the workstation
characteristics are the same. If there are any workstations with the same name, but different
characteristics, then the names of those workstations will need to be changed before merging
them into the target sysplex OPCplex database (unless, of course, you can bring the
definitions into line without impacting any of the work).
You can change the workstation name by editing the output file produced from the sample
OPC PIF exec called UNLOAD before reloading the data with the sample OPC PIF exec
called RELOAD.
Note: Keep track of any workstations you have renamed, and the new names that you
used; you will need this information later on in the merge.
If all the workstations are defined identically, there is no need to reload the definitions; just use
the existing workstation definitions.
If there are workstations that are only defined in the incoming system OPCplex, then those
will need to be added to the target sysplex OPCplex. To do a selective merge, the output file
from the UNLOAD sample exec must be edited to remove any workstations which do not
need to be merged before running the RELOAD exec.
The RELOAD exec loads all the WS database records from the WS file into the target
OPCplex. This exec should finish with a zero return code. If it ends with a return code of
16, the problem is probably that the input file contains a duplicate record. In this case, you
should see a message similar to the following:
EQQY724E INSERT OF RECORD WITH KEY xxxxxx
EQQY724I NOT DONE, EXISTS. RESOURCE IS WS.
The text after the word KEY in the first message identifies the duplicate record.
This completes the merge process for the WS database.
The UNLOAD exec determines the LRECL of each database record as it runs, and writes
records of the correct length to the CL file. There are four steps to merge the CL database:
1. Allocate the CL output file.
This file can either be allocated in the batch job which runs the UNLOAD exec, or by you
using ISPF. The LRECL is 32000 and the RECFM is FB. It is placed on the CL DD
statement in the BATCH2 job.
2. Unload the CL database using the UNLOAD exec.
Use the sample job BATCH2 to run the UNLOAD exec. Change the SYSEXEC and
EQQMLIB DD statements to point to your data sets. The UNLOAD exec is identified on the
PARM on the EXEC card as PARM=UNLOAD. The SYSTSIN DD * statement should contain xxxx CL nnn*, where xxxx is the name of the incoming system's OPC subsystem, and nnn* is the name of the calendar record(s) that you want to unload (see Program Interface samples on page 287 for more information about this parameter, and the example at the end of this section).
The UNLOAD exec unloads all the CL database records to the CL output file. Each record
is written to one line of the file. This exec should finish with a zero return code.
3. Edit the CL output file, as required.
Change the names of any calendars as required and delete all duplicate records and any
records which you do not want to load into the target OPCplex. See 17.4, Tools and
documentation on page 286 for an explanation of the record layout of each database
record in the CL output file.
4. Load the CL database definitions into the target OPCplex using the RELOAD exec.
Use the sample job BATCH2 to run the RELOAD exec. Change the SYSEXEC and
EQQMLIB DD statements to point to your data sets. The RELOAD exec is identified on the
PARM on the EXEC card as PARM=RELOAD. The SYSTSIN DD * statement should
contain yyyy CL, where yyyy is the name of the target sysplex OPC subsystem.
The RELOAD exec loads the CL database records to the target OPCplex. This exec
should finish with a zero return code. If it ends with a return code of 16, the problem is
probably that the input file contains a duplicate record. In this case, you should see a
message similar to:
EQQY724E INSERT OF RECORD WITH KEY xxxxxx
EQQY724I NOT DONE, EXISTS. RESOURCE IS CL.
The text after the word KEY in the first message identifies the duplicate record.
This completes the merge process for the CL database.
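To illustrate the SYSTSIN input used in steps 2 and 4: if the incoming system's OPC subsystem were named OPCA and the target sysplex controller subsystem were named TWSC (both names are purely illustrative), the unload run of BATCH2 would specify PARM=UNLOAD with a SYSTSIN statement containing:
OPCA CL *
while the reload run would specify PARM=RELOAD with a SYSTSIN statement containing:
TWSC CL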
Note: Keep track of any calendars you have renamed, and the new names that you
used; you will need this information later on in the merge.
If the period definitions are the same, there is no need to rename or redefine them; just
continue using the periods already defined in the target sysplex OPCplex period database.
However, if there are periods in the incoming system OPCplex that are not defined in the
target sysplex OPCplex, you must add those definitions.
Once you have identified what changes need to be made, and which definitions need to be
merged, you are ready to proceed. Once again, the merge process is similar to the WS
database: namely, unload the database using the sample UNLOAD exec, make any changes
that may be required in the resulting flat file, and then run the RELOAD exec to load the
definitions into the target sysplex OPC PR database.
The text after the word KEY in the first message identifies the duplicate record.
This completes the merge process for the PR database.
Note: Some Special Resources can be merged in advance of the merge of the OPC
subsystems. This section must be used in conjunction with Special Resources in the
Current Plan on page 279.
Special resources are typically defined to represent physical or logical objects used by jobs.
For example, a special resource can be used to serialize access to a data set, or to limit the
number of file transfers on a particular network link. The resource does not have to represent
a physical object in your configuration, although it often does.
When preparing to merge the SR databases, you need to check if there are any SRs with the
same name but different characteristics; for example, there may be a different quantity of the
resource defined in one database than in the other. If this is the case, you will either need to
change the name of the special resource, or bring the two definitions into line (which might
not be easy, or even possible, prior to the merge of the OPC subsystems). You can easily
change the name of the resource by editing the output file from the UNLOAD exec.
Note: If you need to change the name of any SRs, you must keep a record of the old and new
names; you will need this information later on.
Tip: An SR can be used to represent physical system resources such as cartridge or tape
drives. If you use this type of SR, it is likely that the SR name used to represent the drives
on the incoming system is the same as the name used on the target system. In this case, it
may not be possible to merge these SRs until the point at which the OPC subsystems are
being merged.
For example, if you have an SR that represents the number of tape drives available to the
OPCplex, and there are 10 drives in the target sysplex and 5 in the incoming system, you
cannot change that definition to, for example, 15 until you merge the OPCs.
If you have this type of SR, you should delete them from the flat file produced by the
UNLOAD exec, and make a note that you need to go back and address them manually at
the time of the merge.
If all the SRs are identical (which is unlikely), there is no need to rename or redefine them, or
even to run the merge job; you can just use your existing SRs in the target sysplex OPCplex.
The UNLOAD exec determines the LRECL of each database record as it runs and writes
records of the correct length to the SR file. There are four steps to merge the SR databases,
as follows:
1. Allocate the SR output file.
This file can either be allocated in the batch job which runs the UNLOAD exec, or by you
using ISPF. The LRECL is 32000 and the RECFM is FB. It is placed on the DD statement
with a DDNAME of SR in the BATCH2 job.
2. Unload the SR database using the UNLOAD exec.
Use the sample job BATCH2 to run the UNLOAD exec. Change the SYSEXEC and
EQQMLIB DD statements to point to your data sets. The UNLOAD exec is identified on the
PARM on the EXEC card as PARM=UNLOAD. The SYSTSIN DD * statement should
contain xxxx SR nnn*, where xxxx is the name of the incoming system's OPC
subsystem, and nnn* is the name of the special resource(s) that you want to unload (see
Program Interface samples on page 287 for more information about this parameter).
The UNLOAD exec unloads all the SR database records to the SR output file. Each record
is written to one line of the file. This exec should finish with a zero return code.
3. Edit the SR output file as required.
Change the names of any special resources, or delete any records which you do not want
to load to the target OPCplex. See 17.4, Tools and documentation on page 286 for an
explanation of the record layout of each database record in the SR output file.
4. Load the SR database definitions into the target OPCplex using the RELOAD exec.
Use the sample job BATCH2 to run the RELOAD exec. Change the SYSEXEC and
EQQMLIB DD statements to point to your data sets. The RELOAD exec is identified on the
PARM on the EXEC card as PARM=RELOAD. The SYSTSIN DD * statement should
contain yyyy SR, where yyyy is the name of the target sysplex OPC subsystem.
The RELOAD exec loads all the SR database records in the input file to the target
OPCplex. This exec should finish with a zero return code. If it ends with a return code of
16, the problem is probably that the input file contains a duplicate record. In this case, you
should see a message similar to:
EQQY724E INSERT OF RECORD WITH KEY DEFAULT
EQQY724I NOT DONE, EXISTS. RESOURCE IS SR.
The text after the word KEY in the first message identifies the duplicate record.
Once a CSR has been generated in the CP, the operator can override the default values using
the Modify Current Plan (MCP) option of the OPC ISPF dialog, or by using the SRSTAT
command or the Program Interface (PIF). The next time the CP is extended, any of the default
values changed by the operator, SRSTAT, or PIF are reset to the defaults found in the RD
database, with three exceptions:
Availability
Quantity
Deviation
If any of these attributes have been altered, the altered values remain in force indefinitely.
When the CP is next extended, these values are not returned to the default values which are
specified in the RD database. To nullify these overrides, you must use the RESET keyword against each attribute.
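For example, assuming the attribute keywords accept the RESET value as just described (the resource name is purely illustrative), the overrides on a resource could be removed with a command along these lines:
SRSTAT SPECIAL_RESOURCE1 SUBSYS(MSTR) AVAIL(RESET) QUANTITY(RESET) DEVIATION(RESET)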
While CSRs can be unloaded with the UNLOAD exec, you cannot use the RELOAD exec to
load the definitions back into OPC. Therefore, a special exec called SRCPUNLD has been
written to handle these resources; do not use the UNLOAD/RELOAD execs for them.
The SRCPUNLD exec produces a batch job which can be run to create CSRs in the CP of the
target OPCplex. This job uses the SRSTAT command. The SRCPUNLD exec takes into
account any of the three attributes that have been overridden, and generates an SRSTAT
command, which will cause these attributes to show up as overridden in the target OPCplex.
For example, if the CSR has no values overridden, the SRSTAT command would look like:
SRSTAT SPECIAL_RESOURCE1 SUBSYS(MSTR)
However, if the CSR has the availability attribute overridden, the command would look like:
SRSTAT SPECIAL_RESOURCE1 SUBSYS(MSTR) AVAIL(Y)
Each CSR that has one of the three attributes overridden will have the name of that attribute
added to the SRSTAT command with the overridden value. If there is a need to change the
name of any of the CSRs before merging them with the target OPCplex, the names can be
altered in the generated batch job before running it.
(The first step is to run the JCLCLREC exec, which identifies any JCL variable tables containing records longer than 32000 bytes; in this example, VARDEPS is the name of the variable table it reported.) If there are multiple records longer than 32000 bytes, all of those records will be listed.
If you have any JCL variable tables which have an LRECL greater than 32000, those
records will be truncated to 32000 bytes in the output file produced by the UNLOAD exec.
In this case you have two choices:
a. Perform the merge manually using the OPC ISPF dialog. This could be
time-consuming.
b. Identify the long records and manually delete them from the output file. The RELOAD
exec can then be used to load the remaining records into the target OPCplex.
This exec should finish with a zero return code.
2. Allocate the JCLV output file.
This file can either be allocated in the batch job which runs the UNLOAD exec, or by you,
using ISPF. The LRECL is 32000 and the RECFM is FB.
3. Unload the JCLVAR database using the UNLOAD exec.
Use the sample job BATCH2 to run the UNLOAD exec. Change the DD statements of
SYSEXEC and EQQMLIB to point to your data sets. The UNLOAD exec is identified on
the PARM on the EXEC card as PARM=UNLOAD. The SYSTSIN DD * statement should
contain xxxx JCLV nnn*, where xxxx is the name of the incoming system OPC
subsystem, and nnn* is the name of the JCL variable table(s) that you want to unload (see
Program Interface samples on page 287 for more information about this parameter).
The UNLOAD exec unloads all the JCLVAR database records to the JCLV file. Each record
is written to one line of the file. This exec should finish with a zero return code.
4. Edit the JCLV file as required.
Change the names of any JCL variable tables, or delete any records which you do not
want to reload into the target OPCplex. See 17.4, Tools and documentation on page 286
for an explanation of the record layout of each database record in the JCLV file.
5. Reload the JCLV database into the target OPCplex using the RELOAD exec.
Use the sample job BATCH2 to run the RELOAD exec. Change the DD statements of
SYSEXEC and EQQMLIB to point to your data sets. The RELOAD exec is identified on the
PARM on the EXEC card as PARM=RELOAD. The SYSTSIN DD * statement should
contain yyyy JCLV, where yyyy is the name of the target sysplex OPC subsystem.
The RELOAD exec reloads all the JCLVAR database records to the target OPCplex. This
exec should finish with a zero return code.
This completes the merge process of the JCLV database.
Workstation
Workstations can be referenced in an AD in:
1. The operations.
Each operation is defined in the AD using a WS name.
2. External dependencies and extra internal dependencies.
External dependencies refer to the AD name, WS name, Operation number, and
Jobname. Up to eight internal dependencies can be defined on the operations panel. If
they are defined on this panel, there is no need to specify a WS name. You only need to
use an operation number.
If there are more than eight internal dependencies for an operation, you will need to select
the operation and then choose the dependencies option. This is where you define external dependencies, and internal dependencies when there are more than eight. In the
dependency panel, you will need to use a WS name, as well as an operation number, to
define extra internal dependencies.
Calendar
Calendars are referenced on the AD general information panel.
Periods
Periods are referenced in the AD run cycle.
Special Resources
Special Resources can be referenced in the operation descriptions within each application.
Once you know which definitions have been renamed in the associated databases, then
reflecting those name changes in the AD database should not be difficult. We have provided a
sample job called BCIT, which uses the Batch Command Interface Tool to unload the AD database data from the
merging OPCplex.
The following BCIT command can be used to unload the data:
ACTION=LIST,RESOURCE=ADCOM,ADID=*.
The ADID parameter can be specific, or you can use wild card characters. So, if you wish to
unload all applications with the prefix FRED, for example, ADID can be specified as:
ACTION=LIST,RESOURCE=ADCOM,ADID=FRED*.
This command produces a file containing batch loader statements. After you have made any
required changes to the statements (to reflect any definitions that were renamed), this file can
then be used to load the data to the target OPCplex. For example, if a workstation name has
changed from MAN1 to MNL1, you can use the ISPF CHANGE command to change all the
references in the batch loader statements file, as follows:
C MAN1 MNL1 ALL
Once the output file has been edited and changed as required, it can be used as input to the
batch loader program. We have provided another sample, called BATCHL, which can be used to
execute the batch loader. Details of the BCIT and batch loader are provided in 17.4, Tools
and documentation on page 286.
Tip: If you have changed the names of any applications or jobnames in the target system
during the merge, remember that any OI definitions related to these entities also need to
be changed.
The OI database can be unloaded and reloaded using the same process as the AD database.
The following BCIT command can be used to unload the data:
ACTION=LIST,RESOURCE=OICOM,ADID=*.
This command produces a file containing batch loader statements. This file is then used as
input to the batch loader program. You should make any changes as required based on
workstations, applications, or operations that you may have renamed. Once you have
updated the batch loader statement, you can use the provided sample BATCHL job to invoke
the batch loader to load the definitions into the target sysplex OPC OI database.
The text after the word KEY in the first message identifies the duplicate record.
This completes the merge process for the ETT database.
17.4 Tools and documentation
17.4.1 Tools
A number of tools are available in OPC that can assist with the migration of data:
Batch Command Interface Tool (BCIT)
Batch Loader
Program Interface (PIF)
Batch Loader
The Batch Loader uses statements to load data directly into OPC's AD and OI databases.
This tool can be used as an alternative to the OPC ISPF dialogs, and is especially suited to
making large numbers of changes, such as when merging databases. When used in conjunction
with the output from the BCIT, this allows you to easily unload and load the AD and OI
databases.
A sample job called BATCHL is provided in the Additional Materials for this redbook to run the
Batch Loader. Before running the job, you will need to change the data sets on the following
DD statements:
EQQMLIB - Change to your OPC message data set SEQQMLIB.
EQQWSDS - Change this to your target sysplex OPC WS database.
EQQADDS - Change this to your target sysplex OPC AD database.
EQQOIDS - Change this to your target sysplex OPC OI database.
Also included in the sample on the SYSIN DD statement are the following statements:
OPTIONS
ACTION(ADD)
SUBSYS(TWSC)
CHECK(Y)
ADID(EBCDIC)
OWNER(EBCDIC)
MSGLEVEL(2)
These statements are required once, at the beginning of a set of Batch Loader statements.
They provide the defaults. Most important is the SUBSYS name; this must be changed to
your target sysplex OPC controller subsystem name. The other statements can remain as
they are.
Note: The OPTIONS statement has been included in the BATCHL job because the BCIT
does not produce an OPTIONS statement in its output.
When you are ready to reload your AD or OI data into the target OPCplex, use this sample job
to submit your Batch Loader statements.
The OPC subsystem must be active when the Batch Loader job runs.
The Batch Loader is documented in Tivoli Workload Scheduler Planning and Scheduling the
Workload, SH19-4546.
BATCH2 is used to run both the UNLOAD and RELOAD execs for each database, and the
SRCPUNLD exec for Special Resources in the Current Plan.
The OPC Controller subsystem must be active when these execs are run.
You must review the names of the OPC libraries in the BATCH1 and BATCH2 jobs. Before
running the jobs you need to:
1. Update the EQQMLIB DD statement to point to your OPC message library
2. Update the SYSEXEC DD statement to contain the name of the library that holds the
sample execs
In addition, the BATCH2 job requires the following changes:
1. Update the CL, ETT, JCLV, PR, SR, CSR, and WS DD statements to point at a data set
that will hold the offloaded definitions for the associated database.
2. Update the PARM on the EXEC card to identify which exec you are running:
PARM=UNLOAD for the unload exec.
PARM=RELOAD for the reload exec.
3. Update the SYSTSIN DD statement to identify:
a. The name of the target OPC subsystem. For the UNLOAD exec, this will be the name
of the OPC subsystem on the incoming system. For the RELOAD exec, it will be the
name of the test or the target OPC subsystem.
b. Which database the exec will access (CL, ETT, JCLV, PR, SR, CSR, or WS).
c. The name of the records that you want to unload. If you want to unload all records from
the database, just specify *. If you want to unload all records whose name starts with
FRED, specify FRED*. And if you just want to unload a single record, and the name of
that record is HARRY, specify HARRY.
The supplied sample execs, and the functions they provide, are as follows:
JCLCLREC is used to identify which JCL variable tables have an LRECL greater than
32000.
UNLOAD is used to unload the definitions from an OPC database into the output
sequential file identified by the DD statement with the same DDNAME as the resource of
the database it is unloading.
RELOAD is used to reload the OPC definitions from the sequential file identified by the DD
statement with the same DDNAME as the resource of the database it is reloading into the
target OPCplex database.
SRCPUNLD is used to create a job that will issue SRSTAT commands to define Special
Resources in the target OPCplex CP.
[Example file contents: each unloaded record consists of a series of eight-byte headers (WSCOM, WSWD, WSIVL), followed by a blank header marking the end of the headers, and then the record data itself, such as the workstation names CPU1, GZ, and JCL1.]
Records 000001, 000002, and 000004 all have three headers: WSCOM, WSWD, and WSIVL.
After WSIVL, you can see the blank eight bytes of the next header, signifying the end of the
headers. Immediately after this, on record 000001, you can see the name CPU1. This is the
name of a workstation and is therefore the data you will probably be editing.
Record 000003 has five headers that you can see in the example: WSCOM, WSWD, WSIVL,
WSWD, and WSWD. You would need to scroll right until you find the blank header to find the
data portion of this record.
Once you have identified the name of the record, you can edit it or delete the entire record if
required. You can then save the file and use the appropriate RELOAD exec to load the data
into the target OPCplex.
17.4.2 Documentation
All of the supporting documentation you should need can be found in one of the following
manuals:
Tivoli OPC Customization and Tuning, SH19-4380
Tivoli Workload Scheduler for z/OS Programming Interfaces, SH19-4545
Tivoli Workload Scheduler Planning and Scheduling the Workload, SH19-4546
18
Chapter 18.
System Automation for OS/390 considerations
Table 18-1 on page 292 lists the considerations for merging your System Automation for OS/390 environment. Each consideration has an associated note number, a Type value (B, G, or P), and a Done column; the notes are described following the table.
The Type specified in Table 18-1 relates to the sysplex target environment: B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
Notes indicated in Table 18-1 on page 292 refer to the following points:
1. Messages generated in a Parallel Sysplex environment need to be considered before the
incoming system is merged into the target sysplex. These additional messages will need
to be suppressed, displayed, automated, or given special consideration in line with your
existing automation standards.
Assuming that this issue has already been addressed within your existing target sysplex
environment, then it is reasonable to assume that you would want to include the same
levels of automation on the incoming system.
Another approach would be to collect a number of days of SYSLOG data and analyze it
using a program such as the MVS SYSLOG Message Analysis Program, which can be
downloaded from:
http://www.ibm.com/servers/eserver/zseries/software/sa/sadownload.html
These WTOR messages cannot be automated because System Automation for OS/390 is
not yet active.
If you are using Processor Operations (or ProcOps, as it is commonly referred to), it is
recommended that you add the incoming system target processor hardware to your
ProcOps definitions on your automation focal point. ProcOps provides a single point of
control to power off/on, reset, perform IPLs, set the time of day clocks, respond to
messages, monitor status, and detect and resolve wait states.
If you are not using ProcOps, we recommend that Sysplex Failure Management (SFM) be
used to avoid XCF WTOR messages.
For more information on ProcOps customization, refer to:
Chapter 4 of the IBM Redbook Parallel Sysplex Automation: Using System Automation
for OS/390, SG24-5442
System Automation for OS/390 Planning and Installation, SC33-7038
6. Within System Automation for OS/390, you can specify a group ID to define a subset of
systems within the sysplex environment. The logical sysplex group ID may be 1 or 2
alphanumeric or national characters, and is specified on the GRPID parameter in
HSAPRM00.
All of the systems that you want to place in the same subplex should specify the same
GRPID value. This value will be prefixed with the string INGXSG to construct the XCF
group name. To determine existing XCF group names, you can use the D XCF,GROUP
system command.
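For example, a subplex group ID could be specified in HSAPRMxx along these lines (the value 99 is purely illustrative; check the HSAPRMxx sample for the exact syntax used at your installation):
GRPID=99
All the systems specifying this value would then join the XCF group INGXSG99.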
7. We recommend that you use a single System Automation for OS/390 PDB within your
enterprise automation environment. ACFs are sharable within a sysplex; therefore, a build
of the PDB will create an ACF containing all relevant system definitions.
For System Automation for OS/390 V2R1, a PDB build will create an ACF and an
Automation Manager Configuration (AMC) file. Both files must be shared between all System Automation for OS/390 agents and Automation Managers within your automation sysplex.
There is no utility to merge ACFs from different policy data bases. You must manually
compare the resources defined in each PDB, and make appropriate changes to your
target sysplex enterprise PDB to add any resources that are only defined in the incoming
system.
An overview of the steps to accomplish the merge to a single enterprise PDB could be as
follows:
Before you start adding definitions for the incoming system, we recommend that you
take a backup of your target sysplex PDB.
Define the incoming system to the target PDB, using the definitions from the incoming
system's PDB as a model.
Connect the incoming system (defined above in the target PDB) to the sysplex group.
Define a new group (for example, incoming system) to contain all applications to be
defined for the incoming system:
Applications that are the same or common between the incoming system and target
sysplex need to be connected to the application group associated to the incoming
system.
Applications with different requirements between the incoming system and target
sysplex environments must first be defined within the target PDB, and then
connected to the application group associated to the incoming system.
These steps are in line with the normal automation policy definition process. For
more information, refer to System Automation for OS/390 Defining Automation
Policy, SC33-7039.
Any MVS messages that are unique to the incoming system must be added to the
target sysplex PDB.
If you are not sharing the JES2 spool, then the JES2-defined resources, including
spool recovery, should be duplicated on the target PDB from existing definitions
already within the incoming system's PDB.
Add any unique user-entry definitions for the incoming system.
Perform an ACF Build and load the Automation Manager into the target sysplex
environment. Once loaded, INGLIST can be used to verify the correct applications and
their relationships to the incoming system. This is possible without having to first bring
the incoming system into the target sysplex, as the information is loaded into the
Automation Manager.
8. Depending on your level of automation, and how much change is being made to the
incoming system based upon what resources you will be sharing with the target sysplex,
you will need to carefully consider any accompanying automation policy changes required
in order to accommodate these changes.
For example, if you currently have some JES2 automation in place, this may need to be
reviewed if JES2 on the incoming system was to become part of a MAS (shared spool
environment with the other target sysplex systems). Other environmental changes that
may affect your automation policies could be LPAR, SMF id, NJE, Initiators, Printers,
VTAM Domain, HSM, Subsystem names, Job names, DASD and other device definitions,
and so on. It is important that you work closely with your systems programmers and the
other support teams involved to ensure that issues are identified and addressed.
9. In preparation for sharing a common sysplex-wide Parmlib, you should review your
MPFLSTxx definitions and automation message management to conform to a single
standard. Another option would be to set up separate MPFLSTxx members in a common
Parmlib for use by individual systems within the sysplex.
Note: Given a choice, we recommend the use of a common MPFLSTxx member on all
systems in the target sysplex.
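As a minimal sketch (the message ID and options are purely illustrative; your entries will depend on your own message management standards), an MPFLSTxx entry that suppresses a message from the consoles while still passing it to automation could look like:
IEF403I,SUP(YES),AUTO(YES)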
10.NetView has supported the use of MVS system- and user-defined symbols since NetView
V1R1. For more information on how to use symbols in a NetView environment, see the
IBM Redbook Automation Using Tivoli Netview OS/390 V1R3 and System Automation
OS/390 V1R3, SG24-5515.
11.Although it is not a requirement, unless you have a domain name conflict within the target
sysplex, we recommend that you implement a NetView domain naming standard
and convert the incoming system to conform to it.
12.In a GoldPlex or PlatinumPlex, where the sysres and other systems software libraries will
be shared sysplex-wide, we recommend that you implement a NetView automation data
set naming standard in line with that used on the target sysplex.
Note: Consider the impact of changing data set names on facilities such as the System
Automation Policy Database dialog, in that you will need to review and modify any TSO
logon procedures or allocation methods to reflect data set name changes.
13.In order to reduce the effort and time required to implement and maintain your NetView
and System Automation for OS/390 environment sysplex-wide, we recommend that you
implement a standard library structure of Install, Common and Domain-specific libraries
across all participating members of the sysplex. For more information, see Chapter 4 of
the IBM Redbook Automation Using Tivoli Netview OS/390 V1R3 and System Automation
OS/390 V1R3, SG24-5515.
14.In a GoldPlex or PlatinumPlex, where the target sysplex Parmlib and systems software
environment is to be shared with the incoming system, you will need to review and
possibly modify your SMF post-processing automation (via in-house written automation
routines, or via SA's own SMF dump automation policy, or by using the SMF
post-processing exit (IEFU29)).
15.Automatic Restart Manager (ARM) is an MVS function designed to provide efficient
restarts of critical applications when they fail. System Automation for OS/390 is able to
track functions performed by ARM. For more information, see section 3.5 of the IBM
Redbook Parallel Sysplex Automation: Using System Automation for OS/390, SG24-5442.
16.If you are going to be running either a network NetView, or an automation NetView, or
both, you should implement a standard for system subsystem extensions (defined in the
IEFSSNxx member in parmlib) in order to ensure unique names in a shared sysplex or
shared automation environment.
You should also consider, and implement, a standard to ensure unique SSIR task names, which is best accomplished through the use of system symbols; for example, change the following statement in member AOFMSGSY of your common NETV.DSIPARM library:
SYN %AOFSIRTASK% = &DOMAIN.SIR
17.NetView security can be set up to use SAF for your external security product
(CMDAUTH=SAF), or to use NetView's own security, using the NetView command
authorization table (CAT).
If the incoming system and target sysplex environment both use CAT, you will need to
review both environments in order to ensure the required level of access is maintained
across your automation sysplex.
However, if the incoming system and target sysplex automation environments use a
combination of CAT and SAF, then you will presumably want to convert the incoming
system to the standard already in place on your target sysplex. You will, of course, also
need to consider your incoming system and target sysplex's security definitions, and make
changes as necessary in order to ensure the required level of access is maintained across
your automation sysplex.
For more information on converting between the different System Automation for OS/390
security methods, see Scenarios for Converting Types of Security in the publication Tivoli
Netview for OS/390 Security Reference, SC31-8606.
18.For System Automation for OS/390 V2R1, there is now the concept of the Automation
Manager (AM) and the Automation Agent (AA). The AM runs as a separate address
space, where you will need at least one primary AM with possibly one or more backups.
The AM effectively acts as the automation decision server to the AAs within the automated
environment.
Moving from a single system to a sysplex-wide automation environment increases the
need for continuous availability. You may want to consider using the incoming system as a
secondary AM environment that would be used to take over the primary AM function in the
event of a failure or scheduled outage. In such a scenario, both the primary and any
secondary automation managers must have access to the following:
Shared DASD, containing:
The configuration data (result of the ACF and AMC build process)
The configuration data (schedule overrides VSAM file)
The configuration information data set, which is a mini file in which the Automation
Manager stores the parameters with which to initialize the next time it is HOT or
WARM started.
All automation agents must have access to the CF. If you're using MQSeries for
communication, then the following queues are used by SA/390:
WORKITEM.QUEUE for the Workitem Queue
STATE.QUEUE for the Automation State Queue
AGENT.QUEUE for the Automation Agent Queue
It is expected that the incoming system's System Automation for OS/390 environment would be set up as an AA, communicating with your AM. For communication between AAs and the AM, you need to coordinate the GRPID statement in the AM's Parmlib member (HSAPRMxx) with the INGXINIT member defined in the DSIPARM of each associated AA. Assuming you are already running a similar setup within your target sysplex environment, none of this should be unfamiliar.
For more information, see System Automation for OS/390 Planning and Installation,
SC33-7038.
19.Although it would be more normal to have a single automation focal point when running in
a PlatinumPlex, it is also possible that you would want to take advantage of this capability in a
GoldPlex or even (although less likely) in a BronzePlex. To achieve a common focal point,
remember that you will need to take the following actions:
Define gateway connections and forwarding definitions in the target sysplex PDB to
connect the new system to the focal point.
If you are using RACF or another SAF product for userid validation, the gateway
operators must be defined to the appropriate security subsystem.
18.3.1 Tools
SYSLOG Message Analysis Program, which can be downloaded from:
http://www.ibm.com/servers/eserver/zseries/software/sa/sadownload.html
19
Chapter 19.
Operations considerations
There are many operational considerations when moving a system into a sysplex. This
chapter discusses the considerations for Hardware Management Console (HMC) setup,
console setup and management, and other general operations procedures.
In this chapter, we will discuss the following:
Should you merge your HMCs?
What do you need to consider when planning a merge of HMCs?
Should you merge consoles?
What do you need to consider when planning a merge of consoles?
General operational considerations when merging a system(s) into a sysplex environment.
We will also provide:
A checklist of items to consider, should you decide to merge your HMCs and/or consoles.
A methodology for carrying out the merge of the HMCs and/or consoles.
A list of tools and documentation that can assist with these tasks.
In this IBM Redbook, you may see the term Central Processor Complex (CPC) or Central
Electronic Complex (CEC) being referenced. These terms are interchangeable and can be
used to describe a hardware environment which consists of central storage, one or more
central processing units, time-of-day clocks, and channels which are, or can be, placed in a
single configuration. A CPC or CEC also includes channel subsystems, service processors
and expanded storage, where installed.
For the GoldPlex and PlatinumPlex environments, merging the HMCs would provide the
benefits of a consolidated operational environment. The daily operation and management of
multiple CPCs would be simplified, and a single operational focal point would be established.
Table 19-1 on page 303 lists the considerations for merging your HMCs. Each consideration has an associated note number, a Type value (B, G, or P), and a Done column. Among the considerations are: if you merge the HMCplexes, will you have more HMCs in the HMCplex than you require; if you are going to eliminate some HMCs, which ones should you eliminate; ensure all required CPCs are accessible from all HMCs in the HMCplex; group the incoming system(s) into the sysplex group with the other existing systems in the sysplex; and update the Image profile for the Initial and Reserved values for processors. The notes are described below.
The Type specified in Table 19-1 on page 303 relates to the target sysplex environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. In the
table, if the type column contains a B, this represents an environment where you have
decided not to merge the HMC for the incoming system with the HMCplex controlling the
target sysplex. For example, you only need to review the physical and network security
considerations for the HMCs (fourth row) if your target environment is a GoldPlex or a
PlatinumPlex (because you will not be making any changes in this area). However, we do
recommend reviewing your backup and recovery procedures regardless of what your target
environment is (Type column contains a B, a G, and a P).
Notes indicated in Table 19-1 on page 303 are described below:
1. Perhaps the first question we should look at is: what is the ideal number of HMCs in an
HMCplex? The answer, of course, is that it depends on your operational setup. First, let us
look at the limitations:
Any given HMC can manage no more than 100 CPCs.
Any given CPC can simultaneously be managed by no more than 32 HMCs.
Any given CPC must be managed by at least one local HMC (local means the HMC
and SE are on the same physical LAN). The LAN can be either Token Ring or Ethernet
(assuming the CPCs and HMCs are at a level that supports Ethernet (HMC and SE
1.4.0 and later)).
Now, the ideal number of HMCs really depends on how many locations you want to
directly manage your CPCs from. As a rule of thumb, we suggest two (for redundancy)
local HMCs (that is, on the raised floor in reasonable proximity to the CPCs), and two
more (for redundancy) HMCs at each operations center (that is, where the operators sit).
The HMC Web interface can also be used from other locations, but we don't normally
recommend this be used as the full time method of management. The Web interface is
intended for occasional use by users such as system programmers, or as a backup
method of management.
So, if you have one computer room, and all your operators are in one location outside the
computer room, then the ideal number of HMCs would be four. If merging the incoming
system HMC(s) into the target sysplex HMCplex would result in more than four HMCs in
this environment, then it may be better to just add the security definitions from the
incoming system HMC(s) to the target sysplex HMCs, and eliminate some of the HMCs as
part of the merge process.
2. If you are going to eliminate some HMCs as part of the merge, you should eliminate the
oldest or the most poorly configured ones. As new levels of support are announced, they
typically have more requirements in terms of MHz, disk space, memory, and so on, so
keep the newest HMCs that have the largest configurations.
3. If you are going to merge HMCplexes, you will need to change each HMC so that they all
have the same definitions. This means probably changing the Management Domain of
some of the HMCs and also that of the CPC that the incoming system runs on (so that it
can be in the same Management Domain as all the target sysplex CPCs).
You should also have the same security definitions (userids and the groups and actions
they have access to) on all HMCs in the HMCplex. At the time of writing, there is no
automatic way to do this. If the incoming system HMCs have userids that are not defined
on the target sysplex HMCs, you will need to add those definitions manually. Similarly, if
you are going to move the incoming system HMC(s) into the HMCplex, you will need to
add the security definitions from the target sysplex HMCs to the incoming system HMC(s).
4. If security is a major concern for your environment, connect all HMCs and CPCs onto a
private LAN. The use of a private LAN offers security as follows:
Direct access to the LAN is limited to the HMCs and CPCs connected to it, so there is
no possibility of anyone trying to hack the HMCs or SEs.
There is no possibility that a faulty user PC or faulty card in a user PC could disrupt the
HMC LAN.
To get the required security from such a configuration, do the following:
Assign a unique domain name that includes all the CPCs controlled from the HMCplex.
Create userids and passwords for the HMCs and change the passwords for the default
userids.
If using the HMC Web interface, ensure that only the appropriate people are given
access.
5. All HMCs in the HMCplex must support the highest level CPC that is accessible from that
HMCplex. If any of the HMCs do not support that level, those HMCs will only be able to
manage a subset of the CPCs, and therefore not provide the level of redundancy that you
need. The driver level on the HMC will need to be verified, as a down-level HMC cannot
support an up-level processor. Your hardware CE can help with this task.
6. Ensure that the HMCs that are going to make up the HMCplex are able to access all of the
required CPC SEs. This is because the CPC SEs can be on completely different networks
and may even be in different locations. However, even if the CPCs are remote, they can
still be managed from the same HMC in a centralized location.
You will also need to be aware of the HMC and CPC SE network addresses (TCP/IP),
Network names (SNA) and CPC object names.
7. If your existing backup and recovery procedures have been tested and are satisfactory,
there should be no need to change the procedures even if you decide to merge the HMCs
into a single HMCplex. All the HMCs are peers of each other, so all should be backed up in
a similar manner.
8. The HMC microcode delivery and load procedures used by the hardware representative
should not change if you are merging HMCs into the HMCplex.
9. If all HMCs in the HMCplex have access to all the SEs (which they should have), and the
SEs are running Version 1.6.2 or higher, then all the HMCs in the HMCplex should have
LIC change enabled. This is a change in 1.6.2 that is intended to provide better
availability in case of a failure in any of the HMCs.
Tip: At least one HMC in the domain must have LIC change enabled.
10.It is possible to use an HMC PC to manage your ESCON Directors and Sysplex Timers, in
addition to its role as an HMC. Note, however, that these programs run in a loop, polling the
devices they manage, and as a result can use up to half the processing power of the HMC.
If the HMC will be managing a large number of CPCs, we do not recommend running the
ESCON Director or Sysplex Timer code on that HMC. Refer to the relevant ESCON
Director and Sysplex Timer manuals for more information on the required configuration
and setup tasks.
11.If you want to operate the HMC from a remote location, the recommendation is to have a
fully configured HMC in the remote site. The remote HMC must be kept at the same level
of LIC, or higher, than the SEs it controls. An alternative to this is to access the HMC
through a Web browser.
You can use the Web browser as a remote operation option to monitor and control a local
HMC and its configured SEs. HMC 1.4.2 (originally shipped with 9672 G4 CPCs) and
latest level HMCs have a Web Server built in. When enabled and properly configured, the
HMC can provide a representation of the HMC user interface to any PC with a supported
Web browser connected through your LAN using the TCP/IP protocol.
Important: Security for a browser connection is provided by the HMC enablement
functions, HMC user logon controls, and network access controls. Encryption of the
browser data should be added by selecting the enable secure only selection
(available with HMC version 1.6.2 and later levels). This requires that the user's Web
browser supports the HTTPS protocol. If this option is not selected, the userids and
passwords used to log on to the HMC are sent in clear text across the network.
12.If you decide to enable the HMC Web interface, we strongly recommend that you review
your security requirements prior to enabling this feature. Additional information can be
found in the Hardware Management Console Operations Guide, SC28-6809.
13.Starting with HMC version 1.6.2, the HMC provides two types of Web browser access:
Remote entire Hardware Management Console desktop and Perform Hardware
Management Console Application tasks. The Remote entire Hardware Management
Console desktop type remotes the keyboard, mouse, and display of the HMC to the Web
browser. Any updates at the HMC are immediately updated at the Web browser. Only one
user at a time can use this type of access.
The Perform Hardware Management Console Application tasks type of access sends
simple HTML and Java scripts to the Web browser, and the HMC instructs the Web
browser to automatically refresh the page contents on a fixed interval. By default this is
every 2 minutes, but this can be changed from the Web browser by selecting Console
Actions -> personalize.
Despite this capability, we do not recommend using this interface for normal operations for
the following reasons:
Some customers have experienced problems with Web browsers leaking memory
when accessing Web pages. Since the Perform Hardware Management Console
Application tasks relies on the Web browser refreshing the page for updates, these
memory leaks add up. We found that the Web browsers would run out of memory after
about 24 hours of continuous use in this mode. (However, the memory leaks are not a
problem for occasional, limited time use.)
Only one person at a time can use the Remote entire Hardware Management Console
desktop Web interface. This interface also uses more network bandwidth since it is
remoting the contents of the display pixels. If this is not a problem for you, then
continuous operation should be fine. (Note that this mode does not have the memory
leak problem because the Web browser ends up running a Java applet, and thus the
Web browser is not refreshing the page nor leaking memory.)
A real HMC will automatically re-establish communication with the SEs following a
temporary network outage. A real HMC will also automatically try to use a second LAN
path (if present) should the first become unavailable. In the Web browser case,
someone would have to manually restart the session following a temporary network
outage. This may or may not be significant depending on your requirements and
expectations.
14.HMC groups contain logical collections of objects. They provide a quick and easy means
of performing tasks against the same set of objects more than once. The Groups Work
Area on the HMC contains the groups that a user has been authorized to. There are both
system-defined groups, which are provided with the system and consist of all CPCs or all
images that are in the HMC Management Domain, and user-defined groups which can be
created from these groups to represent views of CPCs or images that your installation
wishes to work with. Figure 19-1 shows the screen in the Customize User Controls dialog
where you can specify which groups, and which objects within those groups, a given user
can manage. Hardware Management Console Operations Guide, SC28-6809 contains
further information.
15.In the Image Profile, you can assign the number of logical processors to the LPAR. The
LPAR can be defined with the number of logical processors (Initial + Reserved) greater
than the number of processing units (PUs) currently installed. This is to cater for Capacity
Upgrade on Demand (CUoD), allowing a non-disruptive upgrade to install additional
PUs. You will not need to deactivate, re-define and activate the LPAR to make use of the
newly-installed PUs.
16.SYSCONS cannot be specified as a console to be used by NIP. However, if none of the
consoles specified in NIPCONS are available, then NIP automatically uses this interface. By
specifying DEVNUM(SYSCONS) in your CONSOLxx member, this definition can be used
by every system in the sysplex because the console is automatically assigned a name
equal to the z/OS system name. This console can be assigned routing codes, but in
normal operation no messages are sent to it. As this interface can be rather slow, we
recommend you do not use this console unless there is very low message traffic or an
emergency situation.
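As a minimal sketch (any additional keywords, such as routing codes, depend on your installation standards), the corresponding CONSOLxx definition could be as simple as:
CONSOLE DEVNUM(SYSCONS)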
17.There are three types of profiles (Reset, Image, and Load), and they are stored on the SE. A
Load Profile is needed to define the device address that the operating system is to be
loaded from. Assuming the incoming system will continue to run in the same LPAR, you
should not need to set up any new profiles for that system. However, you will need to
review the load parameter value of the Load Profile that is assigned to the incoming
system, as there may be a different Initialization Message Suppression Indicator (IMSI)
character used or required.
z/OS MVS System Commands, SA22-7627 contains detailed information on the values
available for the Initialization Message Suppression Indicator (IMSI).
18.You should not have the console integration support of the HMC (SYSCONS) as the only
console in a sysplex. The console integration facility allows the HMC to be a NIP console
(when there are no consoles available to be used during NIP as specified in your NIPCON
definitions in the IODF), or it can be used as an MVS console, but only when absolutely
necessary (for example, when there is a no-consoles condition).
The sysplex master console is always channel-attached to a system in the sysplex. If your
target sysplex configuration allows, there should be at least two channel-attached
consoles to two systems in the sysplex, preferably on two different CPCs. For better
operational control, it is recommended that the first system to IPL and the last system
leaving the sysplex have a physically attached console.
19.1.2 Security
Since multiple HMCs and internal SEs require connection via a LAN, and remote connections
are possible, it is important to understand the use and capabilities enabled for each HMC in
your configuration.
HMCs operate as peers with equal access to the CPCs configured to them. The SE for each
CPC serializes command requests from HMC applications on a first-come, first-served basis.
There is no guarantee of exclusive control across a sequence of commands sent from a
single HMC.
Security recommendations that you should consider are:
Following the merge of the CPC, HMC and SE into your target sysplex configuration, the
access administrator should verify and if necessary, change the default logon passwords
at the HMC and Support Element.
Define the HMC, Support Element and all CPCs onto a private LAN. Using a private LAN
offers several security, availability and performance advantages:
Direct access to the LAN is limited to the HMC, Support Element and CPC. Outsiders
cannot connect to it.
Traffic disruption due to temporary outages on the LAN is reduced - including
disruptions caused by plugging in and powering on new devices, or the LAN-to-LAN
adapters being run at the wrong speed.
LAN traffic is minimized, reducing the possibility of delays at the HMC/SE user
interface.
If a private LAN is not practical, isolate the LAN segment containing the HMC, Support
Element, and CPC by providing a TCP/IP router between the isolated LAN and the
backbone LAN to provide an intermediate level of isolation. Some customers have
experienced problems using LAN bridges, so we recommend the use of a router instead.
Assign a unique domain name that includes all the CPCs controlled from the HMCplex.
Locate at least one of the HMCs in the machine room near the CPCs that form its domain.
Establish a focal point and control access using the enable/disable controls provided for:
Licensed Internal Code (LIC) update
Remote service support
Remote customer access
Remote service access
Auto-answer of the modem
[Figure content: Complex 1 and Complex 2, each with an operations area and a machine room area; six HMCs (HMC 1 through HMC 6), all at Version 1.7.3; LANs 1 through 4 carrying the IP networks 9.12.13.xx, 9.12.14.xx, 9.12.15.xx, and 9.12.16.xx; SNA networks USIBMSC1 and USIBMSC2; and three zSeries processors with their Support Elements, defined as CPC objects SCZP701, SCZP801, and SCZP901.]
Figure 19-2 HMC configurations for target sysplex and incoming systems
Figure 19-2 illustrates the possible HMC configuration that you may have for your existing
target sysplex and existing incoming system prior to the merge process. This follows our
assumption that the CPC containing the incoming system is currently in a different HMCplex
to the CPCs containing the target sysplex systems.
In this case, prior to the merge, the incoming system CPC is in a different machine room than
the target sysplex CPCs, and the operators of that CPC have two HMCs for that CPC in the
operations area. There is also one HMC in the machine room. The CPCs containing the
target sysplex systems also have one HMC in that machine room, and two HMCs in the
operations area. This results in a total of six HMCs.
In this example, the decision was made to place all the CPCs in the same Management
Domain, and to just have a total of four HMCs managing all three CPCs. The post-merge
configuration is shown in Figure 19-3 on page 311.
Figure 19-3 HMCplex configuration after the merge
Figure 19-3 illustrates the final HMCplex configuration after the merge has taken place.
Attention: In Figure 19-2 on page 310 and Figure 19-3, the level shown on the HMCs is
significant. The version level of the HMCs must be equal to or greater than that of any CPC
(SE) that they will be managing.
19.2 Consoles
Consoles are a critical resource in a sysplex. A complete loss of consoles limits an operator's
ability to effectively manage the systems in a sysplex environment, and could potentially
result in a system or even a sysplex outage if not addressed promptly. A correct console
configuration is particularly important in a sysplex, because the way consoles are managed is
fundamentally different from the way they are managed in a multisystem environment that is not a sysplex.
In a sysplex, there is a single pool of 99 consoles for the whole sysplex, whereas in a
non-sysplex environment, each system can have up to 99 consoles. In addition, in a sysplex,
every console can not only potentially see the messages issued by every system in the
sysplex, it can also potentially issue commands to any system in the sysplex.
Additionally, in a sysplex, some of the keywords specified in the CONSOLxx member have a
sysplex-wide effect and are only processed by the first system to be IPLed into the sysplex.
These keywords will be ignored by any subsequent systems as they IPL.
In this section, we assume that the starting position is that there is a pool of consoles in use
by the target sysplex systems, and that every console can see the messages from every
system in the sysplex and issue commands to every system in the sysplex. In addition, we
assume that the incoming system has its own pool of consoles, and those consoles only have
access to the incoming system.
When you move a system into the sysplex, there are really two levels of merging that you
need to consider:
The first one is that as soon as the incoming system joins the target sysplex, all the
systems can now potentially see the console traffic from the incoming system, and
correspondingly, the incoming system can see the console traffic from the other systems.
While you can set up your consoles (in CONSOLxx) so that each console only appears to
receive the same messages as prior to the merge, the important thing is that each console
has the capability to see all messages.
Equally, every console in the sysplex can potentially use the ROUTE command to send
commands to any other system in the sysplex.
The second, and secondary, level to consider is whether you will merge the functions of
the consoles. You might decide to have one console that handles tape messages for all
systems in the sysplex. Or you might split it the other way and have a console that handles
all messages, but only from one or a subset of systems. This depends largely on your
target environment (BronzePlex, GoldPlex, or PlatinumPlex).
In a BronzePlex configuration, you have to consider how important it is to maintain
segregation between the incoming system and the existing systems in the target sysplex. You
can set up your consoles in CONSOLxx so that some consoles will only see messages from
the incoming system, and others will only see messages from the target sysplex systems.
However, if segregation is very important, this is not sufficient, as someone could issue a
CONTROL command and change the MSCOPE for a console.
You could go a step further and protect the commands with something like RACF; however,
this would require operators to log on to the console and could prove cumbersome. Even if
you do this, however, there is still the problem that the messages from every system get sent
to the subsystem interface on every system in the sysplex, meaning that any program
listening on that interface can see the message traffic from every system.
In a GoldPlex configuration, some of the consoles from the incoming system will typically be
merged with those from the target sysplex. In a GoldPlex, you would normally not be as
concerned with segregating the incoming system so the fact that all systems can potentially
see all messages is not considered to be an issue. This configuration may be chosen to allow
some level of amalgamation of operational functions (for example, tape or print consoles into
the target sysplex console configuration).
In a PlatinumPlex configuration, the consoles of the incoming system will be fully merged into
the target sysplex console configuration. If physical desk space is a consideration for the
merge, then this configuration would be recommended as it makes full use of the existing
consoles in the target sysplex. It also delivers the fully centralized operational management
capability possible with a sysplex.
Several factors may affect your decision as to the level of console merging:
How close are you to the sysplex limit of 99 consoles
If there are already 99 consoles in the target sysplex, you cannot have any more consoles
in the sysplex. Therefore, you have the choice of reducing (by consolidating) the number
of consoles in the target sysplex prior to the merge, or you will have to use the existing
target sysplex consoles to also support the incoming system.
Physical security
You may wish (or need) to limit access to consoles to one secure area. In this case, you
would probably not wish to enable SMCS consoles.
Workload profile - is the incoming system running a similar workload
If the target environment is a PlatinumPlex, the incoming system should be managed in
the same manner as all the other systems in the sysplex, so it makes sense in this case
that the consoles are fully integrated.
Consolidation of similar operational tasks
For example, if the printing for the incoming system will be handled in the same physical
area as the printing for all the other systems, then it makes sense to have a single console
that is dedicated to all print handling.
Availability of physical desk space
If you are constrained for space in the operations area, the merge of the incoming system
into the target sysplex provides a good opportunity to reduce the number of consoles
required to manage the configuration.
As a general statement, it makes sense to consolidate onto as few consoles as are required
to let the operators effectively manage the environment, while bearing in mind the
considerations for avoiding single points of failure, and any business requirements to
separate operations for the various systems.
There are four types of consoles that can be used to operate a z/OS environment:
1. MCS consoles
These consoles are attached to a local, non-SNA controller - there can be up to 99 defined
in a sysplex via the CONSOLxx parmlib member.
2. EMCS consoles
These consoles are defined and activated by product interfaces such as TSO/E, SDSF,
and NetView.
3. Hardware consoles
The HMC is used for tasks such as activating an LPAR or doing a LOAD. The HMC also
provides a console capability (SYSCONS); however, this should only be used during NIP
(if no NIPCONS is available) or in emergencies.
4. SNA MCS
These are full-function MVS consoles that connect through VTAM rather than through
channel-attached non-SNA controllers. Note that SMCS consoles count towards the limit of
99 consoles in a sysplex.
To understand the current console configuration for the incoming system or target sysplex,
use the DISPLAY CONSOLE command.
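As a hedged sketch, the display can be entered on the local system or broadcast to every system in the sysplex with the ROUTE command; the operands shown here follow the forms used elsewhere in this chapter:
DISPLAY CONSOLE,A
RO *ALL,DISPLAY CONSOLE,A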
If a console becomes inoperable or de-activated, and it cannot be re-activated, then console
recovery attempts to switch the console attributes of the failed console to an alternate. It
determines what consoles are candidates to be an alternate by using the console group
member information. The console group is defined in the CNGRPxx member of Parmlib.
To display the current console group configuration of the incoming system or the target
sysplex, use the DISPLAY CNGRP command. Refer to Figure 19-4 for sample output from the
command. When there is a console group member active, the response from the command
includes all groups that are defined, along with the consoles in each group.
D CNGRP
IEE679I 18.57.38 CNGRP DISPLAY 258
CONSOLE GROUPS ACTIVATED FROM SYSTEM SYSA
---GROUP--- ---------------------MEMBERS---------------------
MASTER   00 SYSAM01  SYSAM02  SYSAM03
HCGRP    00 *SYSLOG*
Figure 19-4 Displaying console groups
There is no default ALTGRP for a console. The only way for a console to have an ALTGRP is
if an ALTGRP is specified via the CONSOLxx member of Parmlib, or if one is assigned via the
VARY CN command. We recommend using ALTGRP, since this will allow for the console
function to switch to consoles that are on other systems within the sysplex.
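For example, a CNGRPxx member defining the MASTER group shown in Figure 19-4 might look like the following sketch; the group and console names are taken from that example output and are illustrative only:
GROUP NAME(MASTER)
      MEMBERS(SYSAM01,SYSAM02,SYSAM03)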
The following example shows an SMCS console definition in the CONSOLxx member of Parmlib:
CONSOLE
DEVNUM(SMCS)
NAME(CON1)
ALTGRP(MASTER)
AREA(NONE)
AUTH(MASTER)
CMDSYS(*)
CON(N)
DEL(R)
LEVEL(ALL)
LOGON(REQUIRED)
MFORM(T,S,J,X)
MSCOPE(*ALL)
PFKTAB(PFKTAB1)
RNUM(28)
ROUTCODE(ALL)
RTME(1/4)
SEG(28)
USE(FC)
While SMCS consoles provide great flexibility, there are some restrictions on the SMCS
consoles:
They cannot handle Synchronous WTORs (DCCF).
SMCS consoles are not available during NIP.
SMCS consoles cannot be used before VTAM has started.
SMCS consoles must be activated differently than MCS consoles (no VARY CONSOLE and
VARY CN,ONLINE support).
SMCS does not support printer consoles and SMCS consoles cannot be used as
hardcopy devices.
On the other hand, SMCS consoles:
Support all of the CONTROL (K) commands
Can be the sysplex Master Console
Can go through console switch processing
Can be removed by IEARELCN
Can be logged on to
Can receive route codes, UD messages, and so on
Can issue commands
Are included in the 99-console limit
Must be defined in the CONSOLxx member of Parmlib
Have the same set of valid and invalid characters
Table 19-2 Considerations for merging consoles
The Type specified in Table 19-2 on page 316 relates to the target sysplex environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. The Notes
indicated in Table 19-2 on page 316 are described below:
1. When operating in a multi-system sysplex, certain CONSOLxx keywords have a sysplex
scope. When the first system is IPLed, the values specified on these keywords take effect
for the entire sysplex. Table 19-3 summarizes the scope (system or sysplex) of each
CONSOLxx keyword.
Table 19-3 Scope of CONSOLxx keywords
CONSOLE statement:  DEVNUM, UNIT, PFKTAB, NAME, ALTERNATE, ALTGRP, AUTH, USE, CON, SEG, DEL,
                    RNUM, RTME, AREA, UTME, MSGRT, MONITOR, ROUTCODE, LEVEL, MFORM, UD,
                    MSCOPE, CMDSYS, SYSTEM, LOGON, LU
INIT statement:     PFK, MONITOR, CMDDELIM, MPF, UEXIT, MMS, MLIM, LOGLIM, NOCCGRP, APPLID,
                    AMRF, RLIM, CNGRP, ROUTTIME, GENERIC
DEFAULT statement:  LOGON, HOLDMODE, ROUTCODE, SYNCHDEST, RMAX
HARDCOPY statement: all keywords
2. We do not recommend coding MSCOPE=*ALL for any console, as this has the potential of
generating large amounts of unsolicited message traffic to the corresponding console. Be
aware that the default value is MSCOPE=*ALL, so you will need to specifically override this
default. The MSCOPE parameter allows you to specify the systems in the sysplex from
which this console is to receive messages not explicitly routed to this console. We
recommend that you set this value to MSCOPE=*. The asterisk (*) indicates the system to
which this CONSOLE is attached.
3. The RMAX value specifies the maximum value of a reply id in the sysplex, and also
determines the size of the reply id displayed in the message text. Specifying a value of
9999 will result in 4-character reply ids. Note that the value that you specify for RMAX can
affect the size of the XCF Couple Data Set. Refer to z/OS MVS Initialization and Tuning
Reference, SA22-7592 and z/OS MVS Setting Up a Sysplex, SA22-7625 for further
information.
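As a minimal sketch of how the recommendations in notes 2 and 3 might be coded in CONSOLxx (the device number, console name, and other attributes here are illustrative assumptions, not taken from the examples above):
CONSOLE DEVNUM(0700)
        NAME(MSTCON1)
        AUTH(MASTER)
        MSCOPE(*)
DEFAULT RMAX(9999)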
4. When a system is IPLed and rejoins the sysplex, a new consoleid gets allocated if the
consoles on that system prior to the IPL were not named. As a result, repeated IPLs may
result in exceeding the limit on the number of consoles in the sysplex. Should this happen,
you can recover by using the IEARELCN program provided in SYS1.SAMPLIB. The only
other way to free up those consoleids is by a sysplex-wide IPL.
Subsystem consoles also take from the pool of 99 consoles, so you should:
Use EMCS rather than subsystem consoles wherever possible.
For products that do not support EMCS consoles, make sure you name all defined
subsystem consoles.
If you wish, you can use system symbols in console names; this allows you to have a
single CONSOLxx member that is shared by all systems, and still have unique names on
each system.
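For example, a shared CONSOLxx might use the &SYSCLONE system symbol to generate a unique console name on each system; the device number and name prefix in this sketch are illustrative assumptions:
CONSOLE DEVNUM(0701)
        NAME(CN&SYSCLONE.1)
        AUTH(INFO)
        MSCOPE(*)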
5. Wherever possible, consolidate consoles that perform duplicate functions. This should
result in a simpler operations environment and free up consoleids.
6. We do not recommend using multiple CONSOLxx members. In most cases, the use of
system symbols will allow system-unique values if needed. However, if there is still a
requirement for separate CONSOLxx members, ensure that the sysplex-wide parameters
are consistent across all members of the sysplex.
7. The sysplex only has one SYSPLEX MASTER console. The SYSPLEX MASTER console
is established when the first system with consoles attached that have MASTER authority
is IPLed into the sysplex.
8. Undelivered (UD) messages are, by default, sent to the SYSPLEX MASTER console. We
recommend sending these messages to the HARDCOPY device instead.
9. Alternate console groups are defined in the CNGRPxx member of Parmlib. This member
is used to define console groups as candidates for switch selection in the event of a
console failure. You can specify both MCS and EMCS consoles as candidates. Every
console should be part of an alternate console group, and should have an ALTGRP
specified on the console definition in CONSOLxx.
D EMCS,ST=L
IEE129I 23.21.54 DISPLAY EMCS 360
DISPLAY EMCS,ST=L
NUMBER OF CONSOLES MATCHING CRITERIA: 49
SYSA     SYSB     SYSC     MSOSASIR MSOSBSIR AOPER5SA AOPER5SB WELLIE2
*ROUT058 *ROUT059 *ROUT060 *ROUT061 *ROUT062 AUTLOGSA AUTMSGSA AUTRPCSA
AUTRECSA AUTJESSA AUTGSSSA AUTMONSA AUTCONSA AUTXCFSA AUTSYSSA ATXCF2SA
ATSHUTSA ATNET1SA ATBASESA AWRK01SA AWRK02SA AWRK03SA AHW001SA AHW4459
AAUTO1SA AAUTO1SB AAUTO2SA AAUTO2SB *OPLOG01 *OPLOG02 *OPLOG03 ALBERTO
ARONNSA  RONN     SUHOOD   KYNEF    BERNICE  VALERIA  HIR      RAMJET
GRANT
16.Routing codes are associated with each message. When defining a console, you can
specify which routing codes are to be sent to that console. More than one routing code can
be assigned to a message to send it to more than one console. The system will use these
route codes and the console definitions to determine which console or consoles should
receive the message.
Route codes are not shown with the message on the console. To determine the route
codes for each console in your sysplex configuration, issue the DISPLAY CONSOLE,A
command. You can limit, and dynamically modify, the types of messages sent to a console
by assigning a route code or codes to a console. You can specify the route codes on the
VARY CN command. For example, you would use the
VARY CN(CONS01A),CONSOLE,ROUT=(1,2,9,10) command to assign route codes 1, 2, 9, and
10 to a console named CONS01A.
In addition to the points mentioned here, you should also refer to two excellent white papers:
Console Performance Hints and Tips for a Parallel Sysplex Environment, available at:
http://www.ibm.com/servers/eserver/zseries/library/techpapers/pdf/gm130166.pdf
Parallel Sysplex Availability Checklist, available at:
http://www.ibm.com/servers/eserver/zseries/library/whitepapers/pdf/availchk_parsys.pdf
For general information about console use and planning, refer to:
z/OS MVS Routing and Descriptor Codes, SA22-7624
z/OS MVS System Commands, SA22-7627
z/OS MVS Planning: Operations, SA22-7601
z/OS MVS Initialization and Tuning Reference, SA22-7592
Operations will need to understand how to correctly initialize a sysplex environment with
multiple systems, and what the procedure is for removing one or more systems from the sysplex.
If your incoming system was in a GRS environment that was not in a sysplex, you need to
update your operational procedures when it joins the target sysplex; the V GRS command is
not valid in a sysplex environment.
IXC102A XCF IS WAITING FOR SYSTEM sysname DEACTIVATION. REPLY DOWN WHEN
MVS ON sysname HAS BEEN SYSTEM RESET.
Figure 19-7 Partitioning a system out of a sysplex
By using an SFM policy to act upon this message, there will be no requirement for operator
intervention. The system will automatically be reset and partitioned out of the sysplex.
Without SFM in place, operations must manually do a System Reset of the LPAR to release
any hardware Reserves that may still be in place, followed by replying DOWN to the IXC102A
message.
For information on how to define and activate an SFM Policy, refer to z/OS MVS Setting Up a
Sysplex, SA22-7625.
ARM has the ability to restart failed address spaces on the same system as the failure, or to
do cross-system restarts. The implementation of an ARM policy could affect the way your
automation environment handles restarts of address spaces.
The Command Prefix Facility allows the entering of a subsystem command on any system
in the sysplex and have the command routed to whichever system the subsystem is
running on.
The CMDSYS parameter in CONSOLxx specifies which system commands entered on
that console are to be automatically routed to.
The SYNCHDEST parameter in CONSOLxx controls the handling of synchronous
WTORs.
The ROUTE command sends commands from any console to various systems in the
sysplex.
SDSF in conjunction with OPERLOG can provide a sysplex-wide view of all the syslogs
and outstanding WTORs from each system. If there is a requirement to limit or restrict the
ability to view such displays, then customization of the SDSF parameters is required.
Further information on this can be found in z/OS SDSF Operation and Customization,
SA22-7670.
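As a hedged sketch of two of these facilities (the system name SYSB is an illustrative assumption), the first command below routes a display to one system, the second broadcasts a command to every system in the sysplex, and the third uses a command prefix equal to the system name, as set up by the IEECMDPF sample, to achieve the same routing without the ROUTE command:
RO SYSB,D T
RO *ALL,D XCF,SYSPLEX,ALL
SYSBD T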
19.3.9 Housekeeping
When merging a system into the sysplex, certain housekeeping jobs may need to be updated
to include the incoming system. If System Logger has been implemented for OPERLOG
and/or EREP, a review of the housekeeping jobs is required to ensure that the data generated
in these environments is processed and maintained accordingly. The IEAMDBLG and
IFBEREP members of SYS1.SAMPLIB provide samples of how to define housekeeping jobs
to manage an OPERLOG and/or EREP environment that has been configured to use the
System Logger.
Other housekeeping functions, such as SMF dump processing, will need to be reviewed once
the systems have been merged, as there could be a common SMF switching exit (IEFU29)
being used which may cause conflict with the target sysplex.
19.4.1 Tools
The following tools may be of assistance for this merge task:
IEECMDPF
SYS1.SAMPLIB(IEECMDPF) is a sample program to define a Command Prefix that is equal
to the system name.
IFBEREPS
SYS1.SAMPLIB(IFBEREPS) is sample JCL to use the LOGREC logstream subsystem data
set interface.
IFBLSJCL
SYS1.SAMPLIB(IFBLSJCL) is sample JCL to define the LOGREC log stream.
IEARELCN
SYS1.SAMPLIB(IEARELCN) is a sample program that can remove a console definition from
a single system or sysplex.
IEAMDBLG
SYS1.SAMPLIB(IEAMDBLG) is a sample program to read records from the OPERLOG log
stream and convert them to syslog format. It can also be used to delete records from the log
stream.
IEACONXX
SYS1.SAMPLIB(IEACONXX) is a sample CONSOLxx member that utilizes console and
subsystem console naming and also has definitions for SNA MCS consoles.
19.4.2 Documentation
The following documentation may be of assistance for this merge task:
An IBM White paper entitled Console Performance Hints and Tips for a Parallel Sysplex
Environment, available on the Web at:
http://www.ibm.com/servers/eserver/zseries/library/techpapers/pdf/gm130166.pdf
Controlling S/390 Processors Using the HMC, SG24-4832
Hardware Management Console Operations Guide, SC28-6809
OS/390 MVS Multisystem Consoles Implementing Sysplex Operations, SG24-4626
zSeries 900 System Overview, SA22-1027
20
Chapter 20.
Hardware configuration
considerations
This chapter discusses the considerations for hardware configuration, management, and
definition when you are merging a system into an existing sysplex, including the following:
A migration table describing what is suitable for BronzePlex, GoldPlex or PlatinumPlex
configurations.
A checklist of items to consider when merging.
A methodology for carrying out the merge.
A list of tools and documentation that can assist with these tasks.
20.1 Introduction
The considerations discussed in this chapter will be described against the three following
target environments:
The BronzePlex environment is where the minimum number of resources are shared;
basically only the DASD volumes where the sysplex CDSs reside. A BronzePlex
configuration would typically consist of the minimal amount of sharing necessary to qualify
for PSLC or WLC license charging.
The GoldPlex environment is where a subset of the configuration is shared. This might
consist of the DASD volumes where the sysplex CDSs are located and some or all of the
system volumes.
The PlatinumPlex environment is where all resources within the sysplex configuration are
shared. This configuration potentially provides the largest benefits in terms of flexibility,
availability, and manageability.
20.2 Assumptions
The following assumptions have been made in this chapter to establish a baseline
configuration for the environment:
All systems and attached peripheral devices are currently located in the same machine
room.
There are two CFs.
The incoming system will remain in the same LPAR that it is in at the moment.
All CPCs containing members of the sysplex are accessible from all the HMCs in the
HMCplex.
There is currently one IODF for all the systems in the target sysplex, and another separate
IODF for the incoming system.
Table 20-1 Hardware configuration considerations
Consideration                                                        Note  Type
Consider how many IODFs you plan to use - a single IODF for the
sysplex, or multiple IODFs.                                           1    B,G,P
Review CF connectivity.                                              10    G,P
Create new CFRM policy and ensure all systems have connectivity
to all CDS DASD.                                                     11    G,P
The Type specified in Table 20-1 on page 328 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. The Notes
indicated in Table 20-1 on page 328 are described below:
1. Depending on your configuration, we recommend that a single IODF containing all of your
I/O definitions be used wherever possible. This will greatly reduce the amount of work
required to maintain your I/O configuration definitions. If you want to limit certain systems
to a subset of devices, you can still use a single IODF, but in this case use multiple
operating system configurations within the IODF and specify the appropriate config ID
within the LOADxx member.
If you feel that your I/O configuration is too large for effective management within a single
IODF, you may consider having a master IODF that you can carry out your changes to,
and then generate a subset IODF that is transferred to the corresponding target system
where it is imported and used as the IODF for that system.
The subset IODF constitutes a fully functional IODF. When it is built from a master IODF,
the processor tokens are preserved. There are no strict rules about what a subset IODF
must consist of, however, it typically contains:
A processor configuration with its related OS configuration, or
All I/O configurations describing the LPARs in a single CPC, or
All I/O configurations describing the systems of a sysplex
The contents of a subset IODF are specified in a configuration package. Refer to z/OS
Hardware Configuration Definition (HCD) User's Guide, SC33-7988 for information on the
Master IODF concept, subset IODFs, and working with configuration packages.
This subject is discussed in more detail in 20.4, Single or multiple production IODFs on
page 333.
2. Merging the incoming system into the target sysplex could present duplicate device
numbers that will need to be resolved. To check for this, use the HCD panels to get a
printout of both configurations, checking that any duplicate device numbers are
actually referring to the same physical device. Any duplicates that are not referring to the
same device will need to be resolved. As the merge of the system will require an outage, it
is an opportune time to resolve any duplicate device numbers. See 20.8, Duplicate device
numbers on page 341 for a further discussion about this.
3. It is possible, in a large sysplex, that not every system will have a channel-attached
console in a secure area. In this situation, you can use the HMC as the NIPCONS for
those systems. An advantage of using the HMC is that you can scroll back and see
messages issued earlier in the NIP process, whereas with a channel-attached NIP device,
once the messages roll off the screen, they cannot be viewed until the system is fully up and
running.
On the other hand, the HMC is slower than a normal console, so if you have many
messages issued during NIP, the use of the HMC may slow down the process somewhat.
Refer to 19.2, Consoles on page 311 for further information. This is also discussed in
20.9, NIP console requirements on page 341.
4. Virtual Storage Constraint Relief can be achieved by specifying that the UCBs for devices
reside above the 16 MB line by using the LOCANY parameter in HCD. You will need to
carefully review which devices are specified with the LOCANY parameter, as you may be
using products that do not support this. Refer to z/OS Hardware Configuration Definition
(HCD) User's Guide, SC33-7988 for further information.
You may also need to review the SQA parameter of the IEASYSxx member of Parmlib.
OS/390 and z/OS obtain an initial amount of SQA during IPL/NIP, prior to processing the
SQA=(x,y) parameter in IEASYSxx. This initial SQA allocation may not be sufficient if a
large number of duplicate volsers must be handled during NIP processing. The
INITSQA=(a,b) parameter in LOADxx can be used to circumvent SQA shortages during
NIP.
5. The Channel Measurement Block (CMB) accumulates utilization data for I/O devices. You
need to specify, on the CMB parameter in the IEASYSxx member of Parmlib, the sum of
the number of I/O devices that you need to measure that are not DASD or tape, plus the
number of devices that you plan to dynamically add and measure. Refer to z/OS MVS
Initialization and Tuning Reference, SA22-7592 for further information.
6. If you expect to make dynamic I/O configuration changes, you must specify a percentage
expansion for HSA. The amount of HSA that is required by the channel subsystem
depends on the configuration definition, processor, and microcode levels. This expansion
value can only be changed at Power-On Reset (POR). The D IOS,CONFIG(HSA) command
will display the amount of HSA available to perform dynamic configuration changes. Refer
to Figure 20-1.
D IOS,CONFIG(HSA)
IOS506I 16.21.08 I/O CONFIG DATA 361
HARDWARE SYSTEM AREA AVAILABLE FOR CONFIGURATION CHANGES
829 PHYSICAL CONTROL UNITS
4314 SUBCHANNELS FOR SHARED CHANNEL PATHS
1082 SUBCHANNELS FOR UNSHARED CHANNEL PATHS
623 LOGICAL CONTROL UNITS FOR SHARED CHANNEL PATHS
156 LOGICAL CONTROL UNITS FOR UNSHARED CHANNEL PATHS
Figure 20-1 Output from the D IOS,CONFIG(HSA) command
The Percent Expansion value specified on the HMC represents the portion of the HSA
that is reserved for the dynamic activation of a new hardware I/O configuration. It is based
on the amount of HSA that will be used to accommodate the I/O configuration in the
IOCDS that is to be the target of the next Power-On Reset (POR).
The Reset Profile will need to be updated if the value for the expansion factor needs to be
changed. Refer to 2064 zSeries 900 Support Element Operations Guide, SC28-6811 for
further information.
7. As part of the planning for the merge of the incoming system, you should have identified
all the devices currently accessed by the target sysplex systems that the incoming system
will require access to. Similarly, you should have identified which devices currently used by
the incoming system will be required by the other systems in the target sysplex. This
information will be part of the input into your decision about how many OS configurations
you will have in the target environment.
If your target environment is a BronzePlex, you would more than likely wish to have
multiple OS configurations, especially if all systems are using the same master IODF. The
use of multiple OS configurations can help you ensure that each system in the sysplex
only has access to the appropriate subset of devices. Relying on varying certain devices
offline at IPL time is not as secure as not making them accessible to the operating system
in the first place.
If your target is a GoldPlex, the decision of how many OS configurations to have will
depend on the degree of separation you want between the systems. If you would like the
ability to bring certain devices from the other subplex online at certain times, it may be
better to have just one OS configuration. On the other hand, if your requirement is closer
to a BronzePlex and you definitely do not want either subplex to be able to access the
other subplex's devices, then two OS configurations would be more appropriate to that
requirement.
If your target configuration is a PlatinumPlex, we recommend that you create a single,
merged OS configuration. By having just a single OS config, you reduce the complexity
and potential errors introduced when having to maintain multiple OS configurations for a
multisystem sysplex environment. Additionally, you ensure that the systems will behave
consistently (for example, specifying UNIT=TAPE will refer to the same pool of devices
regardless of which system the job runs on).
This topic is discussed further in 20.11, Single or multiple OS Configs on page 342.
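For example, once a single merged OS configuration is in place, a DD statement such as the following (the data set name is an illustrative assumption) should allocate from the same pool of tape drives no matter which system the job runs on:
//TAPEOUT  DD DSN=PROD.BACKUP.WEEKLY,DISP=(NEW,CATLG),UNIT=TAPE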
8. When deciding to merge a system into an existing sysplex, you will need to consider the
possibility of conflicting or no-longer-valid esoterics.
If your target environment is a BronzePlex, you will more than likely have a different OS
configuration for each subplex. In this case, you can probably leave the esoteric definitions
unchanged.
If you are moving to a single OS configuration, you will need to ensure that the esoterics
effectively support all systems. You should check the existing definitions in each system,
addressing any discrepancies, and bearing in mind that in a GoldPlex all of the devices will
not necessarily be online to all systems.
The decision to merge an incoming system into the target sysplex provides an opportunity
to clean up esoterics that may no longer be required in the target sysplex environment.
However, before you proceed with any deletes, you must ensure that esoterics were never
used when cataloging data sets. This restriction is documented in the section that
discusses defining non-VSAM data sets in z/OS DFSMS Access Methods Services for
Catalogs, SC26-7394.
This topic is discussed further in 20.10, Eligible Device Table and esoterics on page 341.
9. If you decide to configure your environment to have multiple IODFs, you will lose the ability
to initiate a single sysplex-wide dynamic I/O reconfiguration activate from the HCD panels
unless all the IODFs have duplicate names. The reason for this is that the HCD dialogs
checks the high level qualifier of the current IODF on every system against the high level
qualifier of the IODF that is being activated, and refuses the activate if they are not the
same. Even if the HLQs were the same, the IODF suffix must also be the same, because
the same suffix is used on the ACTIVATE command that is issued on every system; there
is no capability to specify different suffixes for the different systems.
10.Because the incoming system was not in the sysplex prior to the merge, you will need to
add connectivity from that LPAR to all the CFs in use in the sysplex (this presumes there
are already other LPARs on the same CPC already communicating with the CFs in
question). Generally speaking, two CF links from each CPC to each CF should provide
acceptable performance. To verify this, check that the utilization of each link is below about
30%. If utilization increases above that amount, you should consider adding more CF link
capacity, or using faster CF link technology.
11.There can only be one CFRM Policy active for the entire sysplex. If you are merging a
system that was already connected to an existing CF, you will need to review its CFRM
Policy definitions so that they can be migrated to the CFRM Policy of the target sysplex.
This is discussed in more detail in 2.4.1, Considerations for merging CFRM on page 28.
The DASD volume containing the Primary, Alternate, and Spare CFRM CDSs must be
accessible to all systems in the sysplex.
12.As soon as the incoming system joins the target sysplex, it will require XCF connectivity to
all the other systems in the sysplex. This connectivity can be obtained using CTCs
(ESCON or FICON), CF signalling structures, or a combination of the two.
The great advantage of using only CF structures is that no additional CTC paths are
required as you add more systems to the sysplex. The number of CTC paths you need to
provide full connectivity with redundancy is n*(n-1), where n is the number of systems in
the sysplex. So, you can see that as the number of systems in the sysplex grows, each
additional system requires a significant number of additional CTC paths, along with all the
attendant work to define and maintain this information in HCD and in the COUPLExx
members. On the other hand, if you are only using CF structures for XCF signalling, the
only change required when you add a system to the sysplex should be to increase the size
of the signalling structures. To assist in the resizing of the XCF Structure, you can use the
Parallel Sysplex Coupling Facility Structure Sizer (CFSizer), available on the Web at:
http://www.ibm.com/servers/eserver/zseries/cfsizer/
If you will only be using ESCON CTCs for your XCF signalling, you will need to update the
PATHIN/PATHOUT definitions defined in the COUPLExx member of Parmlib, to add the
new paths between the incoming system and the other systems in the target sysplex. If
you have not established a numbering scheme for your ESCON CTCs, this may be an
opportune time to do so. For a large CTC configuration, it can be difficult and complex to
design and define the CTC connections. To help minimize the complexity involved in
defining such a configuration, IBM developed a CTC device numbering scheme. The IBM
CTC device numbering scheme is discussed in detail in the zSeries 900 ESCON
Channel-to-Channel Reference, SB10-7034.
If your target sysplex configuration will be utilizing FICON, we recommend that you use
FICON for your CTC communications. Unlike the ESCON channel CTC communication,
which uses a pair of ESCON CTC-CNC channels, the FICON (FC mode) channel CTC
communication does not require a pair of channels because it can communicate with any
FICON (FC mode) channel that has a corresponding FCTC control unit defined. This
means that FICON CTC communications can be provided using only a single FICON (FC
mode) channel per processor. Further information on FICON CTC implementation can be
found in the IBM RedPaper entitled FICON CTC Implementation.
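As a minimal sketch of the COUPLExx signalling path definitions mentioned above (the structure name and CTC device numbers are illustrative assumptions), the incoming system could define structure-based paths, CTC-based paths, or both:
PATHOUT STRNAME(IXC_DEFAULT_1)
PATHIN  STRNAME(IXC_DEFAULT_1)
PATHOUT DEVICE(4500)
PATHIN  DEVICE(4510)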
13.When merging the incoming system into the target sysplex, you will need to ensure that
the incoming system has access to the same Sysplex Timers as the other systems in the
sysplex. If there are other LPARs on the same CPC as the incoming system that are
already part of the target sysplex, then you know that the required connectivity already
exists.
If the incoming system is the first LPAR on its CPC to join the target sysplex, the required
connectivity may not exist and must be put in place before the merge. Note that it is not
sufficient to just have connectivity to a Sysplex Timer; it must be the same Sysplex Timer
that the other members of the sysplex are connected to.
Finally, remember that the CLOCKxx member of Parmlib must be updated to indicate that
the incoming system will be using an external time reference (by specifying ETRMODE
YES).
14.You will need to ensure that the Coupling Facility Control Code (CFCC) level that is
running in your CF is at a high enough level to support any functions that may be in use on
the incoming system prior to the merge. A single CPC cannot support multiple CF levels.
This is discussed further in 2.4, Coupling Facility Resource Management on page 27.
Also, the following Web site contains a Coupling Facility Level (CFLEVEL)
Considerations document:
http://www.ibm.com/servers/eserver/zseries/pso/
15.In a GoldPlex, and certainly in a PlatinumPlex, both the incoming system and the systems
in the target sysplex will be accessing more DASD than they were previously. Or, to look at
it another way, the DASD belonging to those systems will have more systems using them,
which means more logical paths will be in use.
Unfortunately, there is no easy way to find out how many logical paths are in use for each
LCU today. The only way to get this information is to run a DSF ANALYZE report. This is
discussed in more detail in 20.12, ESCON logical paths on page 342. For now, it suffices
to say that you need to determine how many logical paths are in use for each LCU prior to
the merge, and then calculate how many additional paths will be required for each LCU
when you merge the incoming system and the target sysplex systems into a single
sysplex.
Even if your target environment is a BronzePlex, at a minimum the LCUs containing the
sysplex CDSs will see an increase in the number of logical paths in use. So, regardless of
whether you are aiming for a BronzePlex, a GoldPlex, or a PlatinumPlex, this task must be
carried out. The only difference is that in a BronzePlex, there will probably be fewer LCUs
affected by the merge.
16.To maintain a single point of control and ease of operational management, you should
connect all of the CPCs so that they are managed from a single HMCplex. Refer to 19.1,
One or more HMCplexes on page 302 for further discussion of this item.
17.It is important that you review the hardware microcode levels and associated software
levels or fixes. If the incoming system will be remaining in the same LPAR it is in today,
there should not be any concerns about CPC microcode levels (because none of that
should be changing as part of the merge). However, it is possible that the incoming system
will be accessing control units that it was not using previously. Similarly, the target sysplex
systems could be accessing control units that were previously only used by the incoming
system. In either case, you must ensure that you have any software fixes that might be
associated with the microcode levels of the control units. Your hardware engineer should
be able to provide you with information about the microcode levels of all hardware, and
any software pre-requisites or co-requisites. You should also review the relevant
Preventative Service Planning (PSP) buckets for that hardware.
20.4 Single or multiple production IODFs
Will all the systems use the same physical IODF data set when they IPL, or will you create
copies of that data set, so that some systems use one copy, and other systems use a
different one?
The decision about whether to have a single IODF containing all CPC and I/O definitions, or
to have multiple IODFs, each containing a subset of the configuration, depends to some
extent on the environment of your incoming system and target sysplex, and on what type of
configuration you are aiming for (BronzePlex, GoldPlex, or PlatinumPlex). There are two
terms we use to describe the different types of IODFs:
Master IODF
Cloned IODF
We will first explain why it is advantageous (or even necessary) to keep all the I/O definitions
in a single master IODF. After that, we discuss the considerations relating to how many
cloned IODFs you should have.
As a background to this discussion, let us consider a typical modern configuration. If the
configuration is small (one sysplex, one or maybe two CPCs), it is likely that all definitions will
be kept in a single master IODF, so we will specifically address a larger configuration. In this
case, there will probably be a number of sysplexes (production, development, and
sysprog/test), a number of CPCs, and a significant number of large capacity, high
performance DASD and tape subsystems. In this case, it is likely that each CPC will contain
multiple LPARs, with those LPARs being spread across a number of sysplexes. Similarly,
because of the capacity of the tape and DASD subsystems, and the wish to avoid single
points of failure, each DASD and tape subsystem will probably contain volumes that are
accessed by more than one CPC, and some of the volumes will be used by one sysplex, and
other volumes will belong to a different sysplex. So, your configuration is effectively a matrix,
with everything connected to everything else.
In a large, complex, environment like this, you want to do as much as possible to minimize the
overhead of maintaining the configuration (by eliminating duplication where possible), you
want to minimize the opportunities for mistakes (again by eliminating duplication), and you
want to centralize control and management as much as possible.
Against this background, we now discuss the considerations for how many master and cloned
IODFs you should have.
CF support
When connecting a CPC to a CF in HCD, both the CF and the CPC need to be defined in the
IODF. If you are connecting multiple CPCs to the same CF, you either have to define the same
CF over and over again in multiple master IODFs (if you have an IODF for each CPC), or else
you maintain a single master IODF containing all the CPCs and the CFs they will be
connecting to.
Switch connections
We recommend that you maintain your switch configurations in the same master IODF as the
hardware and software configuration definitions to provide complete validation of the data
path. In order to look up the port connections of an ESCON director, all connected objects to
the ports of a switch have to be defined in the same IODF.
CTC checking
HCD offers you the possibility to view and verify your CTC connections that are defined
through a switch. This is a very valuable function, as complex CTC configurations can be
difficult to manage and debug. However, in order to be able to fully utilize this capability, all
CPCs connected by the CTCs must be defined in the same IODF.
Use of HCM
If you use Hardware Configuration Manager (HCM), or are contemplating using this product, it
will be necessary to use a single master IODF if you wish to have a single HCM configuration
covering your total configuration. HCM does not provide the ability to extract from multiple
different IODFs into a single HCM configuration.
Summary
Based on the preceding points, we recommend that you keep all your configuration
information in a single master IODF. By using a single master IODF, you reduce the amount of
effort and time required to implement any I/O configuration change within your sysplex.
Summary
While HCD provides the ability to create an IODF containing just a subset of the master IODF,
there is probably not a significant benefit in using this capability. All management of the
contents of the IODF should always be done on the master, meaning that the subset IODFs
are really only used for IPL, in which case there is not a significant difference as to whether
the IODF contains the total configuration or just a subset.
If all the cloned IODFs contain the complete configuration, there is little difference from a
functionality point of view whether all the systems use the same IODF clone or if there is more
than one clone, within the restriction that for sysplex-wide dynamic I/O reconfiguration to
work, all the cloned IODF data sets must have the same names. Given that duplicate data set
names are not recommended if they can be avoided, you should ideally have a single IODF
data set that every member of the sysplex uses.
The exception is in a BronzePlex, where there is minimal sharing of DASD. Because the
SYS1.PARMLIB data set must be on the IODF volume or on the sysres, and you will probably
not be sharing SYS1.PARMLIB in a BronzePlex, it follows that you would also not be sharing
the IODF data set between the incoming system and the other systems in the target sysplex.
In this case, you should use the HCD Export/Import facility to send the IODF to the incoming
system and keep the same data set name on that system.
The IBM Redbook MVS/ESA HCD and Dynamic I/O Reconfiguration Primer, SG24-4037,
contains an excellent description of dynamic I/O reconfiguration and how it relates to IODFs.
It is recommended reading for anyone deciding how best to manage their IODFs.
The above diagram presumes that there are three LPARs. Therefore, the activates are a
software-only change in the first LPAR, a software-only change in the second LPAR, and a
hardware/software change in the last LPAR.
The activation of the dynamic change can be achieved by the MVS ACTIVATE command or via
HCD. For further information on this, refer to z/OS Hardware Configuration Definition (HCD)
User's Guide, SC33-7988 and z/OS MVS System Commands, SA22-7627.
Hardware and software activate (ACTIVATE IODF=yy) on the last system on the CPC
whose configuration is being changed. You may need to specify the FORCE option on
the ACTIVATE command if hardware deletes are involved.
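As a hedged sketch of the operator commands involved (the IODF suffix 63 is an illustrative assumption), a typical sequence might first validate the change, then perform the software-only activations, and finally the combined hardware and software activation:
ACTIVATE IODF=63,TEST
ACTIVATE IODF=63,SOFT
ACTIVATE IODF=63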
The process outlined above is a general high-level overview. Detailed information on how to
implement a dynamic I/O reconfiguration change can be found in Hardware Configuration
Definition (HCD) Planning, GA22-7525, and Hardware Configuration Definition (HCD)
Scenarios, SC33-7987.
When you press Enter, the screen shown in Example 20-3 is displayed.
Example 20-3 Status of dynamic I/O reconfiguration readiness
 Save  Query  Help
 --------------------------------------------------------------------------
                              Message List                       Row 1 of 2
 Command ===> ___________________________________________ Scroll ===> CSR

 Messages are sorted by severity. Select one or more, then press Enter.

 / Sev Msg. ID  Message Text
 _ I   CBDA781I Your system configuration provides full dynamic
 #              reconfiguration capability.
 ***************************** Bottom of data ******************************
By using the preceding process, you will then be able to confirm whether your systems are in
sync for a dynamic I/O reconfiguration change.
You could also use the Catalog Search Interface (CSI) to retrieve this information from the
catalogs. Catalog Search Interface (CSI) is a read-only general-use programming interface
that is used to obtain information about entries contained in ICF Catalogs. Refer to
SYS1.SAMPLIB(IGGCSI*) for sample programs on how to use the Catalog Search Interface.
The output from the ANALYZE command (shown in Example 20-5) tells you which paths are
established, which host adapter each is associated with, and the CPU serial number and
partition number of the host that is using each logical path.
Restriction: The ICKDSF ANALYZE command does not work for RAMAC Virtual Arrays
(9393).
If your environment consists of IBM and non-IBM hardware, you will need to liaise with
the hardware support representative to verify that you will not exceed the number of logical
paths supported by the respective control unit.
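The following JCL is a minimal sketch only; the unit address is an illustrative assumption, and the ANALYZE operands shown (NODRIVETEST NOSCAN, to request the report without exercising the drive or scanning data) should be verified against the ICKDSF documentation for your device type:
//ANALYZE  EXEC PGM=ICKDSF
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  ANALYZE UNITADDRESS(0A00) NODRIVETEST NOSCAN
/*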
20.13 CF connectivity
Connectivity of all systems in the target sysplex to the CF (integrated or external) will need to
be considered.
The connectivity from an operating system to a CF and vice versa is provided by coupling
links. z/OS and CF images may be running on the same or on separate machines. Every
z/OS image in the target sysplex must have at least one coupling link to each CF image.
For availability reasons, there should be:
At least two coupling links between z/OS and CF images
At least two CFs (not running on the same CPC)
At least one standalone CF (unless you plan on using System-Managed CF Structure
Duplexing)
There are several types of coupling links available, and their support will depend on the type
of CPC(s) you have installed in your target sysplex environment.
The types of coupling links available are:
9672
IC (connection within the same CPC only)
ICB (connection up to 10 metres)
ISC (connection up to 40 km)
2064 (z900)
IC3 (peer IC, connection within the same CPC only)
ICB (connection up to 10 metres - compatible with ICB on 9672)
ICB3 (peer ICB, connection up to 10 metres - zSeries only)
ISC (connection up to 40 km - compatible with ISC on 9672)
ISC3 (peer ISC, connection up to 40km - zSeries only)
2066 (z800)
IC3 (peer IC, connection within the same CPC only)
ICB3 (peer ICB, connection up to 10 metres - zSeries only)
ISC3 (peer ISC, connection up to 40km - zSeries only)
ISC (connection up to 40 km - compatible with ISC on 9672)
If your target sysplex environment has a mixed processor type configuration, you will need to
ensure that you have the correct coupling links installed.
ESCON              FICON
At least 2         1 or 2
Yes (1 of the 2)   No
Up to 512          Up to 16384
12-17 MB/sec       60-90+ MB/sec
                   Up to 32
Half duplex        Full duplex
20.15.2 Parmlib
If you will be using a Sysplex Timer, you must ensure that the CLOCKxx member of Parmlib is
configured correctly and that all systems will be using the same CLOCKxx member (or at
least, that all systems use a CLOCKxx with the same statements). The CLOCKxx
member for a system that is a member of a multisystem sysplex must contain a specification
of ETRMODE YES. The system then uses the Sysplex Timer to synchronize itself with the
other members of the sysplex. The system uses a synchronized time stamp to provide
appropriate sequencing and serialization of events within the sysplex.
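A minimal CLOCKxx sketch for such a system might look like the following; the TIMEZONE offset and the OPERATOR setting are illustrative assumptions:
OPERATOR NOPROMPT
TIMEZONE W.05.00.00
ETRMODE  YES
ETRZONE  YES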
More information about the interaction between the Sysplex Timer and the systems in a
sysplex is available in the IBM Redbook entitled S/390 Time Management and IBM 9037
Sysplex Timer, SG24-2070.
20.18.1 Tools
The following tools may be of assistance for this merge task:
XISOLATE
The XISOLATE tool can be used to assist you in identifying if there are single points of failure
affecting critical data sets that you specify. More information about XISOLATE can be found
at:
http://www.ibm.com/servers/eserver/zseries/pso/
IEFESOJL
This is a sample program provided in SYS1.SAMPLIB that is used to analyze the output from
an IDCAMS LISTCAT command, and report on any data sets that have been catalogued
using an esoteric.
ICKDSF
Using the ANALYZE command of ICKDSF, you can obtain a logical path status report for your
DASD subsystems.
20.18.2 Documentation
The following publications provide information that may be of assistance for this merge task:
FICON CTC Implementation, REDP0158
Hardware Configuration Definition (HCD) Planning, GA22-7525
Hardware Configuration Definition (HCD) Scenarios, SC33-7987
MVS/ESA HCD and Dynamic I/O Reconfiguration Primer, SG24-4037
OS/390 Parallel Sysplex Configuration Volume 3: Connectivity, SG24-5639
z/OS Hardware Configuration Definition (HCD) User's Guide, SC33-7988
zSeries 900 ESCON Channel-to-Channel Reference, SB10-7034
zSeries 900 System Overview, SA22-1027
zSeries 900 Technical Guide, SG24-5975
Coupling Facility Level information, available on the Web at:
http://www.ibm.com/servers/eserver/zseries/pso/
21
Chapter 21.
21.1 Overview
There are several reasons to rationalize system modifications as part of the sysplex merge
process. If the libraries containing the usermods and exits are shared (for example, on a
shared sysres), then rationalizing those modifications is a requirement for the sysplex
merge.
Even if the libraries are not shared, the modification may affect processing on other systems
in the sysplex. In a JES2 MAS environment, for example, converter/interpreter processing
can occur on any system within the MAS. A job submitted on one system within the MAS
may be interpreted on another system and execute on a third system. Any exits executed
during that process must act in a consistent and coherent manner across the sysplex.
Aside from the technical issues, there are the maintenance considerations. Maintaining
multiple copies of exits makes it difficult to build systems from a common base. Multiple
software bases incur larger support overheads. Part of sysplex merge planning should
include a full review of all system modifications. Only then can an informed decision be made
as to when and how the modifications need to be handled.
21.2 Definitions
An exit is a defined interface point, where the operating system or program product will call a
user-written program to make some sort of decision. Exits are usually written in assembler
and developed at a user installation.
A usermod is a modification to the operating system or support software. A usermod can
range from a simple modification of an ISPF panel, to changes to JES2 source code, to zaps
to system modules.
21.3 Simplicity
Any modifications to the operating system and support software should be avoided wherever
possible. The bulk of the time spent installing a new release goes into retrofitting exits and products
that are not supplied on the ServerPac, far more than the time that is spent testing.
Also, a significant amount of time is required to test and debug operating system
modifications. When debugging system software problems, exits and usermods are always
suspect. A handy rule of thumb is if the problem you are experiencing has not already been
reported in IBMLINK, then it is a good idea to check if local exits or usermods could be
involved.
Operating system exit points are maintained from release to release. There is a clearly
defined interface. The exit may need recompiling when migrating to another software release,
but it is unlikely that the interface will change.
Usermods, however, are not as robust. There is no defined interface, so even minor product
code changes can force a change to a usermod. Usermods are usually applied manually,
which can be a time-consuming task, especially if the modification is extensive. Even if
SMP/E is used to make the modification, refitting the modification can be tedious. Panel
updates, for example, cannot be packaged as updates. A full replacement usermod is
required. A refit of this type of usermod requires extracting the modification from the old
panel and integrating it into the updated version of the panel.
Table (fragment): examples of common product modifications include DFSORT options and
PL/I compiler and run-time options; see note a.
21.5.1 Documentation
It is important to document the reason for the modification, who requested it and when, and
what issue the modification addresses. Often the reason for a particular modification is lost
over time, and the modification is endlessly refitted as new software releases are installed.
Good documentation allows the applicability of all modifications to be reviewed at each
upgrade.
Often new operating system features can obviate the need for a modification. Software is
enhanced, partly, based on user experiences and feedback. So, it is common for later
releases to contain the same or similar functionality as many user modifications. If you have
found a shortcoming with a particular function, the chances are that other users will have too.
21.5.2 Tracking
It is essential to install and maintain system exits or modifications in a manner that is
trackable. It is all too easy to simply edit an ISPF panel definition to make a change, only to
have it accidentally regressed some time later, when maintenance is applied. And
regression is not the only possible consequence.
Consider the following example: Imagine you modify an OPC panel. If you make the change
in the OPC-provided library, a subsequent PTF could overwrite your modification, regressing
your change without your knowledge. On the other hand, if you copy the modified panel into a
library higher up the ISPPLIB concatenation, then when you apply the PTF, you are not
actually picking up all of the PTF because you are (probably unknowingly) running off an old
copy of the panel. Neither situation is pleasant, and neither is uncommon.
To avoid this situation, all system modifications should be made with SMP/E. This provides a
site-independent method of maintaining system modifications. Support personnel unfamiliar
with the documentation conventions of a particular site can still maintain a modification if
SMP/E is used.
All source code for system exits should be packaged as an SMP/E Usermod, even if the
particular exit does not have SMP/E definitions for the source code. With the source code
packaged with the usermod, there can be no doubt that the source does indeed match the
installed load module. The copy of the usermod in the SMPPTS can be viewed if there is any
doubt as to the contents of an exit.
Example 21-1 shows a sample SMP/E Usermod that can be used as a skeleton to install an
exit using source code. During SMP/E apply processing, the source element named in the
++SRC statement will be added or replaced. SMP/E will then assemble and linkedit the
module into the target.
Example 21-1 Skeleton SMP/E usermod
//         EXEC PGM=GIMSMP,REGION=6M
//SMPCSI   DD  DSN=SMPE.OS390.GLOBAL.CSI,DISP=SHR
//SMPCNTL  DD  *
  SET     BOUNDARY(GLOBAL).
  RECEIVE S(umod001) SYSMODS.
  APPLY   S(umod001).
/*
//SMPPTFIN DD DATA
++USERMOD(umod001) REWORK(yyyydddn).
++VER(Z038) FMID(fmid) PRE(prereqs).
++SRC(member).
 Exit source code
/*
Note: A ++JCLIN statement and linkedit JCL will be required if the exit is not already
defined to SMP/E. This can be checked using the SMP/E dialog. If the target zone does
not contain MOD and LMOD entries for the exit, then a JCLIN is required.
21.6 Documentation
The following documents contain information that may assist you in identifying and managing
user exits and user modifications on your system:
OS/390 MVS Installation Exits, SC28-1753
Parallel Sysplex - Managing Software for Availability, SG24-5451
SMP/E z/OS and OS/390 Reference, SA22-7772
SMP/E z/OS and OS/390 Users Guide, SA22-7773
z/OS MVS Installation Exits, SA22-7593
z/OS MVS Setting Up a Sysplex, SA22-7625
z/OS ISPF Planning and Customizing, GC34-4814
z/OS JES2 Installation Exits, SA22-7534
In addition, there is a non-IBM product called Operating System/Environment Manager that
eases the task of generating and maintaining system exits. More information about this
product can be found at:
http://www.triserv.com/os-em-in.html
21.7.1 Allowing duplicate TSO logons in a MAS
To allow duplicate logons, a small source code change is required to the HASPCNVT module
of JES2 as shown in Example 21-2.
Example 21-2 Sample JES2 Usermod
//MJ2OZ22  JOB TPSCP010,MJ2OZ22,CLASS=E,MSGCLASS=H,
//             NOTIFY=&SYSUID
//*-------------------------------------------------------------------*
//*                                                                   *
//* USERMOD:  MJ2OZ22                                                 *
//*                                                                   *
//* FUNCTION: ALLOW DUPLICATE TSO LOGON'S IN A MAS                    *
//*                                                                   *
//*-------------------------------------------------------------------*
//*
//SMPE     EXEC PGM=GIMSMP,REGION=6M
//SMPCSI   DD  DSN=SMPEG15.OS390.GLOBAL.CSI,DISP=SHR
//SMPCNTL  DD  *
  SET     BOUNDARY(GLOBAL) .
  RECEIVE S(MJ2OZ22) SYSMODS .
  SET     BOUNDARY(J2O2A0T) .
  APPLY   S(MJ2OZ22) REDO .
/*
//SMPPTFIN DD DATA
++USERMOD(MJ2OZ22) REWORK(20012781).
++VER(Z038) FMID(HJE7703) PRE(UW74954) .
++SRCUPD(HASPCNVT) .
./ CHANGE NAME=HASPCNVT
*        JZ    XTDUPEND       Skip duplicate logon check @420P190 05991200
         J     XTDUPEND       Allow duplicate logon in MAS MJ2OZ22 05991201
/*
TSO uses the SYSIKJUA enqueue major name to prevent duplicate logons. If you want to be
able to log on more than once in a MAS, this enqueue must not be propagated around the
sysplex. The default scope for this enqueue is SYSTEM, so ensure that the following entry is
not part of the GRSRNLxx member in use:
RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSIKJUA)
21.7.2 ISPF Exit 16: Log, List, and Temporary Data Set Allocation Exit
If a user needs to be able to log onto multiple systems in the sysplex concurrently using the
same logon ID, then consider using ISPF Exit 16.
By default, ISPF generates names for list and log data sets which are not unique within a
sysplex. A TSO user attempting to use ISPF on more than one system in a sysplex
concurrently will encounter data set enqueue conflicts.
ISPF Exit 16 can be used to provide a prefix to be added to the default ISPF-generated data
set names. The exit should ensure the prefix yields unique data set names on each system
in the sysplex. Adding the system name, for example, as a second or third level qualifier will
create unique names.
For example, ISPF data sets for system PLX01 would be called:
USERID.PLX01.HFS
USERID.PLX01.ISPPROF
USERID.PLX01.ISR0001.BACKUP
USERID.PLX01.RMF.ISPTABLE
USERID.PLX01.SPFLOG1.LIST
USERID.PLX01.SPF120.OUTLIST
USERID.PLX01.SPF3.LIST
The sample exit shown in Example 21-3 will add the system name to the end of the
ISPF-generated data set name prefix for all ISPF list and log data sets. The sample assumes
a system name length of 5 bytes. The code should be modified for other, or variable length,
system names.
For details concerning the installation of this exit, consult Interactive System Productivity
Facility (ISPF) Planning and Customization, SC28-1298.
Example 21-3 ISPF Exit 16
//MOS#Z21  JOB ACCT,MOS#Z21,CLASS=E,MSGCLASS=H,
//             NOTIFY=&SYSUID
//*-------------------------------------------------------------------*
//*                                                                   *
//* USERMOD:  MOS#Z21                                                 *
//*                                                                   *
//* FUNCTION: ADD SYSTEM NAME TO ISPF LIST/LOG/SPFTEMP DATASETS       *
//*                                                                   *
//*-------------------------------------------------------------------*
//*
//MOS#Z21  EXEC PGM=GIMSMP,REGION=6M
//SMPCSI   DD  DSN=SMPEG15.OS390.GLOBAL.CSI,DISP=SHR
//SMPCNTL  DD  *
  SET     BOUNDARY(GLOBAL).
  RECEIVE S(MOS#Z21) SYSMODS .
  SET     BOUNDARY(OS#2A0T) .
  APPLY   S(MOS#Z21) .
/*
//SMPPTFIN DD DATA
++USERMOD(MOS#Z21) REWORK(20010500) .
++VER(Z038) FMID(HIF5A02).
++JCLIN .
//LKED1    EXEC PGM=IEWL,PARM='LIST,XREF,REUS,RENT'
//SYSPRINT DD SYSOUT=A
//SYSLMOD  DD DSN=SYS1S2A.SISPLOAD,DISP=SHR
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(3,1))
//SYSLIN   DD *
  ORDER   ISPXDT
  ENTRY   ISPXDT
  INCLUDE AUSERMOD(ISPFUX16)
  INCLUDE AISPMOD1(ISPXDT)
  NAME    ISPEXITS(R)
++SRC(ISPFUX16) DISTLIB(AUSERSRC).
ISPFUX16 CSECT
ISPFUX16 AMODE 31
ISPFUX16 RMODE 24
         BAKR  R14,0              STACK SYSTEM STATE
         LR    R12,R15            GET EXIT ENTRY ADDR
         USING ISPFUX16,R12       ESTABLISH BASE ADDR
         L     R2,28(0,R1)        GET PREFIX ADDR
         L     R3,24(0,R1)        GET LENGTH ADDR
         L     R5,0(0,R3)         LENGTH OF PREFIX
         LA    R4,0(R5,R2)        POINT TO END OF PREFIX
         MVI   0(R4),C'.'         ADD PERIOD FOR NEW QUALIFIER
         L     R7,CVTPTR          GET CVT ADDR
         USING CVT,R7             MAP CVT
         MVC   1(5,R4),CVTSNAME   ADD SYSTEM NAME
         DROP  R7                 DROP CVT
         LA    R5,6(0,R5)         CALC NEW PREFIX LENGTH
         ST    R5,0(0,R3)         SAVE NEW PREFIX LENGTH
         SR    R15,R15            SET ZERO RETURN CODE
         PR                       UNSTACK SYSTEM STATE
*
         REGEQU
         CVT   LIST=NO,DSECT=YES
         END
++SRC(ISPXDT) DISTLIB(AUSERSRC).
ISPMXED  START
         ISPMXLST (16)
         ISPMXDEF 16
         ISPMEPT  ISPFUX16
         ISPMXEND
ISPMXED  END
ISPMXDD  START
ISPMXDD  END
         END
/*
Note: A user's ISPF profile data set should be allocated with DISP=OLD. Thus, each
instance of a TSO user's session should use a different ISPPROF data set. This can be
done as part of the logon process, usually in the initial clist executed from the logon
procedure.
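One way to do this is with a small REXX exec run from the logon procedure. The following fragment is only a sketch, not part of the original example: the data set naming convention, the space values, and the point at which the exec runs are all assumptions made here for illustration.
/* REXX - hypothetical fragment of a TSO logon exec                  */
/* Allocate a system-unique ISPF profile data set so that each       */
/* concurrent logon in the sysplex uses its own ISPPROF.             */
sysname = MVSVAR('SYSNAME')                  /* e.g. PLX01           */
profdsn = USERID()'.'sysname'.ISPPROF'
if SYSDSN("'"profdsn"'") \= 'OK' then
  "ALLOC FI(ISPPROF) DA('"profdsn"') NEW CATALOG DIR(20)",
    "SPACE(2,1) CYLINDERS DSORG(PO) RECFM(F,B) LRECL(80) BLKSIZE(6160)"
else
  "ALLOC FI(ISPPROF) DA('"profdsn"') OLD REUSE"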
Note: The PDF Data Set Name Change Exit is called for more than just edit recovery data
set allocation. The sample exit processes only those data sets used for edit recovery by
checking the allocation reason parameter passed to the exit. If the reason is RECOVERY,
then the data set is the edit recovery data set. If the reason is TEMP, then the data set is a
temporary PDF data set. All other data set names are ignored.
For full details refer to Interactive System Productivity Facility (ISPF) Planning and
Customization, SC28-1298.
//SYSLIN   DD *
  INCLUDE SYSPUNCH(JES2X04)
  ENTRY   JES2X04
  NAME    JES2X04(R)
/*
++SRC(JES2X04) DISTLIB(USERSRC) .
JES2X04 TITLE 'JES2 USER EXIT 4-- PROLOG (MODULE COMMENT BLOCK)'
***********************************************************************
*                                                                     *
* MODULE NAME : JES2X04                                               *
*                                                                     *
* DESCRIPTIVE NAME : JES2 USER EXIT 4 - JCL/JECL SCAN ROUTINE         *
*                                                                     *
* FUNCTION: THIS EXIT CAPTURES THE SYSTEM NAME FROM THE /*XEQ CARD    *
*           AND INSERTS A SYSTEM AFFINITY JOBPARM STATEMENT           *
*                                                                     *
* DESCRIPTION:                                                        *
*                                                                     *
* ENVIRONMENT : JES2 MAIN TASK                                        *
*                                                                     *
* NOTES                                                               *
*   DEPENDENCIES = JES2 $EXIT FACILITY, STANDARD JES2 SERVICES        *
*   REQUIREMENTS = THIS CODE REQUIRES THE USE OF JCTUSER3 AND 4.      *
*                  IF THIS IS NEEDED FOR ANY OTHER PURPOSE, CHANGE    *
*                  THIS EXIT TO USE SOME OTHER AVAILABLE JCT          *
*                  USER FIELD.                                        *
*   ATTRIBUTES   = NON-REENTRANT, RMODE(ANY), AMODE(24/31)            *
*                                                                     *
* ENTRY POINTS : EXIT4                                                *
*                                                                     *
* REGISTER USAGE (ENTRY/EXIT) :                                       *
*   REG    VALUE ON ENTRY               VALUE ON EXIT                 *
*   R0     A CODE PASSED TO THE         UNCHANGED                     *
*          ROUTINE BY JES2:                                           *
*            0 INDICATES JECL                                         *
*            4 INDICATES JCL                                          *
*   R1     ADDRESS OF 3-WORD            UNCHANGED                     *
*          PARAMETER LIST:                                            *
*            +0 ADDRESS OF IMAGE BUFFER                               *
*            +4 ADDRESS OF RDWFLAGX                                   *
*            +8 ADDRESS OF JCTXWRK                                    *
*   R2-R9  N/A                          UNCHANGED                     *
*   R10    ADDRESS OF THE JCT OR ZERO   UNCHANGED                     *
*   R11    ADDRESS OF THE HCT           UNCHANGED                     *
*   R12    N/A                          UNCHANGED                     *
*   R13    ADDRESS OF THE PCE           UNCHANGED                     *
*   R14    RETURN ADDRESS               UNCHANGED                     *
*   R15    ENTRY ADDRESS                RETURN CODE                   *
*                                                                     *
* RETURN CODES (R15 ON EXIT)                                          *
*    0  TELLS JES2 THAT IF THERE ARE ANY ADDITIONAL EXIT ROUTINES     *
*       ASSOCIATED WITH THIS EXIT, CALL THE NEXT CONSECUTIVE          *
*       EXIT ROUTINE. IF THERE ARE NO OTHER EXIT ROUTINES ASSO-       *
*       CIATED WITH THIS EXIT, CONTINUE WITH NORMAL PROCESSING.       *
*    4  TELLS JES2 THAT EVEN IF THERE ARE ADDITIONAL EXIT ROUTINES    *
*       ASSOCIATED WITH THIS EXIT, IGNORE THEM. CONTINUE WITH         *
*       NORMAL PROCESSING.                                            *
*    8  FOR JES2 CONTROL STATEMENTS, TELLS JES2 NOT TO PERFORM        *
*       STANDARD HASPRCCS PROCESSING; INSTEAD, IMMEDIATELY            *
*       CONVERT THE STATEMENT TO A COMMENT (//*) WITH THE NULL-       *
*       ON-INPUT FLAG SET TO ONE AND WRITE THE STATEMENT TO THE       *
*       JCL DATA SET. FOR JCL STATEMENTS, TELLS JES2 TO PERFORM       *
*       STANDARD HASPRDR PROCESSING.                                  *
*   12  TELLS JES2 TO CANCEL THE JOB BECAUSE AN ILLEGAL CONTROL       *
*       STATEMENT HAS BEEN DETECTED; OUTPUT IS PRODUCED.              *
*   16  TELLS JES2 TO PURGE THE JOB; NO OUTPUT IS PRODUCED.           *
*                                                                     *
* MACROS = JES2 - $ENTRY, $MODEND, $MODULE, $MSG, $RETURN, $SAVE,     *
*                 $WTO                                                *
* MACROS = MVS  - NONE                                                *
*                                                                     *
* CHANGE ACTIVITY:                                                    *
*                                                                     *
***********************************************************************
*
         TITLE 'JES2 USER EXIT 04-- PROLOG ($HASPGBL)'
         COPY  $HASPGBL
*
JES2X04  $MODULE ENVIRON=JES2,                                         C
               $BUFFER,           GENERATE HASP I/O BUFFER DSECT       C
               $CAT,              GENERATE HASP CAT DSECT              C
               $CMB,              GENERATE HASP CMB DSECT              C
               $DCT,              GENERATE HASP DCT DSECT              C
               $DTE,              GENERATE HASP DTE DSECT              C
               $ERA,              GENERATE HASP ERA DSECT              C
               $HASPEQU,          GENERATE HASP EQUATES DSECT          C
               $HCT,              GENERATE HASP HCT DSECT              C
               $JCT,              GENERATE HASP JCT DSECT              C
               $JQE,              GENERATE HASP JQE DSECT              C
               $KIT,              GENERATE HASP KIT DSECT              C
               $MIT,              GENERATE HASP MIT DSECT              C
               $PADDR,            GENERATE HASP PADDR DSECT            C
               $PCE,              GENERATE HASP PCE DSECT              C
               $PIT,              GENERATE HASP PIT DSECT              C
               $RDRWORK,          GENERATE HASP RDRWORK DSECT          C
               $TQE,              GENERATE HASP TQE DSECT              C
               $USERCBS,          GENERATE HASP USERCBS DSECT          C
               $XECB,             GENERATE HASP XECB DSECT             C
               $XIT,              GENERATE HASP XIT DSECT              C
               ASCB,              GENERATE MVS ASCB DSECT              C
               RPL                GENERATE MVS RPL DSECT
JES2X04  RMODE ANY
*
         TITLE 'JES2 USER EXIT 04--- JCL/JECL SCAN'
***********************************************************************
* REGISTER USAGE (INTERNAL)                                           *
*                                                                     *
*   REG      VALUE                                                    *
*   R0       PARAMETER FROM JES2                                      *
*   R1-R9    WORK REGISTERS                                           *
*   R10      JCT ADDRESSABILITY                                       *
*   R11      HCT ADDRESSABILITY                                       *
*   R12      EXIT4 ADDRESSABILITY                                     *
*   R13      PCE ADDRESSABILITY                                       *
*   R14      LINK/WORK REGISTER                                       *
*   R15      LINK/WORK REGISTER                                       *
*                                                                     *
***********************************************************************
         SPACE 1
EXIT4    $ENTRY BASE=(R12)        PROVIDE EXIT ROUTINE ENTRY POINT
         $SAVE                    SAVE CALLER'S REGISTERS
         USING JCT,R10            ESTABLISH JCT ADDRESSABILITY
         USING HCT,R11            ESTABLISH HCT ADDRESSABILITY
         SPACE 1
         LR    R6,R1              SAVE POINTER TO PARAMETERS
         LR    R7,R0              SAVE FLAG PARAMETER
         LR    R12,R15            ESTABLISH BASE REGISTER
         LTR   R10,R10            IF JCT NOT PRESENT
         BZ    X4RC00             IF JCT=0, NOT JCL
*
         CLI   JCTJOBID,C'J'      NOT INTERESTED IN STC OR TSO USER
         BNE   X4RC00             SO LEAVE FOR THESE
         LTR   R7,R7              R0=0 ON ENTRY IMPLIES CALL IS FOR A
         BNZ   X4RC00             JES2 CONTROL STMNT
* ---------------------------------------------------------------------
* IF THIS IS A '/*ROUTE' OR '/*XEQ' CARD, PROCESSING TO FIND THE
* TARGET SYSTEM ID WILL BE COMMON. JUST HAVE TO CHECK FIRST THAT
* THE ROUTE CARD IS NOT FOR PRINT.
* ---------------------------------------------------------------------
JESCC    DS    0H
         CLI   JCTUSER3,X'FF'     HAVE WE SEEN A SYSAFF YET ?
         BE    X4RC00             YES. SKIP FURTHER PROCESSING
*
         L     R3,0(0,R6)         GET ADDRESS OF JCL CARD
         CLC   =CL8'/*ROUTE ',0(R3) IS IT A ROUTE CARD?
         BE    LBL0012            - YES, PROCESS
         CLC   =CL6'/*XEQ ',0(R3) IS IT AN XEQ CARD?
         BNE   X4RC00             - NO, BRANCH
         LA    R1,5(R3)           POINT R1 PAST THE XEQ
         B     LBL0015            AND CHECK FOR TARGET SYSID
* ---------------------------------------------------------------------
* CHECK HERE TO ENSURE THAT THE '/*ROUTE' CARD IS FOR AN XEQ AND
* NOT A PRINT.
* ---------------------------------------------------------------------
LBL0012  DS    0H
         LA    R1,7(R3)           POINT PAST THE '/*ROUTE'
         TRT   0(72-7-8,R1),TBNBLNK FIND FIRST NON-BLANK
         BZ    X4RC00             REST OF CARD BLANK SO LEAVE
         CLC   =CL3'XEQ',0(R1)    FOUND AN XEQ?
         BNE   X4RC00             - NO, LEAVE
         LA    R1,3(R1)           POINT PAST THE 'XEQ'
* ---------------------------------------------------------------------
* COMMON PROCESSING TO IDENTIFY THE TARGET SYSTEM ID.
* - AFTER THE EXECUTE, R1 POINTS TO THE TARGET ID
* ---------------------------------------------------------------------
LBL0015  DS    0H
         LA    R15,72-1(,R3)
         SR    R15,R1             USE TRT 0(*,*,R1),TBNBLNK
         EX    R15,TRTNBLNK       TO FIND NEXT NON-BLANK
         BZ    X4RC00             - DIDN'T FIND ONE SO LEAVE
*
         MVC   JCTUSER3,BLANKS    CLEAR JCTUSER3
         MVC   JCTUSER4,BLANKS    CLEAR JCTUSER4
LBL0020  DS    0H
         LA    R2,8               NODE NAME LENGTH MAXIMUM
         LA    R3,JCTUSER3        ADDR TO PUT NODE NAME
LBL0030  DS    0H
         CLI   0(R1),C' '         END OF NODE NAME ?
         BE    LBL0040            YES. BRANCH
         CLI   0(R1),C','         END OF NODE NAME ?
         BE    LBL0040            YES. BRANCH
         MVC   0(1,R3),0(R1)      MOVE A NODE NAME CHARACTER
         LA    R1,1(0,R1)         MOVE TO NEXT CHARACTER
         LA    R3,1(0,R3)         MOVE TO NEXT CHARACTER
         BCT   R2,LBL0030         LOOP THRU NODE NAME
* ---------------------------------------------------------------------
* ADD A /*JOBPARM SYSAFF=XXX IF JOB DESTINED FOR NEW SYSPLEX SYSTEM
* ---------------------------------------------------------------------
LBL0050  DS    0H
         MVC   JCTXWRK(80),AFFCARD     BUILD SYSTEM AFFINITY STMT
         MVC   JCTXWRK+17(8),JCTUSER3  ADD SYSTEM NAME
         OI    RDWFLAGX,RDWXXSNC       INDICATE STATEMENT INSERTED
         B     X4RC00
*
AFFCARD  DC    CL80'/*JOBPARM SYSAFF='
*
* ---------------------------------------------------------------------
* COMMON EXIT
* ---------------------------------------------------------------------
X4RC00   DS    0H
         LA    R15,0              SET RC=0
         $RETURN RC=(R15)         SAVE RETURN CODE
         SPACE 3
         EJECT
* ---------------------------------------------------------------------
* LITERALS
* ---------------------------------------------------------------------
BLANKS   DC    CL4' '
* ---------------------------------------------------------------------
* NON-BLANK CHECKING
* ---------------------------------------------------------------------
TRTNBLNK TRT   0(*-*,R1),TBNBLNK  ** EXECUTED
*
TBNBLNK  DC    256X'FF'
         ORG   TBNBLNK+C' '
         DC    X'00'
         ORG   ,
*
         LTORG
         $MODEND
         END
$$
Chapter 22. Maintenance considerations
This chapter discusses the considerations for rolling out new releases and planned
maintenance for a system that is being moved into an existing sysplex.
The following definitions are used in this chapter:
Maintenance environment - A set of SMP/E target and distribution libraries and CSIs, or non-SMP/E product target libraries, that are used to maintain the operating system, subsystems, and related program products. These libraries are the source for the system runtime environment, but would never actually be IPLed from.
sysres or sysres set - The volume, or set of volumes, containing the runtime system libraries from which the systems are actually IPLed.
Version HFS - The IBM-supplied root HFS data set containing files and executables for the operating system elements. We use this term to avoid confusion with the sysplex root HFS data set.
While we recommend that you merge the maintenance environments when moving a system
into a sysplex, there are no technical requirements for doing this. You need to analyze your
requirements to decide the structure that best fits your environment's needs. To help you
understand your requirements, review the considerations in Table 22-2 on page 367.
In this chapter, we specifically address the z/OS environment; however, the principles could
easily be applied to the subsystems as well. While you could merge all the environments
(z/OS, CICS, DB2, and so on) into a single SMP/E environment, we recommend maintaining
a separate SMP/E environment for each SREL, as this provides more flexibility and
separation of responsibilities. Also, the environment that this discussion is based on is as
follows:
There is one SMP/E Global zone, which is used for all z/OS systems.
The libraries pointed to by the Target zone DDDEFs are normally not used to IPL. Those
libraries are used to run SMP/E Apply jobs against, and they are then copied to create
IPLable sysreses.
In a BronzePlex, the libraries would be copied to tape, and then back to the target sysres.
In a GoldPlex or a PlatinumPlex, they would be copied directly to the target sysres.
All SMP/E jobs are normally run on a specific, non-production, system.
It is important to have, or develop, a maintenance strategy that meets your technical and
business requirements. A preventative maintenance strategy can help avoid system or
subsystem outages; a large percentage of system outages experienced by customers are
caused by problems for which a fix had already been available for more than six months.
However, when merging a system into a sysplex, you also need to consider the usage of each
system, in terms of the maintenance practices and strategy. Refer to 22.4, Maintenance
strategy on page 371 for more information.
366
In line with the target sysplex environment options outlined in 1.2, Starting and ending
points on page 2, moving the incoming system into a BronzePlex would typically be done to
obtain the benefits of PSLC or WLC, so there is no requirement to merge the maintenance
environments of the incoming system into the target sysplex. This would mean maintaining
the two maintenance environments and sysres structures as you do today.
We would, however, recommend that some merging or consolidation be considered.
Maintaining multiple maintenance environments increases system programmer workload and
increases the chances of mistakes being made. If possible, we recommend merging into a
single SMP/E Global zone so Enhanced HOLDDATA would only need to be received once,
and all SMP/E management can be done through a single interface. Of course, limited DASD
sharing and catalog issues in the BronzePlex configuration may make this difficult to achieve.
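For instance, with a single global zone the Enhanced HOLDDATA only has to be received once for all systems. A minimal sketch of such a job follows; the HOLDDATA data set name is a placeholder, and the CSI name simply follows the naming used elsewhere in this chapter:
//RECHOLD  EXEC PGM=GIMSMP,REGION=6M
//SMPCSI   DD DSN=SMPE.GLOBAL.CSI,DISP=SHR
//SMPHOLD  DD DSN=hlq.ENHANCED.HOLDDATA,DISP=SHR
//SMPCNTL  DD *
  SET     BOUNDARY(GLOBAL) .
  RECEIVE HOLDDATA .
/*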
Moving the incoming system into a GoldPlex would be done to share the system software
environment (sysres), plus other selected shared system infrastructure, so merging the
maintenance environment would be essential. Merging, in this case, means understanding
the product mix between systems and making all required products available on a single
sysres set.
A GoldPlex may or may not include the sharing of the Master Catalog between all systems in
the target sysplex, so the user catalogs may or may not also be shared. Therefore, you will
need to consider how you want to manage the sysres and the maintenance environment's
data set catalog entries. Assuming most, if not all, data sets on the sysres are cataloged in
the Master Catalog, you will need to define a process to keep the Master Catalogs in sync
across the sysplex.
You will also need to consider how you will manage data sets such as distribution libraries
and SMP/E data sets within the maintenance environment. Either all those data sets must be
in a user catalog that is shared by all systems in the sysplex, or else all management of the
maintenance environment must be done from a single system.
Moving the incoming system into a PlatinumPlex would result in shared DASD, a shared
sysres, maintenance environment, Master Catalog and all user catalogs, a single SMSplex,
and so on. This configuration gives you the full benefits of sharing, and the
maintenance environment data sets will be available from any system within the sysplex.
Table 22-2 Considerations for merging the maintenance environment (fragment). Each
consideration carries a note reference, the target environment types it applies to
(B = BronzePlex, G = GoldPlex, P = PlatinumPlex), and a Done? column. The one fully
recoverable entry is: choose an HLQ for maintenance and SMP/E data sets that will allow
for where those data sets will be accessed from (Type: B, G, P); the remaining entries
apply to types G, P.
The Type specified in Table 22-2 on page 367 relates to the sysplex target environment: B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex. The
notes highlighted in Table 22-2 refer to the following points:
1. This consideration only applies to environments where the incoming system and the target
sysplex systems will not be sharing a sysres, which is most likely a BronzePlex.
You will need to ensure that the operating system of the incoming system is at the
appropriate level to coexist with the other systems within the target sysplex. You will also
need to ensure that the appropriate coexistence maintenance is applied to all systems
within the target sysplex prior to merging the incoming system into the target sysplex; refer
to 22.3, Coexistence levels on page 370 for more information. For additional information
on the options available to you for obtaining the required maintenance, refer to 22.4.4,
How to obtain the required maintenance on page 375.
2. It is important to have, or develop, a maintenance strategy that matches your technical and
business requirements. A preventative maintenance strategy can help avoid system or
subsystem outages. However, when merging a system into a sysplex, you also need to
consider the usage of each system (for example, production systems are typically updated
less frequently than test systems), in terms of the maintenance practices and strategy.
Refer to 22.4, Maintenance strategy on page 371 for more information.
3. If you are planning to use a single sysres, you will need to review the product mix to
ensure all the products available on the sysres currently being used by the incoming
system are also available on the shared sysres.
4. The GoldPlex configuration might not include the sharing of the Master Catalog. Equally,
you may or may not plan on sharing the system-related user catalogs (for example, those
containing the distribution libraries). Therefore, you will need to consider how you want to
manage the sysres data set catalog entries. Assuming most, if not all, data sets on the
sysres are cataloged in the Master Catalog you will need to define a process to keep the
Master Catalogs in sync across the sysplex.
You will also need to consider how you want to manage the maintenance environments
data set catalog entries. The options would be to either maintain your maintenance
environment from a single system within the target sysplex, or make the HLQ(s) for the
data sets within the maintenance environment, such as the distribution libraries and
SMP/E data sets, available in a user catalog that is shared by all systems within the
sysplex. If you choose the latter option, there may be issues if your maintenance
environment's data sets are SMS-managed. In a PlatinumPlex configuration, there would
be a single Master Catalog and a single SMSplex, so this would not be an issue.
We would recommend that, in the GoldPlex configuration, you maintain your maintenance
environment from a single system within the target sysplex. For suggested data set
naming standards, refer to Table 22-3 on page 380.
22.3 Coexistence levels
Coexistence occurs when two or more systems at different software levels share resources.
The resources could be shared at the same time by different systems in a multisystem
configuration. Examples of coexistence are: two different JES releases sharing a spool; two
different service levels of DFSMSdfp sharing catalogs; multiple levels of SMP/E processing
SYSMODs packaged to exploit the latest enhancements; or an older level of the system using
the system control files of a newer level. The sharing of resources is inherent in multisystem
Parallel Sysplex configurations. The way in which you make it possible for systems to coexist
within a sysplex environment is to install coexistence service (PTFs) on the earlier-level
systems.
Figure (not reproduced): Normal coexistence policy. Up to four consecutive releases (n, n+1,
n+2, and n+3) are supported together; for example, OS/390 2.8, 2.9, and 2.10 with z/OS 1.1,
or OS/390 2.10 with z/OS 1.1, 1.2, and 1.3, and so on through z/OS 1.4.
You will need to ensure that the operating system of the incoming system is at the appropriate
level to coexist with the other systems within the target sysplex. You will also need to ensure
that the appropriate coexistence maintenance is applied to all systems within the target
sysplex and the incoming system, prior to bringing the incoming system into the target
sysplex.
370
Subsystem levels will also need to be reviewed to ensure there are no compatibility,
coexistence, or toleration maintenance requirements. This is a requirement if either the
incoming system or the target sysplex is to be used as a target for subsystem restarts using
ARM, or a similar automatic restart mechanism, and one is at a different operating system
level than the other.
To obtain a list of the required service, refer to Chapter 5, Ensuring coexistence and
fallback, in OS/390 Planning for Installation, GC28-1726, or z/OS Planning for Installation,
GA22-7504, at the appropriate operating system level.
The on-request option is only shipped when you specifically request it, and it provides all
PTFs, not just those that are available for 90 days or more.
Customer Built Product Delivery Option (CBPDO)
CBPDO provides monthly PER-closed PTFs and, in addition, includes corrective (COR)
closed reach-ahead fixes for HIPERS and PEs on a weekly basis. CBPDOs are normally
ordered on an as-needed basis; however, some countries provide the option of delivering
CBPDOs on a subscription basis.
ServerPac Offering
The ServerPac delivery vehicle integrates service not only for z/OS itself, but for all
orderable products on the z/OS system replace checklist. The corrective service follows
the traditional distribution procedures.
RefreshPac Offering
RefreshPac is a fee offering that provides tailored service based on a copy of the SMP/E
CSIs that you send to IBM. The service shipped in the RefreshPac is install-tested in IBM
prior to shipment to ensure that it will SMP APPLY cleanly.
When you order a RefreshPac/MVS, you can optionally order the automatic shipment of
periodic follow-on service packages for your order. These packages, called selective
follow-on service, contain service that is applicable to your RefreshPac/MVS order and
that has become available since the latest preceding order or selective follow-on service
package was built. When you request selective follow-on service, you specify the desired
number of follow-on packages (1 or 2) and the number of days between packages (21 to
60).
You can also obtain individual or groups of PTFs using the SRD function in IBMLINK.
Requested PTFs will be either shipped over the network or on tape media.
The products that participate in the Consolidated Service Test (CST) process described below include z/OS, OS/390, CICS Transaction Server for OS/390, DB2 UDB for OS/390, IMS, and MQSeries for OS/390.
Revised RSU
Beginning in November 2001, IBM changed its recommendation for service and redefined the
criteria for how RSUs are assigned to fixes. This new recommendation is complemented by
the additional testing performed by CST. Like the prior RSU process, recommendations come
out monthly in the form of SMP/E ++ASSIGN statements (RSUyymm). You can order both the
RSU ++ASSIGN statements, and the PTFs associated with a given RSU using SUF and
ShopzSeries; see 22.4.4, How to obtain the required maintenance on page 375 for more
details on ShopzSeries and SUF.
However, the criteria for deciding which RSU should be assigned to a PTF have undergone a
number of changes. The biggest change is that a PTF must have gone through a CST
monthly test cycle prior to being assigned an RSU SOURCEID. This additional test cycle
allows IBM to identify and eliminate problems that may occur when the products are
integrated together. This change, in when the recommendation is made, affects when the
PTFs are assigned an RSU SOURCEID. The RSU SOURCEID no longer reflects when the
PTF closed. Rather, it now reflects when the service completed the test cycle and became
recommended. The PUTyymm SOURCEID can be used to identify when a PTF closes.
Keep in mind that the date that a PTF is marked recommended does not affect when that PTF
is available for corrective service; that remains unchanged. There is also no change to when
SOURCEIDs, such as HIPER or PRP (PTFs that resolve PE'd PTFs), are assigned to the
PTFs.
The next change to the assignment of RSU SOURCEIDs is that it is now based on different
criteria. Each quarter, all service (Severity 1, 2, 3 and 4 APARs) as of the end of the prior
quarter will have completed a CST test cycle, and will therefore become recommended.
Additionally, each month, HIPER PTFs, PTFs that resolve PE PTFs, and other fixes as
warranted that have completed a CST test cycle will become recommended.
Both the quarterly and the monthly recommendations use the same RSUyymm SOURCEID
notations, so you can identify the quarterly recommendations by their month values (for
example, RSUyy03, RSUyy06, RSUyy09, and RSUyy12). As always, you should review
HIPERs and PE fixes on a regular basis and install those applicable to your environment.
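As an illustration only (the zone name and RSU level shown are placeholders, not a recommendation), a preventive install up to a given RSU level could then be run with SMP/E statements along these lines:
  SET   BOUNDARY(ZOS130T) .
  APPLY SOURCEID(RSU0206)     /* hypothetical quarterly RSU level     */
        GROUPEXTEND           /* pull in requisites outside the RSU   */
        CHECK .               /* trial run; remove CHECK to install   */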
The benefits of the revised RSU process are:
Better testing of maintenance through a coordinated effort by CST and representatives
from each of the participating products in a Parallel Sysplex environment.
The recommendation for inclusion in an RSU is made by the participating product experts.
Testing is completed prior to the RSU being made available. The original RSU was based
on the selection criteria and the testing was done after the RSU was generally available.
The new process allows for consistent maintenance recommendations across
participating products (both in what service is recommended, and how frequently
preventive maintenance should be applied).
The maintenance included in each RSU differs, depending on whether it is a monthly or
quarterly RSU.
Quarterly RSUs
RSUyy03, RSUyy06, RSUyy09, and RSUyy12 contain:
All service, Severity 1, 2, 3, and 4 APARs, that were successfully tested by CST as of the
end of the prior quarter.
HIPERs, PE fixes, and security/integrity APARs that have completed a 30-day CST cycle
in that month.
CST corrective fixes.
Monthly RSUs
RSUyy01, RSUyy02, RSUyy04, RSUyy05, RSUyy07, RSUyy08, RSUyy10, and RSUyy11
contain:
HIPERs, PE fixes, and security/integrity APARs that have completed a 30-day CST cycle
in that month.
CST corrective fixes.
If you do not provide a service inventory, you will receive the requested service without any
additional requisites. The service (and two years of Enhanced HOLDDATA) is then either
available to be downloaded electronically, or sent physically on 3480, 3490E, or 3590
cartridges.
For preventive service, you run the same SMP/E job to create an installed service
inventory of your SMP/E zones. IBM then determines what service is required to bring
your system to the latest RSU level, install the latest HIPER or PRP service, or install all
available service. IBM will then create a package for you containing the appropriate
maintenance. The package will eliminate all PTFs that you have already RECEIVEd, as
well as include any PTFs that IBM determined you need as requisites. It will also contain
resolution of known PEs. Since your service order is based on your target SMP/E
environment, it can be delivered quickly because it will not include any extraneous service.
Tracking the status of your order
For either corrective service or preventive service, you can track the status of your order
until it is ready for you to pull down to your host or workstation. You can either use z/OS
1.2 SMP/E (also available as SMP/E for z/OS and OS/390 V3R1, or as a Web download)
to RECEIVE the service directly from the Internet using the SMP/E RECEIVE
FROMNETWORK command.
Alternatively, you can FTP it to your mainframe and install it with the normal SMP/E
RECEIVE statement, or you can download it to your workstation. Once you have the order
on your workstation, you can upload it to your host for processing. Of course, physical
delivery of your order is also an option, if you can't take electronic delivery.
In addition, all orders larger than 1 GB compressed will be shipped on the physical media
of your choice, or not at all, if you decide you'd rather resubmit your order to request less
service.
For more information about ShopzSeries, refer to the following URL:
https://www14.software.ibm.com/webapp/ShopzSeries/ShopzSeries.jsp
TechSupport
TechSupport allows you to order corrective service by using the Internet. z/OS
maintenance can be delivered through the Internet, or by using standard physical media
delivery.
TechSupport allows you to:
Order PTFs by providing a list of PTFs that you want. (It does not support ordering by
APAR numbers.)
Get the package transported via FTP, which is TERSEd prior to transport.
The TechSupport Web site is available at the following URL:
https://techsupport.services.ibm.com/server/login
The SUF Web site contains a test drive facility, so you can see how SUF works and the
types of functions it delivers. More information on SUF is available at the following URL:
http://www.ibm.com/servers/eserver/zseries/zos/suf/
We recommend that you review the ShopzSeries option, once it becomes available in your
geography, because most of the capabilities of the other maintenance offerings for preventive
and corrective, both physical and electronic, are available through ShopzSeries. While end of
support for SUF has not been announced, we expect that for z/OS platform service, SUF will
be functionally stabilized and future enhancements will be made to ShopzSeries.
MULTSYS
Used to identify PTFs that have special installation requirements in a multisystem
environment. These include:
Preconditioning - to identify maintenance that requires other maintenance to be
installed on other systems in a complex before this maintenance can be installed or
deployed.
Coexistence (toleration) - to identify maintenance that requires other maintenance to
be installed on other systems in a complex before new function provided in this PTF
can be installed or deployed.
In exception conditions, a PTF may be considered a coexistence PTF if it is used to
identify other maintenance to be installed on other systems in a complex, before
function originally shipped in the product (FMID) can be deployed. This would be
limited to cases where the need for coexistence service wasn't known when the
product (FMID) was originally made available.
Complete fix (exploitation) - to identify maintenance that needs to be installed and
deployed on multiple systems before the change can be effective.
Figure (not reproduced): an example maintenance structure, with systems SYSA, SYSB,
SYSC, and SYSD sharing the sysres volumes PLXSY1, PLXSY2, and PLXSY3, and the
maintenance environment residing in an SMS pool.
Naming standards
Naming standards within the maintenance structure are very important. They need to be
simple so that people will adhere to them, but flexible enough to cater for differing product
levels. The list of naming standards in Table 22-3 is focused on z/OS and related program
products, but could also be used in other areas.
The GoldPlex and BronzePlex configurations do not include the sharing of the Master Catalog
within the target sysplex, and therefore user catalogs may or may not be shared. So you will
need to consider how you want to manage the maintenance environment's data set catalog
entries.
The options would be to either maintain your maintenance environment from a single system
within the target sysplex, or make the HLQ(s) for the data sets within the maintenance
environment available in a user catalog that is shared by all systems within the sysplex. If you
choose the latter option, there may be issues if your maintenance environment's data sets are
SMS-managed.
In a PlatinumPlex configuration, there would be a single Master Catalog with a single
SMSplex, so managing the data sets with SMS would not cause an issue.
Table 22-3 Example naming standards for the maintenance structure
Product Group - Standard: xxxvrm, where xxx is a 3-character product identifier and vrm is the version, release, and modification of the product. Example: ZOS130.
Global Zone - Standard: SMPE.GLOBAL.CSI. Example: SMPE.GLOBAL.CSI.
Target Zones - Standard: <xxxvrm>T for the maintenance zone, one per product group; <resvol>T, one per sysres. Examples: ZOS130T, PLXSY1T.
Distribution Zone - Standard: <xxxvrm>D, one per product group. Example: ZOS130D.
Target CSIs - Standard: SMPE.<xxxvrm>.TLIB.CSI for the maintenance zone; SMPE.<xxxvrm>.<resvol>T.CSI for the sysres zones. Examples: SMPE.ZOS130.TLIB.CSI, SMPE.ZOS130.PLXSY1T.CSI.
Distribution CSI - Standard: SMPE.<xxxvrm>.DLIB.CSI. Example: SMPE.ZOS130.DLIB.CSI.
SMPPTS and SMP/E log data sets - Standard: SMPE.GLOBAL.SMPPTS (a single SMPPTS); SMPE.GLOBAL.SMPLOG*. Example: SMPE.GLOBAL.SMPLOG/SMPLOGA.
Other SMP/E data sets - Standard: SMPE.<xxxvrm>.TLIB.SMP* for the maintenance zone; SMPE.<xxxvrm>.<resvol>T.SMP* for the sysres zones. Examples: SMPE.ZOS130.TLIB.SMPLTS, SMPE.ZOS130.PLXSY1T.SMPMTS.
Target and distribution libraries - Standard: SYS1.<dddef>; SMPE.<xxxvrm>.<dddef>. Example: SMPE.ZOS130.ALINKLIB.
We have used SMPE as the HLQ in the examples, but this could obviously be changed to suit
your site's requirements.
SMP/E structure
The naming standards listed in Table 22-3 allow for a single global zone, one target zone
each for the maintenance libraries and for each sysres set (to ensure there is a maintenance
environment that reflects the sysres level), and a single distribution zone.
This structure allows for future releases and, with slight naming standard changes, could also
cater for different maintenance strategies. For example, when a new level of z/OS is installed
onto the system, a new SMP/E infrastructure would need to be introduced. When the new
level is rolled through all the systems within the target sysplex, the old structure could then be
removed.
Table 22-4 shows an example of the SMP/E structure based on the naming standards
described. The structure shows the single global, maintenance environment target and
distribution zones, and target zones, for each sysres set. These target zones get refreshed by
the propagation routine and would be used for level information specific to the sysres, but
could also be used to apply maintenance directly, if required.
Table 22-4 Example SMP/E structure
Zone type      Zone name   CSI data set
Global         GLOBAL      SMPE.GLOBAL.CSI
Target         ZOS130T     SMPE.ZOS130.TLIB.CSI
Target         PLXSY1T     SMPE.ZOS130.PLXSY1.CSI
Target         PLXSY2T     SMPE.ZOS130.PLXSY2.CSI
Target         PLXSY3T     SMPE.ZOS130.PLXSY3.CSI
Distribution   ZOS130D     SMPE.ZOS130.DLIB.CSI
Another alternative would be to remove the maintenance environment target zone (ZOS130T)
and simply apply your maintenance directly to the sysres target zones. This would be a valid
configuration as long as the maintenance was applied to an inactive sysres.
Note: Never apply maintenance to a sysres that is active within the target sysplex. We
discuss how to prevent this from happening in 22.5.4, Propagation of service and releases
within the sysplex on page 385.
DDDEFs
The DDDEFs within the target and distribution zones will need to be set up to reflect the
maintenance structure. The target library DDDEF entries need to have the specific target
volume specified, as shown in Example 22-1.
Example 22-1 Target library DDDEF example
Entry Type:  DDDEF
Entry Name:  LINKLIB
DSNAME: SYS1.LINKLIB
VOLUME: PLXSY3
UNIT:   SYSALLDA
DISP:   SHR
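If an existing DDDEF has to be repointed at a particular sysres volume, UCL statements along the following lines could be used. This is only a sketch; the zone, volume, and data set names simply reuse the values shown above:
  SET    BOUNDARY(PLXSY3T) .   /* target zone related to sysres PLXSY3 */
  UCLIN .
    REP DDDEF(LINKLIB)
        DATASET(SYS1.LINKLIB)
        VOLUME(PLXSY3)
        UNIT(SYSALLDA)
        SHR .
  ENDUCL .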
For HFS data sets, the target path DDDEF entries need to have the service and zone
directory structure specified, as in Example 22-2. Automount can be used to mount the
correct root HFS with the appropriate target zone name; refer to 22.5.2, HFS
considerations on page 383 for more information on the automount set up.
Example 22-2 Target path DDDEF example
Entry Type:  DDDEF
Entry Name:  NFSCUTIL
-------------------------------------------------------------------
PATH: '/SERVICE/ZOS130T/usr/lpp/NFS/IBM/'
The distribution library DDDEF entry needs to have the cataloged data set name specified as
shown in Example 22-3. We recommend these data sets be SMS-managed.
Example 22-3 Distribution library DDDEF example
Entry Type:  DDDEF
Entry Name:  ALINKLIB
DSNAME: SMPE.ZOS130.ALINKLIB
VOLUME:
UNIT:   SYSALLDA
DISP:   SHR
Mountpoints and HFS data sets (table fragment):
/$VERSION - OMVS.<resvol>.ROOT (example: OMVS.PLXSY1.ROOT); mounted read-only in BPXPRMxx.
/SERVICE/tzone - OMVS.<tzone>.ROOT or OMVS.<resvol>.ROOT (examples: OMVS.ZOS130T.ROOT, OMVS.PLXSY2.ROOT); mounted by the automount facility using /etc/SERVICE.map.
The corresponding map needs to be set up to tell automount which HFS to mount on the
appropriate mountpoint; see Example 22-5.
Example 22-5 /etc/SERVICE.map example
name        ZOS130T
type        HFS
filesystem  OMVS.ZOS130T.ROOT
mode        RDWR
duration    60
delay       0
setuid      no

name        PLXSY1T
type        HFS
filesystem  OMVS.PLXSY1.ROOT
mode        RDWR
duration    60
delay       0
setuid      no

name        PLXSY2T
type        HFS
filesystem  OMVS.PLXSY2.ROOT
mode        RDWR
duration    60
delay       0
setuid      no

name        PLXSY3T
type        HFS
filesystem  OMVS.PLXSY3.ROOT
mode        RDWR
duration    60
delay       0
setuid      no
The automount filesystype entry also needs to be set up in BPXPRMxx, if it isn't already, as in
Example 22-6.
Example 22-6 Automount filesystype entry in BPXPRMxx
FILESYSTYPE TYPE(AUTOMNT)
ENTRYPOINT(BPXTAMD)
To ensure the automount facility is started after each IPL, the /etc/rc file on the appropriate
system will need to be updated as per Example 22-7.
Example 22-7 Start automount in /etc/rc
# Start the automount facility
echo starting AUTOMOUNT > /dev/console
/usr/sbin/automount
With the automount structure set up as in the examples, whenever maintenance is applied to
any of the target zones, the appropriate Version HFS will be mounted. For example, if
maintenance is being applied to the ZOS130T zone and the process was modifying a file in a
path specified in a DDDEF starting with /SERVICE/ZOS130T/*, then automount would
automatically mount the Version HFS OMVS.ZOS130T.ROOT at the /SERVICE/ZOS130T
mountpoint. As per the parameters specified in Example 22-5 on page 383, it would be
mounted Read/Write for a duration of 60 minutes.
Setting up automatic cross-zone requisite checking can reduce or eliminate these tasks. This
process could, for example, automatically verify that all coexistence service is installed when
installing a new level of the operating system. Or, the process can ensure all cross-product
dependencies (for example, OS/390 or z/OS PTFs, DB2 PTFs, and so on) are installed when
installing products like WebSphere 4.0.1.
Cross-zone checking
In OS/390 Release 3, SMP/E introduced the capability to automate the checking of
cross-zone requisites. These cross-zone requisites can be for cross-product dependencies
on the same system, as well as for cross-system Preconditioning, Coexistence
(toleration), or Completing a fix (exploitation) PTFs. Product packagers can use
++IFREQ SMP/E MCS statements to identify these requisites.
Different methods can be used for cross-zone processing. However, you will need to set up
your SMP/E environment appropriately. A zone group can be defined and added to the install
jobs, or the XZGROUP operand can be used.
Once set up, SMP/E can identify cross-zone requisites needed in the set-to zone that is in
effect for the current APPLY/ACCEPT commands, as well as any cross-zone requisites for
SYSMODs currently being installed. SMP/E checks if the requisite is already installed, or if it
needs to be installed as part of the same SMP/E APPLY/ACCEPT command. Once products
use ++IFREQs for MULTSYS PTFs, SMP/E will be able to verify and install cross-zone
requisites, thereby satisfying the ++HOLD REASON(MULTSYS) exception condition.
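For example (the zone name, RSU level, and ZONESET name here are invented for illustration), an APPLY that asks SMP/E to check cross-zone requisites against a set of other target zones could be coded as:
  SET   BOUNDARY(ZOS130T) .
  APPLY SOURCEID(RSU0206)
        XZGROUP(SUBSYST)      /* assumed ZONESET listing the CICS, DB2, */
                              /* etc. target zones to be checked        */
        GROUPEXTEND
        CHECK .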
Note: If SYSMODs being installed into the set-to zone have requirements against the other
cross-zones, that service must be APPLYed to those zones before installation can be
completed into the set-to zone.
For detailed information on how to set up automatic cross-zone requisite checking, refer to
Specifying Automatic Cross-Zone Requisite Checking in Chapter 3, Preparing to Use
SMP/E in SMP/E for z/OS and OS/390 Users Guide, SA22-7773.
              reschk=substr(word3,1,1)
              if reschk="S" then ok="NOTOK"
              if ok="NOTOK" then status="NOTOK"
            end
          say out.i status
        end
    end
  else
    do
      out.0=0
      mmm=getmsg("out.","SOL",cart,,wait)
      if mmm=0 then
        do i=1 to out.0
          parse var out.i word1 word2 word3 word4 word5
          status=" "
          if word4=sysres then
            do
              reschk=substr(word3,1,1)
              if reschk="S" then ok="NOTOK"
              if ok="NOTOK" then status="NOTOK"
            end
          say out.i status
        end
    end
  "console deactivate"
end
else
  say "CONSOLE NOT AVAILABLE"
"consprof soldisplay("soldisp")",
          "unsoldisplay("unsdisp")",
          "solnum("solnum")",
          "unsolnum("unsnum")"
if ok="NOTOK" then exit 20
exit 0
Propagation routine
These steps and examples are based on building a sysres from the maintenance
environment, including creating a snapshot of the SMP/E environment. The following
assumptions are made:
A 3390 model 9 device is used for all sysreses, meaning that all sysres data sets fit on a
single volume. If your sysres actually consists of more than one volume (for example, if
you are using 3390 model 3 devices), you will need to adjust the sample jobs accordingly.
All the sysres data sets for the maintenance environment are actually kept on a single
device. While you could spread these data sets over a number of devices, mixed with
other data sets if you wish, there are benefits to keeping all the data sets together:
It is easier to create the production IPL volumes if all the source data sets are grouped
together (you can use a full-pack copy).
If you need to test an emergency fix, the maintenance environment data sets can
actually be IPLed in a test LPAR.
)
INDD(INDD)
OUTDD(OUTDD)
PROCESS(SYS1)
ALLEXCP
TOL(ENQF)
ADMIN
/*
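For orientation, a minimal full-volume ADRDSSU copy of the kind used to build a sysres might look like the following sketch. The DD names and volume serials are placeholders, and your own job may well differ; for example, it may copy the sysres data sets logically, as the fragment above does:
//COPYRES  EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD SYSOUT=*
//INDD     DD UNIT=3390,VOL=SER=maint_sysres,DISP=SHR
//OUTDD    DD UNIT=3390,VOL=SER=target_sysres,DISP=SHR
//SYSIN    DD *
  COPY FULL               -
       INDDNAME(INDD)     -
       OUTDDNAME(OUTDD)   -
       ALLEXCP ALLDATA(*) -
       ADMIN PURGE
/*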
6. Create IPL text at the latest release level on the target sysres.
Example 22-14 Propagation - create IPL text on the target sysres
//IPLTEXT  EXEC PGM=ICKDSF,COND=(4,LT)
//SYSPRINT DD SYSOUT=*
//IPLTEXT  DD DISP=SHR,DSN=SYS1.SAMPLIB(IPLRECS),
//            UNIT=3390,VOL=SER=target_sysres
//         DD DISP=SHR,DSN=SYS1.SAMPLIB(IEAIPL00),
//            UNIT=3390,VOL=SER=target_sysres
//RESVOL   DD DISP=SHR,UNIT=3390,VOL=SER=target_sysres
//SYSIN    DD *
  REFORMAT DDNAME(RESVOL) DEVICETYPE(3390) VFY(target_sysres) -
           VOLID(target_sysres) IPLDD(IPLTEXT,OBJ) BOOTSTRAP
/*
7. Create the Master Catalog entry in nucleus. You would normally use the SYSCAT
parameter in the LOADxx member to specify the Master Catalog name. (However, if for
some reason you still use an A in your Loadparm and the operator presses <Enter> at
the prompt "Specify master catalog parameter" during the IPL, the SYSCATLG member
of SYS1.NUCLEUS will be used to ensure the IPL does not fail.)
Example 22-15 Propagation - create Master Catalog entry
//SYSCATLG EXEC PGM=IEBGENER,COND=(4,LT)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD DISP=SHR,DSN=SYS1.NUCLEUS(SYSCATLG),
//            UNIT=3390,VOL=SER=target_sysres
//SYSUT1   DD *
volser133Cmaster_catalog_name
/*
8. Delete SMP/E data sets related to the previous level of the target sysres.
This step deletes the old SMP/E data sets related to the target sysres volume's target
zone, in preparation for a copy. An example of such a data set is:
SMPE.ZOS130.PLXSY1T.SMPLTS.
Example 22-16 Propagation - delete target SMP/E data sets
//*--------------------------------------------------------------------
//*  Delete SMPE data sets
//*--------------------------------------------------------------------
//DELSMPE  EXEC PGM=IDCAMS,COND=(4,LT)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEL 'SMPE.xxxvrm.target_sysresT.SMPLTS'
  DEL 'SMPE.xxxvrm.target_sysresT.SMPMTS'
  DEL 'SMPE.xxxvrm.target_sysresT.SMPSCDS'
  DEL 'SMPE.xxxvrm.target_sysresT.SMPSTS'
  DEL 'SMPE.xxxvrm.target_sysresT.SMPLOG'
  DEL 'SMPE.xxxvrm.target_sysresT.SMPLOGA'
  SET MAXCC=0
/*
9. Create the SMP/E target data sets related to the target sysres. These data sets would
normally be SMS-managed and would not be placed on the sysres.
This step takes a snapshot of the maintenance zones SMP/E data sets and copies them
to the SMP/E data sets related to the target sysres.
390
Note: The sample job assumes that the output data sets will be SMS-managed. If this is
not the case, you must add a DD statement referring to the target device, and an OUTDD()
DSS control statement referring to that DD statement.
Example 22-17 Propagation - copy target SMP/E data sets
//COPYSMPE EXEC PGM=ADRDSSU,REGION=0M,COND=(4,LT)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(
       SMPE.xxxvrm.TLIB.SMPLTS
       SMPE.xxxvrm.TLIB.SMPMTS
       SMPE.xxxvrm.TLIB.SMPSCDS
       SMPE.xxxvrm.TLIB.SMPSTS
       SMPE.xxxvrm.TLIB.SMPLOG
       SMPE.xxxvrm.TLIB.SMPLOGA
       )
     )
     RENUNC((SMPE.xxxvrm.TLIB.**,SMPE.xxxvrm.target_sysresT.**)) CATALOG
     REPLACE
     TOL(ENQF)
     ADMIN
/*
12.Initialize the target zone CSI for the target sysres zone.
Example 22-20 Propagation - initialize the target zone CSI
//INITCSI  EXEC PGM=IDCAMS,COND=(4,LT)
//SYSPRINT DD SYSOUT=*
//TZONE    DD DSN=SMPE.xxxvrm.target_sysresT.CSI,DISP=OLD
//ZPOOL    DD DSN=SYS1.MACLIB(GIMZPOOL),DISP=SHR
//SYSIN    DD *
  REPRO OUTFILE(TZONE) INFILE(ZPOOL)
/*
13.Copy the zone from the maintenance target zone to the target sysres zone.
This step uses the SMP/E zonecopy command to copy the maintenance target zone into
the target zone related to the sysres.
Example 22-21 Propagation - Copy target zone
//ZONECOPY EXEC SMPE,COND=(4,LT)
//SMPCSI   DD DISP=SHR,DSN=SMPE.GLOBAL.CSI
//SMPLOG   DD DISP=SHR,DSN=SMPE.GLOBAL.SMPLOG
//SMPLOGA  DD DISP=SHR,DSN=SMPE.GLOBAL.SMPLOGA
//SYSIN    DD *
  SET      BOUNDARY(target_sysresT) .
  ZONECOPY (xxxvrmT) INTO(target_sysresT) .
/*
22.6.1 Merge global zones
Before merging one global zone into another, we recommend that you first clean up the
originating global zone and its related SMPPTS, in order to reduce the amount of data to be
merged. Do this by deleting all unneeded SYSMOD and HOLDDATA entries from the global
zone, and deleting unneeded MCS entries from the SMPPTS. You can use the REJECT
NOFMID and REJECT PURGE SMP/E commands to delete those entries related to PTFs
and FMIDs that are no longer needed.
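A minimal sketch of such a cleanup follows. The FMID name is a placeholder only; the GIMCRSAM and GIMPRSAM samples described below can generate the real list for you, and the exact REJECT operands should be checked against the SMP/E Commands manual for your release:
  SET    BOUNDARY(GLOBAL) .
  REJECT PURGE .                       /* drop SYSMODs already ACCEPTed    */
  REJECT NOFMID DELETEFMID(HJE6608) .  /* hypothetical superseded FMID     */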
Also, rather than manually determining which FMIDs to delete, you can use the sample
programs GIMCRSAM and GIMPRSAM (provided in SYS1.SAMPLIB) to help you create a
REJECT NOFMID command for the FMIDs that are superseded or deleted in the DLIB zones
you specify. Refer to SMP/E for z/OS and OS/390 Commands, SA22-7771, for more
information on the GZONEMERGE command.
You can use the GZONEMERGE command to copy information for selected FMIDs or for the
entire contents of a global zone. For example, to merge the entire contents from one global
zone to another, you would use the statements contained in Example 22-23, SMP/E
Statements for GZONEMERGE on page 393.
Example 22-23 SMP/E Statements for GZONEMERGE
SET BDY(GLOBAL) .                          /* Set to the destination Global zone   */
GZONEMERGE
  FROMCSI(from.global.zone.data.set.CSI)   /* From Global zone CSI                 */
  CONTENT                                  /* Indicates SYSMODs and corresponding  */
                                           /* SMPPTS members                       */
  DEFINITION .                             /* Indicates OPTIONS, UTILITY, ZONESET, */
                                           /* FMIDSET, GZONE Entry stuff, etc. are */
                                           /* to be copied                         */
22.7.1 Tools
GZONEMERGE
You can use the SMP/E GZONEMERGE command to copy information from one SMP/E
global zone to another. This allows you to reduce the number of global zones that you must
manage within the target sysplex. See 22.6.1, Merge global zones on page 392 for more
information.
22.7.2 Documentation
The following publications provide information that may be helpful in managing your
maintenance environment:
Appendix A.
Additional material
This redbook refers to additional material that can be downloaded from the Internet as
described below.
Select Additional materials and open the directory that corresponds with the redbook
form number, SG246818.
The additional material comprises zipped code samples for RACF and zipped code samples
for OPC. The stated system requirements for downloading the Web material are 5 MB
minimum of hard disk space, a Windows or UNIX operating system, any processor, and
64 MB of memory.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 399.
Controlling S/390 Processors Using the HMC, SG24-4832
Converting to RMM - A Practical Guide, SG24-4998
DFSMSrmm Primer, SG24-5983
Enhanced Catalog Sharing and Management, SG24-5594
Hierarchical File System Usage Guide, SG24-5482
ICF Catalog Backup and Recovery: A Practical Guide, SG24-5644
OS/390 MVS Multisystem Consoles Implementing Sysplex Operations, SG24-4626
OS/390 Parallel Sysplex Configuration Volume 2: Cookbook, SG24-5638
OS/390 Software Management Cookbook, SG24-4775
OS/390 Version 2 Release 10 Implementation, SG24-5976
Parallel Sysplex - Managing Software for Availability, SG24-5451
Parallel Sysplex Operational Scenarios, SG24-2079
S/390 Parallel Sysplex: Resource Sharing, SG24-5666
Other resources
These publications are also relevant as further information sources:
CICS TS Installation Guide, GC34-5985
DCAF V1.3 Installation and Configuration Guide, SH19-4068
DCAF V1.3 Users Guide, SH19-4069
DFSMSdfp Storage Administration Reference, SC26-7331
Hardware Management Console Guide, GC38-0453
Hardware Management Console Operations Guide, SC28-6809
IMS/ESA V6 Common Queue Server Guide and Reference, SC26-9517
JES2 Multi-Access Spool in a Sysplex Environment, GG66-3263
Hardware Configuration Definition (HCD) Scenarios, SC33-7987
MVS System Management Facilities, SA22-7630
OS/390 MVS Installation Exits, SC28-1753
OS/390 Planning for Installation, GC28-1726
System Automation for OS/390 Planning and Installation, GC28-1549
Tivoli Workload Scheduler for z/OS Programming Interfaces, SH19-4545
You can also download additional materials (code samples or diskette/CD-ROM images) from
that site.
Index
Symbols
&SYSCLONE, use by VTAM 170
A
access control for HFS files 117
allocating CDSs
size considerations 18
alternate Couple Data Set, need for 17
ALTPASS RACF tool 211
APPC XCF group 23
application description database, merging 283
application environment
affect on WLM merge 62
ARCxxxxx XCF group 24
ATRRRS XCF group 25
Automatic Restart Manager 371
and automation products 35
considerations 34
introduction 33
operations considerations 321
automatic tape sharing XCF group 26
automation
GRS considerations 77
HSM considerations 235
availability, relationship to maintenance strategy 366
B
BBGROUP XCF group 23
BPXPRMxx member, and HFSplex 120
BronzePlex
CFRM policy considerations 28
considerations for ARM 34
considerations for consoles 312, 320
considerations for esoterics 331
considerations for GRSPlex 72
considerations for HFSplex 116
considerations for HMCplex 302
considerations for HSMplex 222
considerations for JES2 96
considerations for Language Environment 137
considerations for LOGRplex 39, 42
considerations for maintenance environment 367
considerations for maintenance strategy 366-367
considerations for OPC 261
considerations for OS CONFIG IDs 331
considerations for RACFplex 191
considerations for RMMplex 242
considerations for SFM 32
considerations for shared Master Catalog 150
considerations for shared Parmlib 158
considerations for shared Proclib 165
considerations for shared sysres 147
considerations for sharing system data sets 144
considerations for SMF 84
considerations for SMSplex 214
considerations for System Automation for OS/390 292
considerations for system weights 32
considerations for TCP/IP 177
considerations for the hardware configuration 328
considerations for VTAMplex 168, 171-172
considerations for WLM 52
considerations for XCF 19
Coupling Facility considerations 31
defined 4
C
calendar database, merging 275
Capacity Upgrade on Demand 307
catalog aliases
use of to hide data set name changes 369
catalog recovery
considerations 157
Redbook 87
saving SMF records 87
utilities 91
catalog sharing
considerations for HFSplex 120
CATALOGplex
definition 8
relationship to JES2 configuration 97
CBPDO 372
CEC definition 302
CEECOPT, LE CICS run-time defaults 136
CEEDOPT, LE non-CICS run-time defaults 136
CEEROPT, LE CICS and IMS run-time overrides 137
CEEUOPT, LE user run-time overrides 137
CF link utilization 29
CF structures
calculating sizes 29
with fixed names 30
CFLevel
Internet site 29
CFRM Policy
considerations for merging policies 28
CFRM policy
merging definitions 332
CFSizer tool 16, 30
change freeze
during merge of the OPCplexes 262
Channel Subsystem I/O Priority 66
Channel-To-Channel Adapters (CTCs)
considerations for FICON 332
using an addressing scheme to simplify maintenance 332
chargeback reports 68
CICS and GRS 76
Couple Data Sets (CDSs)
general considerations 16
LOGR 39
LOGR considerations 40
MAXSYSTEM parameter 16
no multi-volume support 17
OMVS
contents of 116
recommendations 17
sample allocation jobs 36
suggested placement 17
WLM 55
COUPLExx member
WLM definitions 52
Coupling Facility
considerations for a BronzePlex 28
considerations for a GoldPlex 28
considerations for PlatinumPlex 28
utilization guidelines 29
CPC
definition 302
simplifying HMC operations 303
CSQGxxxx XCF group 25
CSQxxxxx XCF group 24
CVOLs and GRS 76
D
DASDONLY log streams 38
DASDplex
definition 8
relationship to JES2 configuration 97
data set naming standards
considerations for system maintenance 369
DB2 and GRS 76
DB2 XCF group 23
DBSECLV RACF tool 200, 210
DBSYNC RACF tool 199, 210
restrictions 206
DFHIR000 XCF group 23
DFSMS/MVS and GRS 76
DFSMShsm
GRS RNL considerations 76
discrete profiles 195, 206
DSMON RACF utility 197
Dump Analysis and Elimination
GRS considerations 76
XCF group 23
duplicate 233
duplicate data set names 10, 72, 222, 227, 229, 233, 245
GRS considerations 10
identifying 233
duplicate device numbers 329
duplicate jobname considerations 97
duplicate volsers 222, 227-230, 232, 244-245, 247
considerations for PDSE sharing 12
identifying 245
DWWCVRCM XCF group 23
Dynamic Channel-path Management 66
dynamic I/O reconfiguration
activating for the whole sysplex 336
HSA considerations 330
E
ECSplex
description 8
EDGRMMxx member 245, 248
EDGUTIL program 245-247
Enhanced Catalog Sharing 150
sizing the CF structure 154
Enhanced HOLDDATA 367, 377
Enhanced Service Offering(ESO) 371
ERBSCAN, using to browse SMF records 90
ESCM XCF group 24
ESCON Director
managing from the HMC 305
Esoterics 218, 331
Event Notification Facility XCF group 23
event triggered tracking database, merging 285
EXIT statement
identifying JES2 exits 99
exits 351
definition of a usermod 352
definition of an exit 352
identifying active JES2 exits 105
identifying RACF exits 197
managing 353-354
reasons for avoiding use of 352
reasons for rationializing 352353
sample SMP/E usermod to control usermods 354
SMS 217
used by OPC 264
EZBTCPCS XCF group 26, 176
F
FDRimsid XCF group 24
G
GoldPlex
CFRM policy considerations 28
considerations for consoles 312
considerations for esoterics 331
considerations for GRSplex 72
considerations for HFSplex 116
considerations for HMCplex 303
considerations for HSMplex 222
considerations for JES2 96
considerations for Language Environment 137
considerations for LOGRplex 40, 42
considerations for maintenance strategy 366-367
considerations for OPC 261
considerations for OS CONFIG IDs 331
considerations for RACFplex 191
considerations for RMMplex 242
H
Hardware System Area (HSA)
requirements for dynamic I/O reconfiguration support 330
HFSplex
access control considerations 117
allocating sysplex root HFS 118
allocating the system-specific root HFSs 119
available file sharing mechanisms 116
benefits of sysplex HFS sharing 117
BPXPRMxx parameters related to sysplex HFS sharing 120
catalog considerations 120
considerations for BronzePlex 116
considerations for BronzePlex or GoldPlex 13
considerations for GoldPlex 116
considerations for PlatinumPlex 117
I
I/O Supervisor XCF group 24
IBM recommended service levels 373
IBMLINK 372
ICETOOL samples for RACF 211
ICHRFR01 RACF router table 198
ICHRIN03 table 192
ICHRRCDE class descriptor table 198
IDAVQUIO XCF group 26
IEFACTRT SMF exit 265
IEFU29 SMF exit 85
IEFU83 SMF exit 265
IEFU84 SMF exit 265
IEFUJI SMF exit 265
IFASMFDP program, where to run 86
IGDSMSxx member 217
IGWXSGIS XCF group 26
J
JCL variable tables, in OPC 281
JES2
adding members to the MAS non-disruptively 96
L
Language Environment
accessing through LNKLST 136
accessing through STEPLIB 136
BronzePlex considerations 137
compile options 138
considerations if using ARM 138
displaying run time values 140, 142
downward compatibility 138
GoldPlex considerations 137
introduction 136
LNKLST recommendation 136
PlatinumPlex considerations 137
M
Maintenance considerations 365
maintenance environment
coexistence considerations 368
cross-product requisite checking 384
cross-system requisite checking 384
cross-zone requisite checking 385
description 366
developing a preventative maintenance strategy 368
handling service of HFS data sets 383
HFS considerations 383
HOLD SYSTEM reason IDs 378
impact of Consolidated Service Test 372
naming standards 380, 383
options for obtaining service from IBM 375
propagating service 385
recommendations 367
SMP/E structure 381
suggested structure 379
use of automount 383
maintenance policy
IBM recommendations 373
maintenance strategy 366
and shared sysres 149
considerations for BronzePlex 366
considerations for GoldPlex 366
considerations for PlatinumPlex 366
MASDEF statement 103
MAXSYSTEM
impact on CDS sizes
impact on structure sizes 19
specifying an appropriate value 18
MAXSYSTEM parameter, effect of 16
merging catalogs 157
merging global zones 392
merging multiple LOGR CDSs 40
merging SMP global zones 369
MQSeries Shared Queues XCF group 25
multilevel catalog alias 155
MVS console services XCF groups 23
O
offload data sets
duplicate data set names 41
P
Parallel Access Volumes 67
PDSE
considerations for duplicate volsers 12
PDSE sharing XCF group 23
Peer Recovery 39
Performance Index
impact in a sysplex 53
performance reporting
use of SMF data 87
period database, merging 276
PlatinumPlex
CFRM policy considerations 28
considerations for consoles 312
considerations for GRSplex 72
considerations for HFSplex 117
considerations for HMCplex 303
considerations for HSMplex 222
considerations for JES2 96
considerations for Language Environment 137
considerations for LOGRplex 40, 42
considerations for maintenance strategy 366-367
considerations for OPC 261
considerations for OS CONFIG IDs 331
considerations for RACFplex 191
considerations for RMMplex 242
considerations for SFM 33
considerations for shared Master Catalog 150
considerations for shared Parmlib 158
considerations for shared Proclib 165
considerations for shared sysres 147
considerations for sharing system data sets 144
considerations for SMF 84
considerations for SMSplex 214
considerations for System Automation for OS/390 292
considerations for TCP/IP 178
considerations for the hardware configuration 328
considerations for VTAMplex 168, 173
considerations for WLM 52
considerations for XCF 19
Coupling Facility considerations 28
defined 4
PROMPT
SFM parameter 31
PROPROF RACF tool 211
PWDCOPY RACF tool 207, 210
R
RACEX2IN tool 211
RACF
using to protect OPC 266
RACF and GRS 76
RACF considerations for shared catalogs 156
RACF XCF group 25
RACFplex
ALTPASS tool 211
analyzing dataset class profiles 204
approaches for merging databases 199
benefits of a single RACFplex 190
benefits of RACF sysplex data sharing 190
changing the Group Tree Structure 207
class descriptor table 198
connecting users to groups 201
considerations for BronzePlex 191
considerations for GoldPlex 191
considerations for PlatinumPlex 191
S
sample CLIST to make mass changes 110
sample ISREDIT macro 111
scheduling environment 63
System Logger
introduction 38
LOGR CDS considerations 40
LOGR CDS contents 39
merging multiple LOGR CDSs 40
OPERLOG considerations 42
Peer Recovery considerations 39
preparing for the merge 48
restarting after the merge 49
RRS log streams 46
steps for merging System loggers 47
updating Logger and CFRM policies 48
WebSphere considerations 47
system logger
considerations for a BronzePlex or a GoldPlex 12
GRS considerations 76
system symbols 144
changing with SYMUPDTE tool 146
considerations for system maintenance 369
use by VTAM 170
use in PARMLIB 145
use in started task JCL 146
used in catalog entries 151
used with OPC 269
SYSTTRC XCF group 26
SYSWLM XCF group 27, 52
T
tape libraries 217
TAPEDSN in RACF 206
TAPEVOL class in RACF 206
target sysplex
definition 2
TCP XCF group 26
TCP/IP
overview of BronzePlex considerations 12
TCP/IP considerations 175
TCPplex
how many you can have in a sysplex 176
Application DVIPA 181
BronzePlex considerations 185
considerations for GoldPlex 178
considerations for merging TCPplexes 185
considerations for PlatinumPlex 178
definitions 176
description 9
DVIPAs relationship to Dynamic XCF 176
dynamic routing 180
dynamic routing using OSPF 180
Dynamic VIPA description 181
Dynamic XCF 176, 178
Dynamic XCF prereqs 179
external workload balancing 183
EZBTCPCS XCF group 176
features available in a sysplex 177
GoldPlex considerations 185
IP network addressing scheme considerations 179
ISTXCF XCF group 179
merge considerations 179
requirements on VTAM 179
U
unloading the RACF database 194
user catalogs, considerations for System Logger 42
USS security 125
V
velocity goals 54
version HFS 120
placement 369
VIO definitions 217
VIO journaling data sets and GRS 76
virtual storage constraints 330
VLF XCF group 26
VSAM and GRS 76
VSAM/RLS Lock structure
relationship to MAXSYSTEM parameter 16
VSAM/RLS XCF groups 26
VTAM considerations 167
VTAM XCF group 27
VTAMplex
CF structure used by VTAM MNPS 172
considerations for BronzePlex 168, 171-172
considerations for GoldPlex 168, 173
considerations for PlatinumPlex 168, 173
definition 172
description 9
determining if dynamic XCF links are being used 170
facilities available in a sysplex 170
how many in a sysplex 172
NETID requirements 174
preventing use of VTAM GR 171
W
WebSphere and System Logger 47
WebSphere log streams 47
WLM
allocating CDSs using WLM dialog 56
application environments 62
CDS Format Level 55
CDS Function Level 55
Channel Subsystem I/O Priority 66
CICS goal types 63
classification groups 60
classification rules 54
changes in OS/390 2.10 59, 68
classification rules by JES MAS 63
Compatibility mode 52
considerations 51
documentation 69
Dynamic Channel-path Management 66
dynamic PAV management 67
general recommendations 53
how it manages service classes 53
how many service classes 53
how many WLMpexes per sysplex 52
impact on chargeback routines 68
IMS goal types 63
Intelligent Resource Director 66
interaction with CICSPlex Systems Manager 64
interface in OPC 61
LPAR CPU Management 66
managing CICS goals 63
managing IMS goals 63
maximum number of service classes 57
merge considerations 54
merge methodology 68
merging service class definitions 57
OPC considerations 267
Performance Index 53
report classes 60
enhancements in OS/390 2.10 61
resource capping 61
response time goals 63
scheduling environments 62-63, 99, 107
service class attributes to consider 57
sizing the CDSs 55
tools 69
velocity goals 53, 63
WLM-managed initiators 62
controlling number of 63
impact on WLM merge 63
workload names 60
WLM XCF group 27
WLM-managed initiators 99
WLMplex
description 9
workstation database, merging 271
X
XCF
effect of RMAX value on size of sysplex CDS 318
CF structure
relationship to MAXSYSTEM parameter 16
groups 19
introduction 18
using to connect VTAM nodes 170
XCF group
ability to control names 19
assigning to a transport class 21
list of common group names 22
XCF groups
displaying how many are in use 20
for WLM 52
XCF members 19
XCF signalling connectivity 332
XCF signalling structures
advantage over CTCs 27
placement 27
XCF signalling, number of CTCs required 27
XCF tuning 21
XCFOPTS parameter, specifying XCF group name 26
XES XCF group 27
Z
zFS considerations for HFS sharing 129
Back cover
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
ISBN 0738426083