ABCs of z/OS System Programming Volume 3
Paul Rogers
Redelf Janssen
Andre Otto
Rita Pleus
Alvaro Salla
Valeria Sokal
ibm.com/redbooks
International Technical Support Organization
August 2007
SG24-6983-02
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to Version 1 Release 8 of z/OS (5694-A01), Version 1 Release 8 of z/OS.e (5655-G52), and to all
subsequent releases and modifications until otherwise indicated in new editions.
© Copyright International Business Machines Corporation 2004, 2005, 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
3.15 Storage balancing with RAID-10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.16 ESS performance features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.17 WLM controlling PAVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.18 Parallel Access Volumes (PAVs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.19 HyperPAV feature for DS8000 series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.20 HyperPAV implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.21 ESS copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.22 IBM TotalStorage DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.23 IBM TotalStorage DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.24 DS8000 hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.25 Storage systems LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.26 IBM TotalStorage Resiliency Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.27 TotalStorage Expert product highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.28 Introduction to tape processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.29 SL and NL format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.30 Tape capacity - tape mount management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.31 TotalStorage Enterprise Tape Drive 3592 Model J1A. . . . . . . . . . . . . . . . . . . . . . . . 101
3.32 IBM TotalStorage Enterprise Automated Tape Library 3494 . . . . . . . . . . . . . . . . . . 103
3.33 Introduction to Virtual Tape Server (VTS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.34 IBM TotalStorage Peer-to-Peer VTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.35 Storage area network (SAN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.18 Steps to activate a minimal SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.19 Allocating SMS control data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.20 Defining the SMS base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.21 Creating ACS routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5.22 DFSMS setup for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.23 Starting SMS and activating a new configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
5.24 Control SMS processing with operator commands . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.25 Displaying the SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.26 Managing data with a minimal SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.27 Device-independence space allocation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5.28 Developing naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.29 Setting the low-level qualifier (LLQ) standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.30 Establishing installation standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5.31 Planning and defining data classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5.32 Data class attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.33 Use data class ACS routine to enforce standards . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.34 Simplifying JCL use. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.35 Allocating a data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5.36 Creating a VSAM cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.37 Retention period and expiration date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
5.38 SMS PDSE support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.39 Selecting data sets to allocate as PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5.40 Allocating new PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.41 System-managed data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.42 Data types that cannot be system-managed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.43 Interactive Storage Management Facility (ISMF) . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
5.44 ISMF: Product relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.45 ISMF: What you can do with ISMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
5.46 ISMF: Accessing ISMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.47 ISMF: Profile option. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5.48 ISMF: Obtaining information about a panel field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
5.49 ISMF: Data set option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
5.50 ISMF: Volume Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
5.51 ISMF: Management Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
5.52 ISMF: Data Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.53 ISMF: Storage Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
5.54 ISMF: List option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Java, RSM, Solaris, SunOS, Ultra, and all Java-based trademarks are trademarks of Sun Microsystems, Inc.
in the United States, other countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
The ABCs of z/OS® System Programming is an eleven volume collection that provides an
introduction to the z/OS operating system and the hardware architecture. Whether you are a
beginner or an experienced system programmer, the ABCs collection provides the
information that you need to start your research into z/OS and related subjects. If you would
like to become more familiar with z/OS in your current environment, or if you are evaluating
platforms to consolidate your e-business applications, the ABCs collection will serve as a
powerful technical tool.
Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS
delivery and installation
Volume 2: z/OS implementation and daily maintenance, defining subsystems, JES2 and
JES3, LPA, LNKLST, authorized libraries, Language Environment®, and SMP/E
Volume 3: Introduction to DFSMS™, data set basics, storage management hardware and
software, VSAM, System-managed storage, catalogs, and DFSMStvs
Volume 5: Base and Parallel Sysplex®, System Logger, Resource Recovery Services (RRS),
global resource serialization (GRS), z/OS system operations, automatic restart management
(ARM), Geographically Dispersed Parallel Sysplex (GDPS)
Volume 11: Capacity planning, performance management, WLM, RMF™, and SMF
Redelf Janssen is an IT Architect in IBM Global Services ITS in IBM Germany. He holds a
degree in Computer Science from University of Bremen and joined IBM Germany in 1988. His
areas of expertise include IBM zSeries, z/OS and availability management. He has written
IBM Redbooks® publications on OS/390 Releases 3, 4, and 10, and z/OS Release 8.
Andre Otto is a z/OS DFSMS software service specialist on the EMEA Backoffice team in
Germany. He has 12 years of experience in the DFSMS, VSAM and catalog components. He
holds a degree in Computer Science from the Dresden Professional Academy.
Rita Pleus is an IT Architect in IBM Global Services ITS in IBM Germany. She has 21 years
of IT experience in a variety of areas, including systems programming and operations
management. Before joining IBM in 2001, she worked for a German S/390® customer. Rita
holds a degree in Computer Science from the University of Applied Sciences in Dortmund.
Her areas of expertise include z/OS, its subsystems, and systems management.
Alvaro Salla is an IBM retiree who worked for IBM for more than 30 years on large systems.
He has co-authored many IBM Redbooks publications and spent many years teaching about IBM
mainframes, from the S/360™ to the S/390. He has a degree in Chemical Engineering from the
University of Sao Paulo, Brazil.
Valeria Sokal is an MVS™ system programmer at an IBM customer. She has 16 years of
experience as a mainframe systems programmer.
Become a published author
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
redbooks@us.ibm.com
Chapter 1. DFSMS introduction
DFSMS is an operating environment that helps automate and centralize the management of
storage based on the policies that your installation defines for availability, performance,
space, and security.
The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage
administrator defines policies that automate the management of storage and hardware
devices. These policies describe data allocation characteristics, performance and availability
goals, backup and retention requirements, and storage requirements for the system.
DFSMS is an exclusive element of the z/OS operating system and is a software suite that
automatically manages data from creation to expiration.
[Figure: the DFSMS environment - an IBM 3494 tape library with Virtual Tape Server and tape robot, tape and DASD volumes, data sets (dsname.f.data), and the systems programmer]
Understanding DFSMS
Data management is the part of the operating system that organizes, identifies, stores,
catalogs, and retrieves all the data information (including programs) that your installation
uses.
DFSMSdfp helps you store and catalog information on DASD, optical, and tape devices so
that it can be quickly identified and retrieved from the system. DFSMSdfp provides access to
both record- and stream-oriented data in the z/OS environment.
Systems programmer
As a systems programmer, you can use DFSMS data management to:
Allocate space on DASD and optical volumes
Automatically locate cataloged data sets
Control access to data
Transfer data between the application program and the medium
Mount magnetic tape volumes in the drive
[Figure: the DFSMS components (dfp, dss, hsm, rmm, tvs) and NFS managing the storage hierarchy for IBM System z, with access from IBM workstations and other servers (for example, a p690)]
DFSMS components
DFSMS is a set of products associated with z/OS that is responsible for data management.
DFSMS comprises five MVS data management functional components, delivered as a single
integrated software package:
DFSMSdfp Provides storage, data, program, and device management. It is comprised of
components such as access methods, OPEN/CLOSE/EOV routines, catalog
management, DADSM (DASD space control), utilities, IDCAMS, SMS, NFS,
ISMF, and other functions.
DFSMSdss Provides data movement, copy, backup, and space management functions.
DFSMShsm Provides backup, recovery, migration, and space management functions. It
invokes DFSMSdss for certain of its functions.
DFSMSrmm Provides management functions for removable media such as tape cartridges
and optical media.
DFSMStvs Enables batch jobs and CICS® online transactions to update shared VSAM
data sets concurrently.
DFSMSdfp component
DFSMSdfp provides storage, data, program, and device management. It is comprised of
components such as access methods, OPEN/CLOSE/EOV routines, catalog management,
DADSM (DASD space control), utilities, IDCAMS, SMS, NFS, ISMF, and other functions.
Managing storage
The storage management subsystem (SMS) is a DFSMSdfp facility designed for automating
and centralizing storage management. SMS automatically assigns attributes to new data
when that data is created. SMS automatically controls system storage and assigns data to
the appropriate storage device. ISMF panels allow you to specify these data attributes.
For more information about ISMF, see “Interactive Storage Management Facility (ISMF)” on
page 289.
Managing data
DFSMSdfp organizes, identifies, stores, catalogs, shares, and retrieves all the data that your
installation uses. You can store data on DASD, magnetic tape volumes, or optical volumes.
Using data management, you can complete the following tasks:
Allocate space on DASD and optical volumes
Automatically locate cataloged data sets
Control access to data
Transfer data between the application program and the medium
Mount magnetic tape volumes in the drive
Figure 1-4 DFSMSdss functions
DFSMSdss component
DFSMSdss is the primary data mover for DFSMS. DFSMSdss copies and moves data to help
manage storage, data, and space more efficiently. It can efficiently move multiple data sets
from old to new DASD. The data movement capability that is provided by DFSMSdss is useful
for many other operations, as well. You can use DFSMSdss to perform the following tasks.
Space management
DFSMSdss can reduce or eliminate DASD free-space fragmentation.
Concurrent copy
When it is used with supporting hardware, DFSMSdss also provides concurrent copy
capability. Concurrent copy lets you copy or back up data while that data is being used. The
user or application program determines when to start the processing, and the data is copied
as if no updates have occurred.
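As a concrete illustration, the following sketch shows one way to invoke DFSMSdss (program ADRDSSU) for a logical data set dump; the job name, the data set filter HARRY.**, the output data set name, and the esoteric unit name TAPE are invented for this example and would differ in a real installation:
//DSSDUMP  JOB
//*  Logical dump of all data sets matching HARRY.** to a tape backup
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=A
//TAPE     DD DSN=BACKUP.HARRY.DUMP,DISP=(NEW,CATLG),UNIT=TAPE,
//            LABEL=(1,SL)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(HARRY.**)) -
       OUTDDNAME(TAPE)
/*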
DFSMSrmm component
DFSMSrmm manages your removable media resources, including tape cartridges and reels.
It provides the following functions.
Library management
You can create tape libraries, or collections of tape media associated with tape drives, to
balance the work of your tape drives and help the operators that use them.
Volume management
DFSMSrmm manages the movement and retention of tape volumes throughout their life
cycle.
DFSMShsm component
DFSMShsm complements DFSMSdss, providing storage management, space management, and
availability management functions.
Storage management
DFSMShsm provides automatic DASD storage management, thus relieving users from
manual storage management tasks.
Space management
DFSMShsm improves DASD space usage by keeping only active data on fast-access storage
devices. It automatically frees space on user volumes by deleting eligible data sets, releasing
overallocated space, and moving low-activity data to lower cost-per-byte devices, even if the
job did not request tape.
DFSMStvs component
DFSMS Transactional VSAM Services (DFSMStvs) allows you to share VSAM data sets
across CICS, batch, and object-oriented applications on z/OS or distributed systems.
DFSMStvs enables concurrent shared updates of recoverable VSAM data sets by CICS
transactions and multiple batch applications. DFSMStvs enables 24-hour availability of CICS
and batch applications.
DFSMStvs is built on top of VSAM record-level sharing (RLS), which permits sharing of
recoverable VSAM data sets at the record level. Different applications often need to share
VSAM data sets. Sometimes the applications need only to read the data set. Sometimes an
application needs to update a data set while other applications are reading it. The most
complex case of sharing a VSAM data set is when multiple applications need to update the
data set and all require complete data integrity.
Transaction processing provides functions that coordinate work flow and the processing of
individual tasks for the same data sets. VSAM record-level sharing and DFSMStvs provide the
level of sharing and data integrity that this kind of concurrent processing requires.
The Storage Management Subsystem (SMS) is an operating environment that automates the
management of storage. Storage management uses the values provided at allocation time to
determine, for example, on which volume to place your data set, and how many tracks to
allocate for it. Storage management also manages tape data sets on mountable volumes that
reside in an automated tape library. With SMS, users can allocate data sets more easily.
The data sets allocated through SMS are called system-managed data sets or SMS-managed
data sets.
Access methods are identified primarily by the way that they organize the data in the data set.
For example, use the basic sequential access method (BSAM) or queued sequential access
method (QSAM) with sequential data sets. However, there are times when an access method
identified with one organization can be used to process a data set organized in a different
manner. For example, a sequential data set (not extended-format data set) created using
BSAM can be processed by the basic direct access method (BDAM), and vice versa. Another
example is UNIX files, which you can process using BSAM, QSAM, basic partitioned access
method (BPAM), or virtual storage access method (VSAM).
[Figure: data sets DATASET.SEQ, DATASET.PDS, and DATASET.VSAM on DASD volume DASD01, and DATASET.SEQ1, DATASET.SEQ2, and DATASET.SEQ3 on tape volume SL0001]
Note: As an exception, the z/OS UNIX services component supports Hierarchical File
System (HFS) data sets, where the collection is a byte stream and there is no concept of
logically related data records.
Storage devices
Data can be stored on a magnetic direct access storage device (DASD), magnetic tape
volume, or optical media. As mentioned previously, the term DASD applies to disks or
simulated equivalents of disks. All types of data sets can be stored on DASD, but only
sequential data sets can be stored on magnetic tape. The types of data sets are described in
“DFSMSdfp data set types” on page 20.
DASD volumes
Each block of data on a DASD volume has a distinct location and a unique address, making it
possible to find any record without extensive searching. You can store and retrieve records
either directly or sequentially. Use DASD volumes for storing data and executable programs
(including the operating system itself), and for temporary working storage.
The following sections discuss the logical attributes of a data set, which are specified at data
set creation time in:
DCB/ACB control blocks in the application program
DD cards (explicitly, or through the Data Class (DC) option with DFSMS)
An ACS Data Class (DC) routine (overridden by attributes coded on the DD card)
After creation, these attributes are kept in catalogs and VTOCs.
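As a hedged illustration of supplying these attributes at allocation time through JCL, the DD statement below codes the record format, record length, and block size explicitly and also names a data class; the data set name, the data class name DCSEQ, and the unit name SYSDA are invented and would be installation-specific:
//*  Explicitly coded RECFM/LRECL/BLKSIZE override the data class values
//NEWDS    DD DSN=HARRY.FILE.EXAMPLE.DATA,DISP=(NEW,CATLG),
//            DATACLAS=DCSEQ,
//            RECFM=FB,LRECL=80,BLKSIZE=27920,
//            UNIT=SYSDA,SPACE=(CYL,(5,5))
After the data set is created, these attributes are recorded in the VTOC (and in the catalog where applicable), so later jobs that reference the data set need not code them again.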
[Figure: the data set name HARRY.FILE.EXAMPLE.DATA has four qualifiers; the first (HARRY) is the high-level qualifier (HLQ) and the last (DATA) is the low-level qualifier (LLQ)]
A data set name can be one name segment, or a series of joined name segments. Each
name segment represents a level of qualification. For example, the data set name
HARRY.FILE.EXAMPLE.DATA is composed of four name segments. The first name on the left
is called the high-level qualifier (HLQ), the last name on the right is the lowest-level qualifier
(LLQ).
Each name segment (qualifier) is 1 to 8 characters, the first of which must be alphabetic (A to
Z) or national (# @ $). The remaining characters (up to seven) can be alphabetic, numeric (0 - 9),
national, or a hyphen (-). Name segments are separated by a period (.).
Note: Including all name segments and periods, the length of the data set name must not
exceed 44 characters. Thus, a maximum of 22 name segments can make up a data set
name.
[Figure: data set organizations - physical sequential, partitioned organized (PDS and PDSE), and objects; extended-format features include compression, data striping, and extended addressability]
An extended-format data set can occupy any number of tracks. On a volume that has more
than 65,535 tracks, a non-extended-format sequential data set cannot occupy more than 65,535 tracks.
An extended-format, striped sequential data set can contain up to approximately four billion blocks.
The maximum size of each block is 32 760 bytes.
System-managed DASD
You can allocate both sequential and VSAM data sets in extended format on a
system-managed DASD. Extended-format VSAM data sets also allow you to release partial
unused space and to use system-managed buffering (SMB, a fast buffer pool management
technique) for VSAM batch programs. You can select whether to use the primary or
secondary space amount when extending VSAM data sets to multiple volumes.
Objects
Objects are named streams of bytes that have no specific format or record orientation. Use
the object access method (OAM) to store, access, and manage object data. You can use any
type of data in an object because OAM does not recognize the content, format, or structure of
the data. For example, an object can be a scanned image of a document, an engineering
drawing, or a digital video. OAM objects are stored either on DASD in a DB2® database, or
on an optical drive, or on a tape storage volume.
The storage administrator assigns objects to object storage groups and object backup
storage groups. The object storage groups direct the objects to specific DASD, optical, or tape
devices, depending on their performance requirements. You can have one primary copy of an
object, and up to two backup copies of an object.
z/OS UNIX
z/OS UNIX System Services (z/OS UNIX) enables z/OS to access UNIX files. UNIX
applications also can access z/OS data sets. z/OS UNIX files are byte-oriented, similar to
objects. We differentiate between the following types of z/OS UNIX files.
[Figure: data set DATASET.TEST.SEQ1 with DSORG=PS, RECFM=FB, LRECL=80, and BLKSIZE=27920 - fixed-length 80-byte logical records grouped into blocks]
See also z/OS MVS JCL Reference, SA22-7597 for information about the data set
specifications discussed in this section.
Logical records stored on DASD or tape are grouped into physical records called blocks
(blocking saves DASD space that would otherwise be lost to inter-block gaps). Each block of data on a DASD volume
has a distinct location and a unique address (block number, track, and cylinder), thus making
it possible to find any block without extensive sequential searching. Logical records can be
stored and retrieved either directly or sequentially.
DASD volumes are used for storing data and executable programs (including the operating
system itself), and for temporary working storage. One DASD volume can be used for many
different data sets, and space on it can be reallocated and reused. The maximum length of a
logical record (LRECL) is limited by the physical size of the media used.
Spanned records are specified as VS, VBS, DS, or DBS. A spanned record is a logical record
that spans two or more blocks. Spanned records can be necessary if the logical record size is
larger than the maximum allowed block size.
You can also specify the records as fixed-length standard by using FS or FBS, meaning that
the data set contains no embedded short blocks (only the last block can be shorter than BLKSIZE).
In an extended-format data set, the system adds a 32-byte suffix to each block, which is
transparent to the application program.
Space values
For DASD data sets, you can specify the amount of space required in blocks, records (with an
average record length), tracks, or cylinders. You can specify a primary and a secondary space allocation.
When you define a new data set, only the primary allocation value is used to reserve space
for the data set on DASD. Later, when the primary allocation of space is filled, space is
allocated in secondary storage amounts (if specified). The extents can be allocated on other
volumes if the data set was defined as multivolume.
For example, if you allocate a new data set and specify SPACE=(TRK,(2,4)), this initially
allocates two tracks for the data set. As each record is written to the data set and these two
tracks are used up, the system automatically obtains four more tracks. When these four tracks
are used, another four tracks are obtained. The same sequence is followed until the extent
limit for the type of data set is reached.
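A minimal sketch of that allocation in JCL follows (the data set name and unit name are invented; IEFBR14 is used only to drive the allocation):
//ALLOCTRK JOB
//STEP1    EXEC PGM=IEFBR14
//*  Primary allocation of 2 tracks; each secondary extent adds 4 tracks
//NEWSEQ   DD DSN=HARRY.EXAMPLE.SPACE,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(2,4)),
//            RECFM=FB,LRECL=80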
The procedure for allocating space on magnetic tape devices is different from allocating
space on DASD. Because data sets on magnetic tape devices must be organized
sequentially, each one is located contiguously. All data sets that are stored on a given
magnetic tape volume must be recorded in the same density. See z/OS DFSMS Using
Magnetic Tapes, SC26-7412 for information about magnetic tape volume labels and tape
processing.
Data sets defined as large format must be accessed using QSAM, BSAM, or EXCP.
Large format data sets have a maximum of 16 extents on each volume. Each large format
data set can have a maximum of 59 volumes. Therefore, a large format data set can have a
maximum of 944 extents (16 times 59).
A large format data set can occupy any number of tracks, without the limit of 65535 tracks per
volume. The minimum size limit for a large format data set is the same as for other sequential
data sets that contain data: one track, which is about 56 000 bytes. Primary and secondary
space can both exceed 65 535 tracks per volume.
Figure 2-9 on page 31 shows the creation of a data set using the ISPF panel 3.2. Other ways
to create a data set are by using any of the following methods:
Access method services
You can define VSAM data sets and establish catalogs by using a multifunction services
program called access method services.
TSO ALLOCATE command
You can issue the ALLOCATE command through TSO/E to define VSAM and non-VSAM
data sets.
Using JCL
You can define data sets directly through JCL; to request a large format data set, specify
DSNTYPE=LARGE on the DD statement (see the sketch after this list).
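The following hedged sketch shows the first and third methods; the data set names, key length, record size, and space amounts are invented, and in a non-SMS environment the DEFINE CLUSTER command would also need a VOLUMES parameter:
//DEFINE   JOB
//*  Method 1: access method services (IDCAMS) defines a VSAM KSDS
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
  DEFINE CLUSTER (NAME(HARRY.EXAMPLE.KSDS) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(80 80) -
         CYLINDERS(1 1))
/*
//*  Method 3: JCL allocates a large format sequential data set
//STEP2    EXEC PGM=IEFBR14
//LARGEDS  DD DSN=HARRY.EXAMPLE.LARGE,DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,UNIT=SYSDA,
//            SPACE=(CYL,(100,100)),RECFM=FB,LRECL=80
The TSO ALLOCATE command accepts similar operands for the second method.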
[Figure: catalog search order - a TSO user's data set reference is resolved through the master catalog (MCAT), whose aliases FPITA and VERA point to user catalogs (UCATs); the user catalog locates data sets such as FPITA.DATA, FPITA.FILE1, and VERA.FILE1 on volume VOLDAT, whose VTOC describes their location]
For detailed information about catalogs refer to Chapter 6, “Catalogs” on page 305.
[Figure: a cataloged reference (// DD DSN=PAY.D2,DISP=OLD) is resolved through the catalog to volume MYVOL1, which contains data sets PAY.D1 and PAY.D2]
See z/OS MVS JCL Reference, SA22-7597 for information about UNIT and VOL parameters.
Note: We strongly recommend that you do not have uncataloged data sets in your
installation because uncataloged data sets can cause problems with duplicate data and
possible incorrect data set processing.
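As a brief illustration (using the data set and volume names from the figure above), a cataloged data set is referenced by name and disposition only, while an uncataloged data set forces the job to supply unit and volume information:
//*  Cataloged: the system locates PAY.D2 through the catalog
//INPUT1   DD DSN=PAY.D2,DISP=OLD
//*  Uncataloged: UNIT and VOL=SER must be coded on every reference
//INPUT2   DD DSN=PAY.D1,DISP=OLD,UNIT=3390,VOL=SER=MYVOL1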
[Figure: the VTOC on a volume pointing to data sets A, B, and C]
The VTOC locates data sets on that volume. The VTOC is composed of 140-byte data set
control blocks (DSCBs), of which there are six types shown in Table 2-1 on page 38, that
correspond either to a data set currently residing on the volume, or to contiguous, unassigned
tracks on the volume. A set of assembler macros is used to allow a program or z/OS to
access VTOC information.
IEHLIST utility
The IEHLIST utility can be used to list, partially or completely, entries in a specified volume
table of contents (VTOC), whether indexed or non-indexed. The program lists the contents of
selected data set control blocks (DSCBs) in edited or unedited form.
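A hedged example of invoking IEHLIST to produce a formatted listing of a VTOC follows; the DD name, device type, and volume serial (VOL123) are placeholders:
//LISTVTOC JOB
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=A
//*  DD statement that identifies the volume whose VTOC is to be listed
//DD1      DD UNIT=3390,VOL=SER=VOL123,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=VOL123
/*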
[Figure: a volume with its label at cylinder 0, track 0, and a VTOC containing DSCBs (F4 F0 F1 F1 F1) that describe data sets A, B, and C]
DSCBs also describe the VTOC itself. CVAF routines automatically construct a DSCB when
space is requested for a data set on the volume. Each data set on a DASD volume has one or
more DSCBs (depending on its number of extents) describing space allocation and other
control information such as operating system data, device-dependent information, and data
set characteristics. There are seven kinds of DSCBs, each with a different purpose and a
different format number.
The first record in every VTOC is the VTOC DSCB (format-4). The record describes the
device, the volume the data set resides on, the volume attributes, and the size and contents
of the VTOC data set itself. The next DSCB in the VTOC data set is a free-space DSCB
(format-5) that describes the unassigned (free) space in the full volume.
The function of some DSCBs depends on whether an optional VTOC index is allocated on the
volume. The VTOC index is a B-tree-like structure that makes searching the VTOC faster.
Table 2-1 on page 38 describes the different types of DSCBs, taking into consideration
whether the Index VTOC is in place or not.
In z/OS V1R7, a new address space (DEVMAN) contains trace information about CVAF events.
Table 2-1 DSCB types

Format-0 (Free VTOC DSCB)
Describes unused DSCB records in the VTOC (contains 140 bytes of binary zeros). To delete a
DSCB from the VTOC, a format-0 DSCB is written over it. There is one for every unused
140-byte record in the VTOC. The DS4DSREC field of the format-4 DSCB is a count of the
number of format-0 DSCBs in the VTOC; this field is not maintained for an indexed VTOC.

Format-1 (Identifier)
Describes the first three extents of a data set or VSAM data space. There is one for every data
set or data space on the volume, except the VTOC.

Format-2 (Index)
Describes the indexes of an ISAM data set. This data set organization is old, and is not
supported anymore. There is one for each ISAM data set (for a multivolume ISAM data set, a
format-2 DSCB exists only on the first volume).

Format-3 (Extension)
Describes extents after the third extent of a non-VSAM data set or a VSAM data space. There
is one for each data set on the volume that has more than three extents. There can be as many
as 10 for a PDSE, HFS, extended format data set, or a VSAM data set component cataloged in
an integrated catalog facility catalog. PDSEs, HFS, and extended format data sets can have up
to 123 extents per volume. All other data sets are restricted to 16 extents per volume. A VSAM
component can have 7257 extents in up to 59 volumes (123 each).

Format-7 (Free space for certain devices)
Only one field in the format-7 DSCB is an intended interface; it indicates whether the DSCB is
a format-7 DSCB. You can reference that field as DS1FMTID or DS5FMTID. A character 7
indicates that the DSCB is a format-7 DSCB, and your program should not modify it. This
DSCB is not used frequently.
[Figure: a DASD volume containing the VTOC, the VVDS, data, and free space]
VTOC index
The VTOC index enhances the performance of VTOC access. The VTOC index is a
physical-sequential data set on the same volume as the related VTOC, created by the
ICKDSF utility program. It consists of an index of data set names in format-1 DSCBs
contained in the VTOC and volume free space information.
If the system detects a logical or physical error in a VTOC index, the system disables further
access to the index from all systems that might be sharing the volume. Then, the VTOC
remains usable but with possibly degraded performance.
If a VTOC index becomes disabled, you can rebuild the index without taking the volume offline
to any system. All systems can continue to use that volume without interruption to other
applications, except for a brief pause during the index rebuild. After the system rebuilds the
VTOC index, it automatically re-enables the index on each system that has access to it.
Next, we see more details about the internal implementation of the Index VTOC.
You can use ICKDSF to convert a non-indexed VTOC to an indexed VTOC by using the
BUILDIX command and specifying the IXVTOC keyword. The reverse operation can be
performed by using the BUILDIX command and specifying the OSVTOC keyword. For
details, see Device Support Facilities User’s Guide and Reference, Release 17, GC35-0033,
and z/OS DFSMSdfp Advanced Services, SC26-7400.
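The following hedged sketch converts an existing volume's VTOC to an indexed VTOC with BUILDIX; the DD name and the volume serial PAY456 are placeholders:
//BLDIX    JOB
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=A
//*  DD statement identifying the volume to be converted
//VOLDD    DD UNIT=3390,VOL=SER=PAY456,DISP=OLD
//SYSIN    DD *
  BUILDIX DDNAME(VOLDD) IXVTOC
/*
Running the same command with OSVTOC instead of IXVTOC would convert the volume back to a non-indexed VTOC.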
//EXAMPLE JOB
//EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
INIT UNITADDRESS(0353) NOVERIFY -
VOLID(VOL123)
/*
You use the INIT command to initialize volumes. The INIT command writes a volume label (on
cylinder 0, track 0) and a VTOC on the device for use by MVS. It reserves and formats tracks
for the VTOC at the location specified by the user and for the number of tracks specified. If no
location is specified, tracks are reserved at the default location.
The following example performs an online minimal initialization, and as a result of the
command, an index to the VTOC is created.
// JOB
// EXEC PGM=ICKDSF
//XYZ987 DD UNIT=3390,DISP=OLD,VOL=SER=PAY456
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
INIT DDNAME(XYZ987) NOVERIFY INDEX(X'A',X'B',X'2')
/*
For details on how to IPL the stand-alone version and to see examples of the commands,
refer to Device Support Facilities User’s Guide and Reference Release 17, GC35-0033.
For many years, DASD devices have been the most used storage devices on IBM eServer™
zSeries systems and their predecessors, delivering the fast random access to data and high
availability that customers have come to expect.
The era of tapes began before DASD was introduced. During that time, tapes were used as
the primary application storage medium. Today customers use tapes for such purposes as
backup, archiving, or data transfer between companies.
[Figure: traditional DASD - 3380 Models J, E, and K; 3390 Models 1, 2, 3, and 9]
Traditional DASD
In the era of traditional DASD, the hardware consisted of controllers like 3880 and 3990,
which contained the necessary intelligent functions to operate a storage subsystem. The
controllers were connected to S/390 systems via parallel or ESCON® channels. Behind a
controller you had several model groups of the 3390 that contained the disk drives. Based on
the models, these disk drives had different capacities per device. Within each model group,
the different models provide either four, eight, or twelve devices. All A-units come with four
controllers, providing a total of four paths to the 3990 Storage Control. At that time, you were
not able to change the characteristics of a given DASD device.
The more modern IBM DASD products, such as Enterprise Storage Server (ESS), DS6000,
DS8000, and DASD from other vendors, emulate IBM 3380 and 3390 volumes in geometry,
capacity of tracks, and number of tracks per cylinder. This emulation makes all the other
entities think they are dealing with real 3380s or 3390s. These entities include data processing
staff who do not work directly with storage, as well as JCL, MVS commands, open routines,
access methods, IOS, and channels. One advantage of this emulation is that it allows DASD
manufacturers to implement changes in the real disks, including the geometry of tracks and
cylinders, without affecting the way those components interface with DASD.
ESS technology
The IBM TotalStorage Enterprise Storage Server (ESS) is IBM’s disk storage server,
developed using IBM Seascape® architecture. The ESS provides functionality to the family of
e-business servers, and also to non-IBM (that is, Intel®-based and UNIX-based) families of
servers. Across all of these environments, the ESS features unique capabilities that allow it to
meet the most demanding requirements of performance, capacity, and data availability that
the computing business may require. See “Enterprise Storage Server (ESS)” on page 54 for
more information about this topic.
Seascape architecture
The Seascape architecture is the key to the development of IBM’s storage products.
Seascape allows IBM to take the best of the technologies developed by the many IBM
laboratories and integrate them, producing flexible and upgradeable storage solutions. This
Seascape architecture design has allowed the IBM TotalStorage Enterprise Storage Server
to evolve from the initial E models to the succeeding F models, and to the later 800 models,
each featuring new, more powerful hardware and functional enhancements, and always
integrated under the same successful architecture with which the ESS was originally
conceived. Refer to “Seascape architecture” on page 51 for more information.
Note: In this publication, we use the terms disk or head disk assembly (HDA) for the real
devices, and the terms DASD volumes or DASD devices for the logical 3380/3390s.
[Figure 3-2: DASD device types - 3380 models with 885, 1770, and 2655 cylinders; 3390 models with 1113, 2226, 3339, and 10017 cylinders]
DASD capacity
Figure 3-2 shows various DASD device types. 3380 devices were used in the 1980s. Capacity
went from 885 to 2,655 cylinders per volume. When storage density increased, new device
types were introduced at the end of the 1980s. Those types were called 3390. Capacity per
volume ranged from 1,113 to 3,339 cylinders. A special device type, the model 3390-9, was
introduced to store large amounts of data that did not need fast access. The track geometry
within one device category was (and is) always the same; this means that 3380 volumes have
47,476 bytes per track, and 3390 volumes have 56,664 bytes per track.
[Table: 3380 and 3390 device geometry - all models have 15 tracks per cylinder]
Today, disk storage subsystems other than those listed can emulate one of those listed. For
example, the IBM Enterprise Storage Server emulates the IBM 3390. On an emulated disk or
on a VM minidisk, the number of cylinders per volume is a configuration option. It might be
less than or greater than the stated number. If so, the number of bytes per device will differ
accordingly. The IBM ESS Model 1750 supports up to 32760 cylinders and the IBM ESS
Model 2107 supports up to 65520 cylinders.
Large volume support is available in z/OS and in the ICKDSF and DFSORT™ utilities.
Large volume support must be installed on all systems in a sysplex prior to sharing data sets
on large volumes. Shared system and application data sets cannot be placed on large
volumes until all system images in a sysplex have large volume support installed.
The size of the logical volume defined does not have an impact on the performance of the
ESS subsystem. The ESS does not serialize I/O on the basis of logical devices, so an
increase in the logical volume size does not affect the ESS backend performance. Host
operating systems, on the other hand, serialize I/Os against devices. As more data sets
reside on a single volume, there will be greater I/O contention accessing the device. With
large volume support, it is more important than ever to try to minimize contention on the
logical device level. To avoid potential I/O bottlenecks on devices:
Exploit the use of Parallel Access Volumes to reduce IOS queuing on the system level; dynamic alias management requires WLM in goal mode.
Eliminate unnecessary reserves.
Multiple allegiance automatically reduces queuing on sharing systems.
Parallel Access Volume (PAV) support is of key importance when implementing large
volumes. PAV enables one MVS system to initiate multiple I/Os to a device concurrently. This
keeps IOSQ times down and performance up even with many active data sets on the same
volume. PAV is a practical “must” with large volumes. We discourage you from using large
volumes without PAV. In particular, we recommend the use of dynamic PAV.
As the volume sizes grow larger, more data and data sets will reside on a single S/390 device
address. Thus, the larger the volume, the greater the multi-system performance impact will be
of serializing volumes with RESERVE processing. You need to exploit a GRS Star
configuration and convert as many RESERVEs as possible into global ENQ requests.
[Figure: RAID-1 mirroring - record X (ABCDEF) written to both a primary and an alternate disk]
RAID architecture
Redundant array of independent disks (RAID) is a direct access storage architecture where
data is recorded across multiple physical disks with parity separately recorded, so that no
loss of access to data results from the loss of any one disk in the array.
RAID breaks the one-to-one association of volumes with devices. A logical volume is now the
addressable entity presented by the controller to the attached systems. The RAID unit maps
the logical volume across multiple physical devices. Similarly, blocks of storage on a single
physical device may be associated with multiple logical volumes. Because a logical volume is
mapped by the RAID unit across multiple physical devices, it is now possible to overlap
processing for multiple cache misses to the same logical volume because cache misses can
be satisfied by different physical devices.
The RAID concept involves many small computer system interface (SCSI) disks replacing a
big one. The major RAID advantages are:
Performance (due to parallelism)
Cost (SCSI disks are commodities)
zSeries compatibility
Environment (space and energy)
However, RAID increases the chance of malfunction due to media and disk failures, because
the logical device now resides on many physical disks. The solution was to introduce
redundancy (mirroring or parity) so that data remains available when a physical disk fails.
Note: The ESS storage controllers use the RAID architecture that enables multiple logical
volumes to be mapped on a single physical RAID group. If required, you can still separate
data sets on a physical controller boundary for the purpose of availability.
RAID implementations
Except for RAID-1, each manufacturer sets the number of disks in an array. An array is a set
of logically related disks to which a common parity applies.
Note: Data striping (striping sequential physical blocks across different disks) is sometimes
called RAID-0, but it is not true RAID because it provides no redundancy, that is, no parity bits.
Seascape architecture
The IBM Enterprise Storage Server’s “architecture for e-business” design is based on IBM’s
storage enterprise architecture, Seascape. The Seascape architecture defines
next-generation concepts for storage by integrating modular building block technologies from
IBM, including disk, tape, and optical storage media, powerful processors, and rich software.
Integrated Seascape solutions are highly reliable, scalable, and versatile, and support
specialized applications on servers ranging from PCs to super computers. Virtually all types
of servers can concurrently attach to the ESS, including iSeries® and AS/400® systems. As a
result, ESS can be the external disk storage system of choice for AS/400 as well as iSeries
systems in heterogeneous SAN environments.
DFSMS provides device support for the IBM 2105 Enterprise Storage Server (ESS), a
high-end storage subsystem. The ESS storage subsystem succeeded the 3880, 3990, and
9340 subsystem families. Designed for mid-range and high-end environments, the ESS gives
you large capacity, high performance, continuous availability, and storage expandability. You
can read more about ESS in “Enterprise Storage Server (ESS)” on page 54.
Cache
Cache is used to store both read and write data to improve ESS performance to the attached
host systems. There is the choice of 8, 16, 24, 32, or 64 GB of cache. This cache is divided
between the two clusters of the ESS, giving the clusters their own non-shared cache. The
ESS cache uses ECC (error checking and correcting) memory technology to enhance
reliability and error correction of the cache. ECC technology can detect single- and double-bit
errors and correct all single-bit errors. Memory scrubbing, a built-in hardware function, is also
performed and is a continuous background read of data from memory to check for correctable
errors. Correctable errors are corrected and rewritten to cache. To protect against loss of
data on a write operation, the ESS stores two copies of written data, one in cache and the
other in NVS.
The ESS 750 has capabilities similar to the ESS 800. The ESS Model 750 consists of two
clusters, each with a two-way processor and 4 or 8 GB cache. It can have two to six Fibre
Channel/FICON or ESCON host adapters. The storage capacity ranges from a minimum of
1.1 TB up to a maximum of 4 TB. A key feature is that the ESS 750 is upgradeable,
non-disruptively, to the ESS Model 800, which can grow to more than 55 TB of physical
capacity.
Note: Effective April 28, 2006, IBM withdrew from marketing the following products:
IBM TotalStorage Enterprise Storage Server (ESS) Models 750 and 800
IBM Standby Capacity on Demand for ESS offering
For replacement products, see “IBM TotalStorage DS6000” on page 82 and “IBM
TotalStorage DS8000” on page 85.
SCSI protocol
Although we do not cover other platforms in this publication, we provide here a brief overview
of the SCSI protocol. The SCSI adapter is a card in the host. It connects to a SCSI bus via a
SCSI port. There are two different types of SCSI supported by ESS:
SCSI Fast Wide with 20 MB/sec
Ultra™ SCSI Wide with 40 MB/sec
PPRC support
IBM includes a Web browser interface called TotalStorage Enterprise Storage Server (ESS)
Copy Services. The interface is part of the ESS subsystem and can be used to perform
FlashCopy and PPRC functions.
Many of the ESS features are now available to non-zSeries platforms, such as PPRC for
Windows® XP and UNIX, where the control is through a Web interface.
StorWatch support
On the software side, there is StorWatch, a range of products in UNIX/XP that does what
DFSMS and automation do for System z. The TotalStorage Expert, formerly marketed as
StorWatch Expert, is a member of the IBM and Tivoli Systems family of solutions for
Enterprise Storage Resource Management (ESRM). These are offerings that are designed to
complement one another, and provide a total storage management solution.
TotalStorage Expert is an innovative software tool that gives administrators powerful, yet
flexible storage asset, capacity, and performance management capabilities to centrally
manage Enterprise Storage Servers located anywhere in the enterprise.
[Figure: ESS frame layout - host adapters, main power supplies, and batteries]
At the top of each cluster is an ESS cage. Each cage provides slots for up to 64 disk drives,
32 in front and 32 at the back.
Each host adapter can communicate with either cluster. To install a new host adapter card,
the bay must be powered off. For the highest path availability, it is important to spread the
host connections across all the adapter bays. For example, if you have four ESCON links to a
host, each connected to a different bay, then the loss of a bay for upgrade would only impact
one out of four of the connections to the server. The same would be valid for a host with
FICON connections to the ESS.
Similar considerations apply for servers connecting to the ESS by means of SCSI or fibre
channel links. For open system servers, the Subsystem Device Driver (SDD) program that
comes standard with the ESS can be installed on the connecting host servers to provide
multiple paths or connections to handle errors (path failover) and balance the I/O load to the
ESS.
The ESS connects to a large number of different servers, operating systems, host adapters,
and SAN fabrics. A complete and current list is available at the following Web site:
http://www.storage.ibm.com/hardsoft/products/ess/supserver.htm
These characteristics allow simpler and more powerful configurations. The ESS supports up
to 16 host adapters, which allows for a maximum of 16 Fibre Channel/FICON ports per
machine, as shown in Figure 3-10.
Each Fibre Channel/FICON host adapter provides one port with an LC connector type. The
adapter is a 2 Gb card and provides a nominal 200 MBps full-duplex data rate. The adapter
will auto-negotiate between 1 Gb and 2 Gb, depending upon the speed of the connection at
the other end of the link. For example, from the ESS to a switch/director, the FICON adapter
can negotiate to 2 Gb if the switch/director also has 2 Gb support. The switch/director to host
link can then negotiate at 1 Gb.
Eight-packs: sets of eight disk drives of similar capacity and rpm packed together and installed
in the ESS cages. The initial minimum configuration is four eight-packs, upgrades are available
in increments of two eight-packs, and a maximum of 48 eight-packs per ESS is possible with
the expansion rack.
Disk drives: 18.2 GB at 15,000 or 10,000 rpm, 36.4 GB at 15,000 or 10,000 rpm, 72.8 GB at
10,000 rpm, and 145.6 GB at 10,000 rpm.
Eight-pack conversions: capacity and/or rpm.
ESS disks
With a number of disk drive sizes and speeds available, including intermix support, the ESS
provides a great number of capacity configuration options.
The maximum number of disk drives supported within the IBM TotalStorage Enterprise
Storage Server Model 800 is 384—with 128 disk drives in the base enclosure and 256 disk
drives in the expansion rack. When configured with 145.6 GB disk drives, this gives a total
physical disk capacity of approximately 55.9 TB (see Table 3-2 for more details).
Disk drives
The minimum available configuration of the ESS Model 800 is 582 GB. This capacity can be
configured with 32 disk drives of 18.2 GB contained in four eight-packs, using one ESS cage.
All incremental upgrades are ordered and installed in pairs of eight-packs; thus the minimum
capacity increment is a pair of similar eight-packs of either 18.2 GB, 36.4 GB, 72.8 GB, or
145.6 GB capacity.
The ESS is designed to deliver substantial protection against data corruption, not just relying
on the RAID implementation alone. The disk drives installed in the ESS are the latest
state-of-the-art magneto resistive head technology disk drives that support advanced disk
functions such as disk error correction codes (ECC), Metadata checks, disk scrubbing, and
predictive failure analysis.
The IBM TotalStorage ESS Specialist will configure the eight-packs on a loop with spare
DDMs as required. Configurations that include drive size intermixing may result in the
creation of additional DDM spares on a loop as compared to non-intermixed configurations.
Currently there is the choice of four different new-generation disk drive capacities for use
within an eight-pack:
18.2 GB/15,000 rpm disks
36.4 GB/15,000 rpm disks
72.8 GB/10,000 rpm disks
145.6 GB/10,000 rpm disks
The eight disk drives assembled in each eight-pack are all of the same capacity. Each disk
drive uses the 40 MBps SSA interface on each of the four connections to the loop.
It is possible to mix eight-packs of different capacity disks and speeds (rpm) within an ESS,
as described in the following sections.
Table 3-2 should be used as a guide for determining the capacity of a given eight-pack. This
table shows the capacities of the disk eight-packs when configured as RAID ranks. These
capacities are the effective capacities available for user data.
[Figure: eight-packs configured as RAID 5 ranks - A/B/C/D represent RAID 5 rank drives (user data and distributed parity), S represents a spare]
The ESS Storage Server Model 800 uses the latest SSA160 technology in its device adapters
(DA). With SSA 160, each of the four links operates at 40 MBps, giving a total nominal
bandwidth of 160 MBps for each of the two connections to the loop. This amounts to a total of
320 MBps across each loop. Also, each device adapter card supports two independent SSA
loops, giving a total bandwidth of 320 MBps per adapter card. There are eight adapter cards,
giving a total nominal bandwidth capability of 2,560 MBps. Refer to “SSA loops” on page 65
for more information about this topic.
SSA loops
One adapter from each pair of adapters is installed in each cluster as shown in Figure 3-12.
The SSA loops are between adapter pairs, which means that all the disks can be accessed by
both clusters. During the configuration process, each RAID array is configured by the IBM
TotalStorage ESS Specialist to be normally accessed by only one of the clusters. Should a
cluster failure occur, the remaining cluster can take over all the disk drives on the loop.
Figure 3-12 on page 63 shows a logical representation of a single loop with 48 disk drives
(RAID ranks are actually split across two eight-packs for optimum performance). In the figure
you can see there are six RAID arrays: four RAID 5 designated A to D, and two RAID 10 (one
3+3+2 spare and one 4+4).
(Figure: SSA loop characteristics)
SSA operation: 4 links per loop; 2 read and 2 write simultaneously in each direction; 40 MB/sec on each link
Loop availability: the loop reconfigures itself dynamically
Spatial reuse: up to 8 simultaneous operations to local groups of disks (domains) per loop
SSA operation
SSA is a high performance, serial connection technology for disk drives. SSA is a full-duplex
loop-based architecture, with two physical read paths and two physical write paths to every
disk attached to the loop. Data is sent from the adapter card to the first disk on the loop and
then passed around the loop by the disks until it arrives at the target disk. Unlike bus-based
designs, which reserve the whole bus for data transfer, SSA only uses the part of the loop
between adjacent disks for data transfer. This means that many simultaneous data transfers
can take place on an SSA loop, and it is one of the main reasons that SSA performs so much
better than SCSI. This simultaneous transfer capability is known as “spatial reuse.”
Each read or write path on the loop operates at 40 MB/s, providing a total loop bandwidth of
160 MB/s.
Loop availability
The loop is a self-configuring, self-repairing design that allows genuine hot-plugging. If the
loop breaks for any reason, then the adapter card will automatically reconfigure the loop into
two single loops. In the ESS, the most likely scenario for a broken loop is if the actual disk
drive interface electronics should fail. If this should happen, the adapter card will dynamically
reconfigure the loop into two single loops, effectively isolating the failed disk. If the disk is part
of a RAID array, the adapter card will automatically regenerate the missing disk using the
remaining data and parity disks to the spare disk. Once the failed disk has been replaced, the
loop will automatically be reconfigured into full duplex operation, and the replaced disk will
become a new spare.
If a cluster should fail, the remaining cluster device adapter will own all the domains on the
loop, thus allowing full data access to continue.
(Figure: RAID-10 rank layout on an eight-pack pair. The first RAID-10 rank configured on a loop is a 3+3+2S array: three data drives mirrored to three drives, plus two spares. Additional RAID-10 ranks configured in the loop are 4+4 arrays. For a loop with intermixed capacities, the ESS assigns two spares for each capacity, which means there is one 3+3+2S array per capacity.)
RAID-10
RAID-10 is also known as RAID 0+1 because it is a combination of RAID 0 (striping) and
RAID 1 (mirroring). The striping optimizes the performance by striping volumes across
several disk drives (in the ESS Model 800 implementation, three or four DDMs). RAID 1 is the
protection against a disk failure provided by having a mirrored copy of each disk. By
combining the two, RAID 10 provides data protection and I/O performance.
Array
A disk array is a group of disk drive modules (DDMs) that are arranged in a relationship, for
example, a RAID 5 or a RAID 10 array. For the ESS, the arrays are built upon the disks of the
disk eight-packs.
Disk eight-pack
The physical storage capacity of the ESS is materialized by means of the disk eight-packs.
These are sets of eight DDMs that are installed in pairs in the ESS. Two disk eight-packs
provide for two disk groups —four DDMs from each disk eight-pack. These disk groups can
be configured as either RAID-5 or RAID-10 ranks.
Spare disks
The ESS requires that a loop have a minimum of two spare disks to enable sparing to occur.
The sparing function of the ESS is automatically initiated whenever a DDM failure is detected
on a loop and enables regeneration of data from the failed DDM onto a hot spare DDM.
(Figure: one SSA loop (Loop A) shared by a device adapter pair, one adapter in each cluster — SSA 01 in Cluster 1 serving LSS 0 and SSA 11 in Cluster 2 serving LSS 1; the loop contains a 3+3+2S RAID 10 array and a 4+4 RAID 10 array.)
Performance features (ESS, DS6000, DS8000 with z/OS):
Parallel Access Volumes (PAV)
Multiple allegiance
Priority I/O queuing
Custom volumes
Improved caching algorithms
FICON host adapters
Enhanced CCWs
This concurrency can be achieved only as long as no data accessed by one channel
program can be altered through the actions of another channel program.
To implement PAV, IOS introduces the concept of alias addresses. Instead of one UCB per
logical volume, an MVS host can now use several UCBs for the same logical volume. Apart
from the conventional Base UCB, alias UCBs can be defined and used by z/OS to issue I/Os
in parallel to the same logical volume device.
With ESS, it is possible to have this queue concept internally; I/O Priority Queueing in ESS
has the following properties:
I/O can be queued with the ESS in priority order.
WLM sets the I/O priority when running in goal mode.
There is I/O priority for systems in a sysplex.
Each system gets a fair share.
Custom volumes
Custom volumes provide the ability to define small 3390 or 3380 volumes, which causes less
contention on a volume. Custom volumes are designed for high-activity data sets.
Careful size planning is required.
The ESS manages its cache in 4 KB segments, so for small data blocks (4 KB and 8 KB are
common database block sizes), minimal cache is wasted. In contrast, large cache segments
could exhaust cache capacity while filling up with small random reads. Thus the ESS, with its
smaller cache segments, avoids wasting cache space in situations with small record sizes,
which are common in interactive applications.
This efficient cache management, together with the ESS Model 800 powerful back-end
implementation that integrates new (optional) 15,000 rpm drives, enhanced SSA device
adapters, and twice the bandwidth (as compared to previous models) to access the larger
NVS (2 GB) and the larger cache option (64 GB), all integrate to give greater throughput while
sustaining cache speed response times.
The ability to do multiple I/O requests to the same volume nearly eliminates IOS queue time
(IOSQ), one of the major components in z/OS response time. Traditionally, access to highly
active volumes has involved manual tuning, splitting data across multiple volumes, and more.
With PAV and the Workload Manager, you can almost forget about manual performance
tuning. WLM manages PAVs across all members of a sysplex, too. The ESS, in conjunction
with z/OS, has the ability to meet the performance requirements on its own.
Alias assignment
It will not always be easy to predict which volumes should have an alias address assigned,
and how many. Your software can automatically manage the aliases according to your goals.
Through WLM, there are two mechanisms to tune the alias assignment:
The first mechanism is goal based. This logic attempts to give additional aliases to a PAV
device that is experiencing IOS queue delays and is impacting a service class period that
is missing its goal. To give additional aliases to the receiver device, a donor device must
be found with a less important service class period. A bitmap is maintained with each PAV
device that indicates the service classes using the device.
The second mechanism is to move aliases to high-contention PAV devices from
low-contention PAV devices. High-contention devices will be identified by having a
significant amount of IOSQ. This tuning is based on efficiency rather than directly helping
a workload to meet its goal.
The ESS and DS8000 support concurrent data transfer operations to or from the same
3390/3380 devices from the same system. A device (volume) accessed in this way is called a
parallel access volume (PAV).
PAV exploitation requires both software enablement and an optional feature on your
controller. PAV support must be installed on each controller. It enables the issuing of multiple
channel programs to a volume from a single system, and allows simultaneous access to the
logical volume by multiple users or jobs. Reads, as well as writes to different extents, can be
satisfied simultaneously. The domain of an I/O consists of the specified extents to which the
I/O operation applies, which corresponds to the extents of the same data set. Writes to the
same domain still have to be serialized to maintain data integrity, which is also the case for
reads and writes.
The implementation of N parallel I/Os to the same 3390/3380 device consumes N addresses
in the logical controller, thus decreasing the number of possible real devices. Also, UCBs are
PAV benefits
Workloads that are most likely to benefit from PAV functionality being available include:
Volumes with many concurrently open data sets, such as volumes in a work pool
Volumes that have a high read to write ratio per extent
Volumes reporting high IOSQ times
To solve such problems, HyperPAV was introduced. With HyperPAV, all alias UCBs are
located in a pool and are used dynamically by IOS.
DS8000 feature
HyperPAV is an optional feature on the DS8000 series, available with the HyperPAV indicator
feature number 0782 and corresponding DS8000 series function authorization (2244-PAV
HyperPAV feature number 7899). HyperPAV also requires the purchase of one or more PAV
licensed features and the FICON/ESCON Attachment licensed feature. The FICON/ESCON
Attachment licensed feature applies only to the DS8000 Turbo Models 931, 932, and 9B2.
HyperPAV allows many DS8000 series users to benefit from these enhancements to PAV.
HyperPAV allows an alias address to be used to access any base on the same control unit
image on a per-I/O basis. This capability also allows different HyperPAV hosts to use one alias to
access different bases, which reduces the number of alias addresses required to support a
set of bases in a System z environment with no latency in targeting an alias to a base. This
functionality is also designed to enable applications to achieve equal or better performance
than is possible with the original PAV feature alone, while also using the same or fewer z/OS
resources. The HyperPAV capability is offered on z/OS V1R6 and later.
(Figure: HyperPAV operation. In each z/OS image, applications do I/O to base volumes through base UCBs 0801 and 0802, while alias UCBs 08F0 through 08F3 are held in a pool and bound to a base device only for the duration of a single I/O. In the storage server, logical subsystem (LSS) 0800 contains base devices UA=01 and UA=02 and alias devices UA=F0 through UA=F3.)
HyperPAV feature
With the IBM System Storage™ DS8000 Turbo model and the IBM server synergy feature,
HyperPAV, together with PAV, Multiple Allegiance, and support for the IBM System z MIDAW
facility, can dramatically improve performance and efficiency for System z environments.
For each z/OS image within the sysplex, aliases are used independently. WLM is not involved
in alias movement so it does not need to collect information to manage HyperPAV aliases.
Note: HyperPAV was introduced and integrated in z/OS V1R9 and is available in z/OS
V1R8 with APAR OA12865.
(Figure: ESS copy services. Concurrent Copy provides a local point-in-time copy using a sidefile and the data mover; FlashCopy provides a local point-in-time copy within the TotalStorage subsystem; XRC provides asynchronous remote copy over unlimited distances using the data mover; PPRC provides synchronous remote copy up to 103 km; PPRC-XD provides non-synchronous remote copy over continental distances.)
Remote copy provides two options that enable you to maintain a current copy of your data at
a remote site. These two options are used for disaster recovery and workload migration:
Extended remote copy (XRC)
Peer-to-peer remote copy (PPRC)
Note: Fibre Channel Protocol is supported only on ESS Model 800 with the appropriate
licensed internal code (LIC) level and the PPRC Version 2 feature enabled.
PPRC provides a synchronous volume copy across ESS controllers. The copy is done from
one controller (the one having the primary logical device) to the other (having the secondary
logical device). It is synchronous because the task doing the I/O regains control with the
guarantee that the copy was executed. There is a performance penalty for distances
longer than 10 km. PPRC is used for disaster recovery, device migration, and workload
migration; for example, it enables you to switch to a recovery system in the event of a disaster
in an application system.
You can issue the CQUERY command to query the status of one volume of a PPRC volume pair
or to collect information about a volume in the simplex state. The CQUERY command is
modified and enabled to report on the status of S/390-attached CKD devices.
See z/OS DFSMS Advanced Copy Services, SC35-0428, for further information about the
PPRC service and the CQUERY command.
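As a minimal sketch (the device number shown is hypothetical and installation-dependent), a TSO user might query the PPRC volume state of one device of a pair as follows:
CQUERY DEVN(X'8000') VOLUME FORMAT
Here VOLUME requests volume status rather than path status, and FORMAT requests formatted rather than hexadecimal output.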
If you are trying to decide whether to use synchronous or asynchronous PPRC, consider the
differences between the two modes:
When you use synchronous PPRC, no data loss occurs between the last update at the
primary system and the recovery site, but it increases the impact to applications and uses
more resources for copying data.
Asynchronous PPRC using the extended distance feature reduces impact to applications
that write to primary volumes and uses less resources for copying data, but data might be
lost if a disaster occurs. To use PPRC-XD as a disaster recovery solution, customers
need to periodically synchronize the recovery volumes with the primary site and make
backups to other DASD volumes or tapes.
PPRC-XD can operate at very long distances (such as continental distances), well beyond
the 103 km supported for PPRC synchronous transmissions—and with minimal impact on the
application. The distance is limited only by the network and channel extender technology
capabilities.
XRC relies on the IBM TotalStorage Enterprise Storage Server, IBM 3990, RAMAC Storage
Subsystems, and DFSMSdfp. The 9393 RAMAC Virtual Array (RVA) does not support XRC
for source volume capability.
XRC relies on the system data mover, which is part of DFSMSdfp. The system data mover is
a high-speed data movement program that efficiently and reliably moves large amounts of
data between storage devices. XRC is a continuous copy operation, and it is capable of
operating over long distances (with channel extenders). It runs unattended, without
involvement from the application users. If an unrecoverable error occurs at your primary site,
the only data that is lost is data that is in transit between the time when the primary system
fails and the recovery at the recovery site.
You can implement XRC with one or two systems. Let us suppose that you have two
systems: an application system at one location, and a recovery system at another. With these
two systems in place, XRC can automatically update your data on the remote disk storage
subsystem as you make changes to it on your application system. You can use the XRC
suspend/resume service for planned outages. You can still use this standard XRC service on
systems attached to the ESS if these systems are installed with the toleration or transparency
support.
Coupled Extended Remote Copy (CXRC) allows XRC sessions to be coupled together to
guarantee that all volumes are consistent across all coupled XRC sessions. CXRC can
manage thousands of volumes. IBM TotalStorage XRC Performance Monitor provides the
ability to monitor and evaluate the performance of a running XRC configuration.
Concurrent copy
Concurrent copy is an extended function that enables data center operations staff to generate
a copy or a dump of data while applications are updating that data. Concurrent copy delivers
a copy of the data, in a consistent form, as it existed before the updates took place.
FlashCopy service
FlashCopy is a point-in-time copy services function that can quickly copy data from a source
location to a target location. FlashCopy enables you to make copies of a set of tracks, with
the copies immediately available for read or write access. This set of tracks can consist of an
entire volume, a data set, or just a selected set of tracks. The primary objective of FlashCopy
is to create a copy of a source volume on the target volume. This copy is called a
point-in-time copy. Access to the point-in-time copy of the data on the source volume is
through reading the data from the target volume. The actual point-in-time data that is read
from the target volume might or might not be physically stored on the target volume. The ESS
FlashCopy service is compatible with the existing service provided by DFSMSdss. Therefore,
you can invoke the FlashCopy service on the ESS with DFSMSdss.
(Figure: size comparison — the 5.25 in. high, 19 in. wide DS6800 enclosure, holding approximately 4.8 TB, next to a 75.25 in. high, 54.5 in. deep frame holding approximately 19.2 TB.)
The DS6000 series offers high scalability while maintaining excellent performance. With the
DS6800 (Model 1750-511), you can install up to 16 disk drive modules (DDMs). The
minimum storage capability with 8 DDMs is 584 GB. The maximum storage capability with 16
DDMs for the DS6800 model is 4.8 TB. If you want to connect more than 16 disks, you can
use up to 13 DS6000 expansion units (Model 1750-EX1) that allow a maximum of 224 DDMs
per storage system and provide a maximum storage capability of 67 TB.
DS6000 specifications
Table 3-3 summarizes the DS6000 features.
Max cache: 4 GB
RAID levels: 5, 10
Modular scalability
The DS6000 is modularly scalable, with optional expansion enclosure, to add capacity to help
meet your growing business needs. The scalability comprises:
Flexible design to accommodate on demand business environments
Ability to make dynamic configuration changes
– Add disk drives in increments of 4
– Add storage expansion units
Scale capacity to over 67 TB
(Figure: comparison of ESS 800 and DS8000 frame dimensions — heights of 75.25 in. and 76 in.; depths of 54.5 in. and 33.25 in.)
The current physical storage capacity of the DS8000 series system can range from 1.1 TB to
192 TB of physical capacity, and it has an architecture designed to scale to over
96 petabytes.
DS8000 models
The DS8000 series offers various choices of base and expansion models, so you can
configure storage units that meet your performance and configuration needs.
DS8100
The DS8100 (Model 921) features a dual two-way processor complex and support for one
expansion frame.
DS8300
The DS8300 (Models 922 and 9A2) features a dual four-way processor complex and support
for one or two expansion frames.
The DS8000 expansion frames (Models 92E and 9AE) expand the capabilities of the base
models. You can attach the Model 92E to either the Model 921 or the Model 922 to expand
their capabilities. You can attach the Model 9AE to expand the Model 9A2.
The DS8100 model can support one expansion frame. With one expansion frame, you can
expand the capacity of the Model 921 as follows:
Up to 384 disk drives, for a maximum capacity of 115.2 TB
The DS8300 models can support either one or two expansion frames. With expansion
frames, you can expand the Model 922 and 9A2 as follows:
With one expansion frame, you can support the following expanded capacity and number
of adapters:
– Up to 384 disk drives, for a maximum capacity of 115.2 TB
– Up to 32 fibre-channel/FICON or ESCON host adapters
With two expansion frames, you can support the following expanded capacity:
– Up to 640 disk drives, for a maximum capacity of 192 TB
(Figure: The DS8300 Model 9A2 exploits LPAR technology, allowing it to run two separate storage server images. Logical Partition A serves Workload A and Logical Partition B serves Workload B, each with its own LUNs, RAID ranks, and adapters connected through the DS8000 switched fabric.)
LPAR overview
A logical partition (LPAR) is a subset of logical resources that is capable of supporting an
operating system. It consists of CPUs, memory, and I/O slots that are a subset of the pool of
available resources within a system. These resources are assigned to the logical partition.
Isolation between LPARs is provided to prevent unauthorized access between partition
boundaries.
With these separate resources, each Storage System LPAR can run the same or different
versions of microcode, and can be used for completely separate production, test, or other
unique storage environments within this single physical system. This may enable storage
Copy services
FlashCopy®
Mirroring:
Metro Mirror (Synchronous PPRC)
Global Mirror (Asynchronous PPRC)
Metro/Global Copy (two- or three-site Asynchronous Cascading PPRC)
Global Copy (PPRC Extended Distance)
Global Mirror for zSeries (XRC) – DS6000 can be configured as an XRC target only
Metro/Global Mirror for zSeries (three-site solution using Synchronous PPRC and XRC) – DS6000 can be configured as an XRC target only
These hardware and software features, products, and services are available on the IBM
TotalStorage DS6000 and DS8000 series and IBM TotalStorage ESS Models 750 and 800. In
addition, a number of advanced Copy Services features that are part of the IBM TotalStorage
Resiliency family are available for the DS6000 and DS8000 series. The IBM TotalStorage DS
Family also offers systems to support enterprise-class data backup and disaster recovery
capabilities. As part of the IBM TotalStorage Resiliency Family of software, IBM TotalStorage
FlashCopy point-in-time copy capabilities back up data in the background while allowing
users nearly instant access to information on both source and target volumes. Metro and
Global Mirror capabilities create duplicate copies of application data at remote sites.
High-speed data transfers help to back up data for rapid retrieval.
Copy Services
Copy Services is a collection of functions that provides disaster recovery, data migration, and
data duplication functions. Copy Services runs on the DS6000 and DS8000 series and
supports open systems and zSeries environments.
Copy Services functions also are supported on the previous generation of storage systems,
the IBM TotalStorage Enterprise Storage Server.
For information about copy services see also 3.21, “ESS copy services” on page 79.
(Figure: TotalStorage Expert environment. TotalStorage Expert runs on a Windows XP or AIX server and is accessed through a Web browser such as Netscape or Internet Explorer; it manages ESS and DS8000 disk subsystems and VTS, Peer-to-Peer VTS, and 3494 Library Manager tape subsystems attached through ESCON and FICON to z/OS, UNIX, AS/400, and other hosts.)
TotalStorage Expert
TotalStorage Expert is an innovative software tool that gives administrators powerful, yet
flexible storage asset, capacity, and performance management capabilities to centrally
manage Enterprise Storage Servers located anywhere in the enterprise.
The two features are licensed separately. There are also upgrade features for users of
StorWatch Expert V1 with either the ESS or the ETL feature, or both, who want to migrate to
TotalStorage Expert V2.1.1.
TotalStorage Expert is designed to augment commonly used IBM performance tools such as
Resource Management Facility (RMF), DFSMS Optimizer, AIX Performance Toolkit, and
similar host-based performance monitors. While these tools provide performance statistics
from the host system’s perspective, TotalStorage Expert provides statistics from the ESS and
ETL system perspective.
The ESS is ideal for businesses with multiple heterogeneous servers, including zSeries,
UNIX, Windows NT®, Windows 2000, Novell NetWare, HP/UX, Sun Solaris, and AS/400
servers.
With Version 2.1.1, the TotalStorage ESS Expert is packaged with the TotalStorage ETL
Expert. The ETL Expert provides performance, asset, and capacity management for IBM’s
three ETL solutions:
IBM TotalStorage Enterprise Automated Tape Library, described in “IBM TotalStorage
Enterprise Automated Tape Library 3494” on page 103.
IBM TotalStorage Virtual Tape Server, described in “Introduction to Virtual Tape Server
(VTS)” on page 105.
IBM TotalStorage Peer-to-Peer Virtual Tapeserver, described in “IBM TotalStorage
Peer-to-Peer VTS” on page 108.
Both tools can run on the same server, share a common database, efficiently monitor storage
resources from any location within the enterprise, and provide a similar look and feel through
a Web browser user interface. Together they provide a complete solution that helps optimize
the potential of IBM disk and tape subsystems.
Tape volumes
Tape volumes are volumes that can be physically moved. You can store only sequential data
sets on tape. Tape volumes can be sent to a safe location, or to other data processing centers.
Internal labels are used to identify magnetic tape volumes and the data sets on those
volumes. You can process tape volumes with:
IBM standard labels
Labels that follow standards published by:
– International Organization for Standardization (ISO)
– American National Standards Institute (ANSI)
– Federal Information Processing Standard (FIPS)
Nonstandard labels
No labels
Note: Your installation can install a bypass for any type of label processing; however, the
use of labels is recommended as a basis for efficient control of your data.
IBM standard tape labels consist of volume labels and groups of data set labels. The volume
label, identifying the volume and its owner, is the first record on the tape. The data set labels,
which precede and follow each data set on the volume, identify and describe the data set.
Usually, the formats of ISO and ANSI labels, which are defined by the respective
organizations, are similar to the formats of IBM standard labels.
Nonstandard tape labels can have any format and are processed by routines you provide.
Unlabeled tapes contain only data sets and tape marks.
(Figure: tape organization. A tape with IBM standard labels contains an IBM standard volume label, an IBM standard data set header label, a tapemark, the data set, a tapemark, an IBM standard data set trailer label, and tapemarks. An unlabeled tape contains only the data set followed by tapemarks. TM = tapemark.)
Other parameters of the DD statement identify the data set, give volume and unit information
and volume disposition, and describe the data set's physical attributes. You can use a data
class to specify all of your data set's attributes (such as record length and record format), but
not data set name and disposition. Specify the name of the data class using the JCL keyword
DATACLAS. If you do not specify a data class, the automatic class selection (ACS) routines
assign a data class based on the defaults defined by your storage administrator.
An example of allocating a tape data set using DATACLAS in the DD statement of the JCL
statements follows. In this example, TAPE01 is the name of the data class.
//NEW DD DSN=DATASET.NAME,UNIT=TAPE,DISP=(,CATLG,DELETE),DATACLAS=TAPE01,LABEL=(1,SL)
AL ISO/ANSI/FIPS labels
BLP Bypass label processing. The data is treated in the same manner as if NL had been
specified, except that the system does not check for an existing volume label. The user is
responsible for the positioning. If your installation does not allow BLP, the data is treated
exactly as if NL had been specified. Your job can use BLP only if the Job Entry Subsystem
(JES) through Job class, RACF through TAPEVOL class, or DFSMSrmm(*) allow it.
LTM Bypass a leading tape mark, if encountered, on unlabeled tapes from VSE.
Note: If you do not specify the label type, the operating system assumes that the data set
has IBM standard labels.
(Figure: cartridge capacity by device type — 3480 = 200 MB, 3490 = 800 MB, 3590 = 10,000 MB, 3592 = 300,000 MB.)
Tape capacity
The capacity of a tape depends on the device type that records it. 3480 and 3490 tapes are
physically the same cartridges. The IBM 3590 and 3592 high performance cartridge tapes
are not compatible with the 3480, 3490, or 3490E drives. 3490 units can read 3480
cartridges, but cannot record in 3480 format, and 3480 units cannot read or write 3490 cartridges.
Tape mount management allows you to efficiently fill a tape cartridge to its capacity and gain
full benefit from improved data recording capability (IDRC) compaction, 3490E Enhanced
Capability Magnetic Tape Subsystem, 36-track enhanced recording format, and Enhanced
Capacity Cartridge System Tape. By filling your tape cartridges, you reduce your tape mounts
and even the number of tape volumes you need.
With an effective tape cartridge capacity of 2.4 GB using 3490E and the Enhanced Capacity
Cartridge System Tape, DFSMS can intercept all but extremely large data sets and manage
them with tape mount management. By implementing tape mount management with DFSMS,
you might reduce your tape mounts by 60% to 70% with little or no additional hardware.
Tape mount management also improves job throughput because jobs are no longer queued
up on tape drives. Approximately 70% of all tape data sets queued up on drives are less than
10 MB. With tape mount management, these data sets reside on DASD while in use. This
frees up the tape drives for other allocations.
Tape mount management recommends that you use DFSMShsm to do interval migration to
SMS storage groups. You can use ACS routines to redirect your tape data sets to a tape
mount management DASD buffer storage group. DFSMShsm scans this buffer on a regular
basis and migrates the data sets to migration level 1 DASD or migration level 2 tape as soon
as possible, based on the management class and storage group specifications.
Table 3-6 lists all IBM tape capacities supported since 1952.
For further information about tape processing, see z/OS DFSMS Using Magnetic Tapes,
SC26-7412.
The IBM 3592 tape drive can be used as a standalone solution or as an automated solution
within a 3494 tape library.
Improved environmentals
Because the 3592 drive has a smaller form factor than the 3590 Magstar drive, you can put
two 3592 drives in place of one 3590 drive in the 3494. In a standalone solution you can put a
maximum of 12 drives into one 19-inch rack, managed by one controller.
VTS models:
Model B10 VTS
Model B20 VTS
Peer-to-Peer (PtP) VTS (up to twenty-four 3590 tape drives)
VTS design (single VTS):
32, 64, 128 or 256 3490E virtual devices
Tape volume cache:
Analogous to DASD cache
Data access through the cache
Dynamic space management
Cache hits eliminate tape mounts
Up to twelve 3590 tape drives (the real 3590 volumes contain up to 250,000 virtual volumes per VTS)
Stacked 3590 tape volumes managed by the 3494
VTS introduction
The IBM Magstar Virtual Tape Server (VTS), integrated with the IBM Tape Library
Dataservers (3494), delivers an increased level of storage capability beyond the traditional
storage products hierarchy. The host software sees VTS as a 3490 Enhanced Capability
(3490E) Tape Subsystem with associated standard (CST) or Enhanced Capacity Cartridge
System Tapes (ECCST). This virtualization of both the tape devices and the storage media to
the host allows for transparent utilization of the capabilities of the IBM 3590 tape technology.
Along with introduction of the IBM Magstar VTS, IBM introduced new views of volumes and
devices because of the different knowledge about volumes and devices in the host system
and the hardware. Using a VTS subsystem, the host application writes tape data to virtual
devices. The volumes created by the hosts are called Virtual Volumes and are physically
stored in a tape volume cache that is built from RAID DASD.
VTS models
These are the IBM 3590 drives you can choose:
For the Model B10 VTS, four, five, or six 3590-B1A/E1A/H1A can be associated with VTS.
For the Model B20 VTS, six to twelve 3590-B1A/E1A/H1A can be associated with VTS.
Each ESCON channel in the VTS is capable of supporting 64 logical paths, providing up to
1024 logical paths for Model B20 VTS with sixteen ESCON channels, and 256 logical paths
for Model B10 VTS with four ESCON channels. Each logical path can address any of the 32,
64, 128, or 256 virtual devices in the Model B20 VTS.
Through tape volume cache management policies, the VTS management software moves
host-created volumes from the tape volume cache to a Magstar cartridge managed by the
VTS subsystem. When a virtual volume is moved from the tape volume cache to tape, it
becomes a logical volume.
VTS design
VTS looks like an automatic tape library with thirty-two 3490E drives and 50,000 volumes in
37 square feet. Its major components are:
Magstar 3590 (three or six tape drives) with two ESCON channels
Magstar 3494 Tape Library
Fault-tolerant RAID-1 disks (36 GB or 72 GB)
RISC Processor
VTS functions
VTS provides the following functions:
Thirty-two 3490E virtual devices.
Tape volume cache (implemented in a RAID-1 disk) that contains virtual volumes.
The tape volume cache consists of a high performance array of DASD and storage
management software. Virtual volumes are held in the tape volume cache when they are
being used by the host system. Outboard storage management software manages which
virtual volumes are in the tape volume cache and the movement of data between the tape
volume cache and physical devices. The size of the DASD is made large enough so that
more virtual volumes can be retained in it than just the ones currently associated with the
virtual devices.
After an application modifies and closes a virtual volume, the storage management
software in the system makes a copy of it onto a physical tape. The virtual volume remains
available on the DASD until the space it occupies reaches a predetermined threshold.
Leaving the virtual volume in the DASD allows for fast access to it during subsequent
requests. The DASD and the management of the space used to keep closed volumes
available is called tape volume cache. Performance for mounting a volume that is in tape
volume cache is quicker than if a real physical volume is mounted.
Up to six 3590 tape drives; the real 3590 volume contains logical volumes. Installation
sees up to 50,000 volumes.
VTS is expected to provide a ratio of 59:1 in volume reduction, with dramatic savings in all
tape hardware items (drives, controllers, and robots).
(Figure: Peer-to-Peer VTS configuration. Virtual tape controllers (VTCs) in a CX1 frame connect the zSeries hosts through FICON/ESCON to the master VTS and the I/O VTS; the VTSs form distributed libraries, and one library acts as the user interface (UI) library.)
Peer-to-Peer VTS
IBM TotalStorage Peer-to-Peer Virtual Tape Server, an extension of IBM TotalStorage Virtual
Tape Server, is specifically designed to enhance data availability. It accomplishes this by
providing dual volume copy, remote functionality, and automatic recovery and switchover
capabilities. With a design that reduces single points of failure (including the physical media
where logical volumes are stored), IBM TotalStorage Peer-to-Peer Virtual Tape Server
improves system reliability and availability, as well as data access. To help protect current
hardware investments, existing IBM TotalStorage Virtual Tape Servers can be upgraded for
use in this new configuration.
IBM TotalStorage Peer-to-Peer Virtual Tape Server consists of new models and features of
the 3494 Tape Library that are used to join two separate Virtual Tape Servers into a single,
interconnected system. The two virtual tape systems can be located at the same site or at
different sites that are geographically remote. This provides a remote copy capability for
remote vaulting applications.
IBM TotalStorage Peer-to-Peer Virtual Tape Server appears to the host IBM eServer zSeries
processor as a single automated tape library with 64, 128, or 256 virtual tape drives and up to
500,000 virtual volumes. The configuration of this system has up to 3.5 TB of Tape Volume
Cache native (10.4 TB with 3:1 compression), up to 24 IBM TotalStorage 3590 tape drives,
and up to 16 host ESCON or FICON channels.
(Figure 3-35 Storage area network (SAN): LAN-attached servers connected through the SAN to any storage, such as the ESS.)
SANs today are usually built using fibre channel technology, but the concept of a SAN is
independent of the underlying type of network.
There are different SAN topologies based on fibre channel networks:
Point-to-Point
A simple link is used to provide high-speed interconnection between two
nodes.
Arbitrated loop
The fibre channel arbitrated loop offers relatively high bandwidth and connectivity at a low
cost. In order for a node to transfer data, it must first arbitrate to win control of the loop.
Once the node has control, it is free to establish a virtual point-to-point connection with
another node on the loop. After this point-to-point (virtual) connection is established, the
two nodes consume all of the loop’s bandwidth until the data transfer operation is
complete. Once the transfer is complete, any node on the loop can then arbitrate to win
control of the loop.
Switched
Fibre channel switches function in a manner similar to traditional network switches to
provide increased bandwidth, scalable performance, an increased number of devices,
and, in some cases, increased redundancy.
Multiple switches can be connected to form a switch fabric capable of supporting a large
number of host servers and storage subsystems. When switches are connected, each
switch’s configuration information has to be copied into all the other participating switches.
This is called cascading.
Today zSeries has 2 Gbps link data rate support. The 2 Gbps links are for native FICON,
FICON CTC, cascaded directors and fibre channels—FCP channels—on the FICON Express
cards on z800, z900, and z990 only.
4.1 Overview of DFSMSdfp utilities
DFSMSdfp utilities
Utilities are programs that perform commonly needed functions. DFSMS provides utility
programs to assist you in organizing and maintaining data. There are system and data set
utility programs that are controlled by JCL, and utility control statements.
The base JCL and some utility control statements necessary to use these utilities are
provided in the major discussion of the utility programs in this chapter. For more details and to
help you find the program that performs the function you need, see “Guide to Utility Program
Functions” in topic 1.1 of z/OS DFSMSdfp Utilities, SC26-7414.
Table 4-1 on page 115 lists and describes system utilities. Programs that provide functions
which are better performed by newer applications (such as ISMF, ISPF/PDF or DFSMSrmm
or DFSMSdss) are marked with an asterisk (*) in the table.
IEHPROGM (alternatives: Access Method Services, PDF 3.2) — Build and maintain system control data.
*IFHSTATR (alternatives: DFSMSrmm, EREP) — Select, format, and write information about tape errors from the IFASMFDP tape.
These utilities allow you to manipulate partitioned, sequential or indexed sequential data sets,
or partitioned data sets extended (PDSEs), which are provided as input to the programs. You
can manipulate data ranging from fields within a logical record to entire data sets. The data
set utilities included in this section cannot be used with VSAM data sets. You use the
IDCAMS utility to manipulate VSAM data sets; refer to “Invoking the IDCAMS utility program”
on page 136.
Table 4-2 lists data set utility programs and their use. Programs that provide functions which
are better performed by newer applications, such as ISMF or DFSMSrmm or DFSMSdss, are
marked with an asterisk (*) in the table.
*IEBCOMPR (alternatives: SuperC, PDF 3.12) — Compare records in sequential or partitioned data sets, or PDSEs.
IEBGENER (alternative: ICEGENER) — Copy records from a sequential data set, or convert a data set from sequential organization to partitioned organization.
*IEBIMAGE — Modify, print, or link modules for use with the IBM 3800 Printing Subsystem, the IBM 3262 Model 5, or the 4284 printer.
IEBPTPCH (alternatives: PDF 3.1 or 3.6) — Print or punch records in a sequential or partitioned data set.
(Figure 4-2: directories of two partitioned data sets. In Example 1, PDSE1 Directory 1 contains A B C D G L and PDSE2 Directory 2 contains A B C D E F G H I J K L. In Example 2, PDSE1 Directory 1 contains A B C F H I J and PDSE2 Directory 2 contains A B F G H I J.)
IEBCOMPR utility
IEBCOMPR is a data set utility used to compare two sequential data sets, two partitioned
data sets (PDS), or two PDSEs, at the logical record level, to verify a backup copy. Fixed,
variable, or undefined records from blocked or unblocked data sets or members can also be
compared. However, you should not use IEBCOMPR to compare load modules.
Two sequential data sets are considered equal (that is, are considered to be identical) if:
The data sets contain the same number of records
Corresponding records and keys are identical
Two partitioned data sets or two PDSEs are considered equal if:
Corresponding members contain the same number of records
Note lists are in the same position within corresponding members
Corresponding records and keys are identical
Corresponding directory user data fields are identical
If these conditions are not all met for a specific type of data set, those data sets are
considered unequal. If records are unequal, the record and block numbers, the names of the
DD statements that define the data sets, and the unequal records are listed in a message
data set. Ten successive unequal comparisons stop the job step, unless you provide a
routine for handling error conditions.
A partitioned data set or partitioned data set extended can be compared only if all names in
one or both directories have counterpart entries in the other directory. The comparison is
made on members identified by these entries and corresponding user data.
You can run this sample JCL to compare two cataloged, partitioned organized (PO) data sets:
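The sample job itself is not reproduced in the text; the following is a minimal sketch (the data set names are hypothetical) of an IEBCOMPR job that compares two cataloged PO data sets:
//COMPARE  JOB ...
//STEP1    EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.PDS1,DISP=SHR
//SYSUT2   DD DSN=MY.PDS2,DISP=SHR
//SYSIN    DD *
  COMPARE TYPORG=PO
/*
The TYPORG=PO control statement tells IEBCOMPR that the input data sets are partitioned.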
Figure 4-2 on page 116 shows several examples of the directories of two partitioned data
sets.
In Example 1, Directory 2 contains corresponding entries for all the names in Directory 1;
therefore, the data sets can be compared.
In Example 2, each directory contains a name that has no corresponding entry in the other
directory; therefore, the data sets cannot be compared, and the job step will be ended.
IEBCOPY utility
IEBCOPY is a data set utility used to copy or merge members between one or more
partitioned data sets (PDS), or partitioned data sets extended (PDSE), in full or in part. You
can also use IEBCOPY to create a backup of a partitioned data set into a sequential data set
(called an unload data set or PDSU), and to copy members from the backup into a partitioned
data set.
In addition, IEBCOPY automatically lists the number of unused directory blocks and the
number of unused tracks available for member records in the output partitioned data set.
INDD statement
This statement specifies the names of DD statements that locate the input data sets. When
an INDD= appears in a record by itself (that is, not with a COPY keyword), it functions as a
control statement and begins a new step in the current copy operation.
INDD=[(]{DDname|(DDname,R)}[,...][)]
R specifies that all members to be copied or loaded from this input data set are to replace
any identically named members on the output partitioned data set.
OUTDD statement
This statement specifies the name of a DD statement that locates the output data set.
OUTDD=DDname
SELECT statement
This statement selects specific members to be processed from one or more data sets by
coding a SELECT statement to name the members. Alternatively, all members but a specific
few can be designated by coding an EXCLUDE statement to name members not to be
processed.
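As a minimal sketch (the data set and member names are hypothetical), a copy operation that replaces identically named members and selects two members might be coded as follows:
//COPYJOB  JOB ...
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INLIB    DD DSN=MY.INPUT.PDS,DISP=SHR
//OUTLIB   DD DSN=MY.OUTPUT.PDS,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUTLIB,INDD=((INLIB,R))
  SELECT MEMBER=(MEMA,MEMB)
/*
The R after the input ddname requests that identically named members on the output data set be replaced.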
(Figure: IEBCOPY copy operation. Before the copy, DATA.SET1 contains members A, B, and F plus unused space; input DATA.SET5 contains members A and C; input DATA.SET6 contains members B, C, and D. After the copy, and before a compress, the DATA.SET1 directory lists A, B, C, D, and F, and the space occupied by the replaced members is unused.)
COPY processing
Processing occurs as follows:
1. Member A is not copied from DATA.SET5 into DATA.SET1 because it already exists on
DATA.SET1 and the replace option was not specified for DATA.SET5.
2. Member C is copied from DATA.SET5 to DATA.SET1, occupying the first available space.
3. All members are copied from DATA.SET6 to DATA.SET1, immediately following the last
member. Members B and C are copied even though the output data set already contains
members with the same names because the replace option is specified on the data set
level.
The pointers in DATA.SET1's directory are changed to point to the new members B and C.
Thus, the space occupied by the old members B and C is unused.
DATA.SET1 DATA.SET1
Directory Directory
ABCDF ABCDF
Member F Member F
A A
Unused Compress B
B D
D C
C Available
The simplest way to request a compress-in-place operation is to specify the same ddname for
both the OUTDD and INDD parameters of a COPY statement.
Example
In our example in “IEBCOPY: Copy operation” on page 120, the pointers in the DATA.SET1
directory are changed to point to the new members B and C. Thus, the space occupied by the
old members B and C is unused. The members currently on DATA.SET1 are compressed in
place as a result of the copy operation, thereby eliminating embedded unused space.
However, be aware that a compress-in-place operation may bring risk to your data if
something abnormally disrupts the process.
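A minimal sketch of such a compress-in-place job (reusing the DATA.SET1 name from the example) follows; the same ddname is specified on both INDD and OUTDD:
//COMPRESS JOB ...
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//PDSDD    DD DSN=DATA.SET1,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=PDSDD,INDD=PDSDD
/*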
Using IEBGENER
IEBGENER copies records from a sequential data set or converts sequential data sets into
members of PDSs or PDSEs. You can use IEBGENER to:
Create a backup copy of a sequential data set, a member of a partitioned data set or
PDSE, or a UNIX System Services file such as an HFS file.
Produce a partitioned data set or PDSE, or a member of a partitioned data set or PDSE,
from a sequential data set or a UNIX System Services file.
Expand an existing partitioned data set or PDSE by creating partitioned members and
merging them into the existing data set.
Produce an edited sequential or partitioned data set or PDSE.
Manipulate data sets containing double-byte character set data.
Print sequential data sets or members of partitioned data sets or PDSEs or UNIX System
Services files.
Re-block or change the logical record length of a data set.
Copy user labels on sequential output data sets.
Supply editing facilities and exits.
Jobs that call IEBGENER have a system-determined block size used for the output data set if
RECFM and LRECL are specified, but BLKSIZE is not specified. The data set is also
considered to be system-reblockable.
In Figure 4-9 on page 124, the data set in SYSUT1 is a PDS or PDSE member and the data
set in SYSUT2 is a UNIX file. This job creates a macro library in the UNIX directory.
// JOB ....
// EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=PROJ.BIGPROG.MACLIB(MAC1),DISP=SHR
//SYSUT2 DD PATH='/u/BIGPROG/macros/special/MAC1',PATHOPTS=OCREAT,
// PATHDISP=(KEEP,DELETE),
// PATHMODE=(SIRUSR,SIWUSR,
// SIRGRP,SIROTH),
// FILEDATA=TEXT
//SYSIN DD DUMMY
Note: If you have the DFSORT product installed, you should be using ICEGENER as an
alternative to IEBGENER when making an unedited copy of a data set or member. It may
already be installed in your system under the name IEBGENER. It generally gives better
performance.
Utility control
statements
Sequential Input Existing
define record Expanded
groups
Member B Data Set Data Set
name members
LASTREC C
E
Member F E
G
G
B
Available D
F
Figure 4-10 shows how sequential input is converted into members that are merged into an
existing partitioned data set or PDSE. The left side of the figure shows the sequential input
that is to be merged with the partitioned data set or PDSE shown in the middle of the figure.
Utility control statements are used to divide the sequential data set into record groups and to
provide a member name for each record group. The right side of the figure shows the
expanded partitioned data set or PDSE.
Note that members B, D, and F from the sequential data set were placed in available space
and that they are sequentially ordered in the partitioned directory.
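The utility control statements for such a merge are not shown in the text; a minimal sketch (the data set names, member names, and the LASTREC delimiter field are assumptions based on the figure) might look like this:
//MERGE    JOB ...
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.SEQ.INPUT,DISP=SHR
//SYSUT2   DD DSN=MY.EXISTING.PDS,DISP=OLD
//SYSIN    DD *
  GENERATE MAXNAME=3,MAXGPS=2
  MEMBER NAME=B
  RECORD IDENT=(7,'LASTREC',1)
  MEMBER NAME=D
  RECORD IDENT=(7,'LASTREC',1)
  MEMBER NAME=F
/*
Each RECORD IDENT statement marks the last record of a record group, and each MEMBER statement names the member to be created from the group that follows; MAXNAME and MAXGPS must be at least the number of MEMBER and IDENT keywords, respectively.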
(Figure: a simple IEBGENER copy of the sequential data set MY.DATA to MY.DATA.OUTPUT.)
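A minimal sketch of that straight copy (the allocation parameters are assumptions) could be:
//GENER    JOB ...
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.DATA.OUTPUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),DCB=*.SYSUT1
//SYSIN    DD DUMMY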
For further information about IEBGENER, refer to z/OS DFSMSdfp Utilities, SC26-7414.
Using IEHLIST
IEHLIST is a system utility used to list entries in the directory of one or more partitioned data
sets or PDSEs, or entries in an indexed or non-indexed volume table of contents. Any number
of listings can be requested in a single execution of the program.
The directory of a partitioned data set is composed of variable-length records blocked into
256-byte blocks. Each directory block can contain one or more entries that reflect member or
alias names and other attributes of the partitioned members. IEHLIST can list these blocks in
edited and unedited format.
The directory of a PDSE, when listed, will have the same format as the directory of a
partitioned data set.
If you include the keyword FORMAT in the LISTVTOC parameter, you will have more detailed
information about the DASD and about the data sets, and you can also specify the DSNAME
that you want to request information about. If you specify the keyword DUMP instead of
FORMAT, you will get an unformatted VTOC listing.
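As a minimal sketch (the volume serial and data set name are hypothetical), a job that lists a formatted VTOC, and optionally restricts the listing to one data set, might be:
//LISTJOB  JOB ...
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//* A DD statement makes the volume available to IEHLIST
//DD1      DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=VOL001
  LISTVTOC FORMAT,VOL=3390=VOL001,DSNAME=(MY.DATA.SET)
/*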
Note: This information is at the DASD volume level, and does not have any interaction with
the catalog.
IEHINITT utility
IEHINITT is a system utility used to place standard volume label sets onto any number of
magnetic tapes mounted on one or more tape units. They can be ISO/ANSI Version 3 or
ISO/ANSI Version 4 volume label sets written in ASCII (American Standard Code for
Information Interchange) or IBM standard labels written in EBCDIC.
(Omit REFRESH if you did not have this option active previously.)
To further protect against overwriting the wrong tape, IEHINITT asks the operator to verify
each tape mount.
In Figure 4-17, serial numbers 001234, 001244, 001254, 001264, 001274, and so forth are
placed on eight tape volumes. The labels are written in EBCDIC at 800 bits per inch. Each
volume labeled is mounted, when it is required, on one of four 9-track tape units.
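Figure 4-17 itself is not reproduced here. As a minimal sketch (the esoteric unit name, density, and serial number are assumptions, and only three tapes are labeled), a job that writes IBM standard labels in EBCDIC could look like the following; the name on the INITT statement must match the DD name that defines the tape unit:
//LABELJOB JOB ...
//STEP1    EXEC PGM=IEHINITT
//SYSPRINT DD SYSOUT=*
//* DEN=2 requests 800 bits per inch on a 9-track unit
//TAPEDD   DD UNIT=(TAPE,1,DEFER),DCB=DEN=2
//SYSIN    DD *
TAPEDD INITT SER=001234,NUMBTAPE=3
/*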
Detailed procedures for using the program are described in z/OS DFSMSrmm
Implementation and Customization Guide, SC26-7405.
Note: DFSMSrmm is an optional priced feature of DFSMS. That means that EDGINERS
can only be used when DFSMSrmm is licensed. If DFSMSrmm is licensed, IBM
recommends that you use EDGINERS for tape initialization instead of using IEHINITT.
IEFBR14 program
IEFBR14 is not a utility program. It is a two-line program that clears register 15, thus passing
a return code of 0. It then branches to the address in register 14, which returns control to the
system. In other words, it is a dummy (do-nothing) program. It can be used in a step to force
MVS (specifically, the initiator) to process the JCL code and execute functions such as the
following:
Checking all job control statements in the step for syntax
Allocating direct access space for data sets
Performing data set dispositions like creating new data sets or deleting old ones
Note: Although the system allocates space for data sets, it does not initialize the new data
sets. Therefore, any attempt to read from one of these new data sets in a subsequent step
may produce unpredictable results. Also, we do not recommend allocation of multi-volume
data sets while executing IEFBR14.
In the example in Figure 4-18 the first DD statement DD1 deletes old data set DATA.SET1.
The second DD statement creates a new PDS with name DATA.SET2.
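Figure 4-18 is not reproduced in the text; a minimal sketch of the job it describes (the allocation parameters are assumptions) follows:
//BR14JOB  JOB ...
//STEP1    EXEC PGM=IEFBR14
//* DD1 deletes the old data set DATA.SET1
//DD1      DD DSN=DATA.SET1,DISP=(OLD,DELETE,DELETE)
//* DD2 creates and catalogs a new PDS named DATA.SET2
//DD2      DD DSN=DATA.SET2,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5,10)),
//            DCB=(RECFM=FB,LRECL=80)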
Access methods
An access method is a friendly program interface between programs and their data. It is in
charge of interfacing with Input Output Supervisor (IOS), the z/OS code that starts the I/O
operation. An access method makes the physical organization of data transparent to you by:
Managing data buffers
Blocking and de-blocking logical records into physical blocks
Synchronizing your task and the I/O operation (wait/post mechanism)
Writing the channel program
Optimizing the performance characteristics of the control unit (such as caching and data
striping)
Compressing and decompressing I/O data
Executing software error recovery
In contrast to other platforms, z/OS supports several types of access methods and data
organizations.
An access method defines the organization by which the data is stored and retrieved. DFSMS
access methods have their own data set structures for organizing data, macros, and utilities
to define and process data sets. Choosing the most appropriate access method for its data is
an application decision, based on the type of access required (sequential or random) and on
whether insertions and deletions must be allowed.
Optionally, BDAM uses hardware keys. Hardware keys are less efficient than the optional
software keys in VSAM KSDS.
Note: Because BDAM tends to require the use of device-dependent code, it is not a
recommended access method. In addition, using keys is much less efficient than in VSAM.
BDAM is supported by DFSMS only to enable compatibility with other IBM operating
systems.
For information about partitioned organized data set, see 4.22, “Partitioned organized (PO)
data sets” on page 148, and subsequent sections.
VSAM arranges and retrieves logical records by an index key, relative record number, or
relative byte addressing (RBA). A logical record has an RBA, which is the relative byte
address of its first byte in relation to the beginning of the data set. VSAM is used for direct,
sequential or skip sequential processing of fixed-length and variable-length records on
DASD. VSAM data sets (also named clusters) are always cataloged. There are five types of
cluster organization:
Entry-sequenced data set (ESDS): Contains records in the order in which they were
entered. Records are added to the end of the data set and can be accessed sequentially
or randomly through the RBA.
Key-sequenced data set (KSDS): Contains records in ascending collating sequence of the
contents of a logical record field called key. Records can be accessed by the contents of
such key, or by an RBA.
Linear data set (LDS): Contains data that has no record boundaries. Linear data sets
contain none of the control information that other VSAM data sets do. Data in Virtual (DIV)
is an optional intelligent buffering technique that includes a set of assembler macros that
provide buffering access to VSAM linear data sets. See “VSAM: Data-in-virtual (DIV)” on
page 177.
Relative record data set (RRDS): Contains logical records in relative record number order;
the records can be accessed sequentially or randomly based on this number. There are
two types of relative record data sets:
– Fixed-length RRDS: logical records must be of fixed length.
– Variable-length RRDS: logical records can vary in length.
A z/OS UNIX file (HFS or zFS) can be accessed as though it were a VSAM entry-sequenced
data set (ESDS). Although UNIX files are not actually stored as entry-sequenced data sets,
the system attempts to simulate the characteristics of such a data set. To identify or access a
UNIX file, specify the path that leads to it.
All access method services commands have the following general structure:
COMMAND parameters ... [terminator]
The command defines the type of service requested; the parameters further describe the
service requested; the terminator indicates the end of the command statement.
Time Sharing Option (TSO) users can use functional commands only. For more information
about modal commands, refer to z/OS DFSMS Access Method Services for Catalogs,
SC26-7394.
You can call the access method services program in the following ways:
As a job or jobstep
From a TSO session
From within your own program
TSO users can run access method services functional commands from a TSO session as
though they were TSO commands.
For more information, refer to “Invoking Access Method Services from Your Program” in z/OS
DFSMS Access Method Services for Catalogs, SC26-7394.
As a job or jobstep
You can use JCL statements to call access method services. PGM=IDCAMS identifies the
access method services program, as shown in Figure 4-21.
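Figure 4-21 is not reproduced here; a minimal sketch of such a job step (the LISTCAT command and data set name are only illustrative) follows:
//AMSJOB   JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(MY.DATA.SET) ALL
/*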
Each time you enter an access method services command as a TSO command, TSO builds
the appropriate interface information and calls access method services. You can enter one
command at a time. Access method services processes the command completely before
TSO lets you continue processing. Except for ALLOCATE, all the access method services
functional commands are supported in a TSO environment.
To use IDCAMS and some of its parameters from TSO/E, you must update the IKJTSOxx
member of SYS1.PARMLIB. Add IDCAMS to the list of authorized programs (AUTHPGM).
For more information, refer to z/OS DFSMS Access Method Services for Catalogs,
SC26-7394.
ALTER Alters attributes of data sets, catalogs, tape library entries, and tape
volume entries that have already been defined.
BLDINDEX Builds alternate indexes (AIX) for existing VSAM base clusters.
DCOLLECT Collects data set, volume usage, and migration utility information.
DEFINE ALIAS Defines an alternate name for a user catalog or a non-VSAM data set.
DEFINE ALTERNATEINDEX Defines an alternate index for a KSDS or ESDS VSAM data set.
DEFINE CLUSTER Creates KSDS, ESDS, RRDS, VRRDS and linear VSAM data sets.
DEFINE PATH Defines a path directly over a base cluster or over an alternate index
and its related base cluster.
IMPORT Connects user catalogs, and imports a VSAM cluster and its ICF catalog
information.
PRINT Used to print VSAM data sets, non-VSAM data sets, and catalogs.
VERIFY Causes a catalog to correctly reflect the end of a data set after an error
occurred while closing a VSAM data set. The error might have caused
the catalog to be incorrect.
For a complete description of all AMS commands, refer to z/OS DFSMS Access Method
Services for Catalogs, SC26-7394.
Note: These commands cannot be used when access method services is run in TSO. See
z/OS DFSMS Access Method Services for Catalogs, SC26-7394, for a complete
description of the AMS modal commands.
DCOLLECT functions
Capacity planning
Active data sets
VSAM clusters
Migrated data sets
Backed-up data sets
SMS configuration information
Data is gathered from the VTOC, VVDS, and DFSMShsm control data set for both managed
and non-managed storage. ISMF provides the option to build the JCL necessary to execute
DCOLLECT.
DCOLLECT example
With sample JCL such as that in Figure 4-25 (sketched below), you can gather information about all volumes belonging to storage group STGGP001.
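A sketch of such a job follows (the output data set name and attributes are assumptions; ISMF can build the equivalent JCL for you):
//DCOLJOB  JOB (ACCT),'DCOLLECT',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//*  OUTDS receives the variable-length DCOLLECT output records
//OUTDS    DD DSN=USER.DCOLLECT.OUTPUT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),
//            DCB=(RECFM=VB,LRECL=644,BLKSIZE=0)
//SYSIN    DD *
  DCOLLECT OFILE(OUTDS) STORAGEGROUP(STGGP001)
/*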
Figure 4-26 (schematic): GDG ABC.GDG defined with a limit of 5. The generations ABC.GDG.G0001V00 (-4) through ABC.GDG.G0005V00 (0) reside on volumes VOLABC and VOLDEF, with G0001V00 the oldest and G0005V00 the newest.
Generation data sets can be sequential, PDSs, or direct (BDAM). Generation data sets
cannot be PDSEs, UNIX files, or VSAM data sets. The same GDG may contain SMS and
non-SMS data sets.
There are usability advantages to grouping related data sets using a function such as GDS.
For example, the catalog management routines can refer to the information in a special index
called a generation index in the catalog. Thus:
All of the data sets in the group can be referred to by a common name.
z/OS is able to keep the generations in chronological order.
Outdated or obsolete generations can be automatically deleted from the catalog by z/OS.
Another advantage is the ability to reference a new generation using the same JCL.
A generation data group (GDG) base is allocated in a catalog before the GDS’s are
cataloged. Each GDG is represented by a GDG base entry. Use the access method services
DEFINE command to allocate the GDG base (see also 4.19, “Defining a generation data
group” on page 144).
The GDG base is a construct that exists only in a user catalog; it does not exist as a data set
on any volume. The GDG base is used to maintain the generation data sets (GDS), which are
the real data sets.
The number of GDS’s in a GDG depends on the limit you specify when you create a new
GDG in the catalog.
GDG example
In our example in Figure 4-26, the limit is 5. That means the GDG can hold a maximum of
five GDS's. Our data set name is ABC.GDG. You can access the GDS's by their relative
names: ABC.GDG(0) corresponds to the absolute name ABC.GDG.G0005V00,
ABC.GDG(-1) would be generation ABC.GDG.G0004V00, and so on. The relative number
can also be used to catalog a new generation (+1), which would be generation number 6 with
an absolute name of ABC.GDG.G0006V00. Because the limit is 5, the oldest generation
(G0001V00) is rolled off when you define the new one.
The parameters you specify on the DEFINE GENERATIONDATAGROUP IDCAMS command determine
what happens to rolled-off GDS's. For example, if you specify the SCRATCH parameter, the
GDS is scratched from the VTOC when it is rolled off. If you specify the NOSCRATCH parameter,
the rolled-off generation data set is recataloged as rolled off and is disassociated from its
generation data group.
GDS’s can be in a deferred roll-in state if the job never reached end-of-step or if they were
allocated as DISP=(NEW,KEEP) and the data set is not system-managed. However, GDS’s
in a deferred roll-in state can be referred to by their absolute generation numbers. You can
use the IDCAMS command ALTER ROLLIN to roll in these GDS’s.
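For example, using the generation names from the example above, a deferred generation could be rolled in with a command such as:
  ALTER ABC.GDG.G0006V00 ROLLIN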
For further information about Generation Data Groups, refer to z/OS DFSMS: Using Data
Sets, SC26-7410.
The figure (not fully reproduced here) shows a DEFINE GENERATIONDATAGROUP job specifying the EMPTY, NOSCRATCH, and LIMIT(255) parameters, together with the resulting catalog entry and the available space shown in the VTOC.
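Reconstructed as a sketch (the job and step names are assumptions), the define could look like this:
//DEFGDG1  JOB (ACCT),'DEFINE GDG',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Define the GDG base; no space is allocated on any volume */
  DEFINE GENERATIONDATAGROUP( -
         NAME(GDG01) -
         EMPTY -
         NOSCRATCH -
         LIMIT(255))
/*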
The DEFINE GENERATIONDATAGROUP command defines a GDG base catalog entry GDG01.
Figure 4-29 shows a generation data set defined within the GDG by using JCL statements.
The job DEFGDG2 allocates space and catalogs a GDG data set in the newly-defined GDG.
The job control statement GDGDD1 DD specifies the GDG data set in the GDG.
Only one model DSCB is necessary for any number of generations. If you plan to use only
one model, do not supply DCB attributes when you create the model. When you
subsequently create and catalog a generation, include necessary DCB attributes in the DD
statement referring to the generation. In this manner, any number of GDGs can refer to the
same model. The catalog and model data set label are always located on a direct access
volume, even for a magnetic tape GDG.
Restriction: You cannot use a model DSCB for system-managed generation data sets.
The generation and version number are in the form GxxxxVyy, where xxxx is an unsigned
four-digit decimal generation number (0001 through 9999) and yy is an unsigned two-digit
decimal version number (00 through 99). For example:
A.B.C.G0001V00 is generation data set 1, version 0, in generation data group A.B.C.
A.B.C.G0009V01 is generation data set 9, version 1, in generation data group A.B.C.
The number of generations and versions is limited by the number of digits in the absolute
generation name; that is, there can be 9,999 generations. Each generation can have 100
versions. The system automatically maintains the generation number.
You can catalog a generation using either absolute or relative numbers. When a generation is
cataloged, a generation and version number is placed as a low-level entry in the generation
data group. To catalog a version number other than V00, you must use an absolute
generation and version number.
Relative and absolute generation names (from the figure): A.B.C.G0005V00 = A.B.C(-1) and A.B.C.G0006V00 = A.B.C(0) refer to existing (old) GDSs for read or update; A.B.C.G0007V00 = A.B.C(+1) defines a new GDS.
The value of the specified integer tells the operating system what generation number to
assign to a new generation data set, or it tells the system the location of an entry representing
a previously cataloged old generation data set.
When you use a relative generation number to catalog a generation, the operating system
assigns an absolute generation number and a version number of V00 to represent that
generation. The absolute generation number assigned depends on the number last assigned
and the value of the relative generation number that you are now specifying. For example, if
in a previous job generation, A.B.C.G0006V00 was the last generation cataloged, and you
specify A.B.C(+1), the generation now cataloged is assigned the number G0007V00.
Though any positive relative generation number can be used, a number greater than 1 can
cause absolute generation numbers to be skipped for a new generation data set. For
example, if you have a single step job and the generation being cataloged is a +2, one
generation number is skipped. However, in a multiple step job, one step might have a +1 and
a second step a +2, in which case no numbers are skipped. The mapping between relative
and absolute numbers is kept until the end of the job.
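For illustration, a job step that creates and catalogs the next generation of A.B.C might look like the following sketch (the program, space, and DCB attributes are assumptions; an SMS-managed GDS is assumed, otherwise a model DSCB would be referenced):
//NEWGEN   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//*  (+1) catalogs the next generation, for example A.B.C.G0007V00
//SYSUT2   DD DSN=A.B.C(+1),DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,5)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)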
Figure 4-32 (schematic): partitioned data set PO.DATA.SET, whose directory entries A, B, and C point to the corresponding members.
In a partitioned organized data set, the “books” are called members, and to locate them, they
are pointed to by entries in a directory, as shown in Figure 4-32.
The members are individual sequential data sets and can be read or written sequentially,
once they have been located via the directory. Then the records of a given member are written
or retrieved sequentially.
Partitioned data sets can only exist on DASD. Each member has a unique name, one to eight
characters in length, and is stored in a directory that is part of the data set.
The main advantage of using a PO data set is that, without searching the entire data set, you
can retrieve any individual member after the data set is opened. For example, in a program
library (always a partitioned data set) each member is a separate program or subroutine. The
individual members can be added or deleted as required.
The PDSE improvements described in the following topics maintain almost total compatibility,
at the program level and the user level, with the old PDS.
If your data set is large, or if you expect to update it extensively, it might be best to allocate a
large space. A PDS cannot occupy more than 65,535 tracks and cannot extend beyond one
volume. If your data set is small or is seldom changed, let SMS calculate the space
requirements to avoid wasted space or wasted time used for recreating the data set.
Space for the directory is expressed in 256 byte blocks. Each block contains from 3 to 21
entries, depending on the length of the user data field. If you expect 200 directory entries,
request at least 10 blocks. Any unused space on the last track of the directory is wasted
unless there is enough space left to contain a block of the first member.
The system allocates five cylinders to the data set, of which ten 256-byte records are for a
directory. Because the CONTIG subparameter is coded, the system allocates the five
cylinders contiguously on the volume. The secondary allocation of two cylinders is used when
the data set needs to expand beyond the five-cylinder primary allocation.
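A DD statement consistent with this description might look like the following sketch (the data set name, unit, and volume are assumptions):
//*  5 cylinders primary, 2 secondary, 10 directory blocks, contiguous
//PDSDD    DD DSN=MY.SAMPLE.PDS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,VOL=SER=VOL001,
//            SPACE=(CYL,(5,2,10),,CONTIG)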
Figure (PDSE creation and conversion): a PDSE can be created through an SMS data class construct or by coding DSNTYPE=LIBRARY on a DD statement; existing PDSs can also be converted to PDSEs.
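As a sketch (names are assumptions; a data class that implies DSNTYPE=LIBRARY would have the same effect), a PDSE could be allocated like this:
//PDSEDD   DD DSN=MY.SAMPLE.PDSE,DISP=(NEW,CATLG),
//            DSNTYPE=LIBRARY,
//            SPACE=(CYL,(5,5,10)),
//            RECFM=FB,LRECL=80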
New directory pages are added, interleaved with the member pages, as new directory entries
are required. A PDSE always occupies at least five pages of storage.
The directory is like a KSDS index structure (KSDS is covered in “VSAM key sequenced
cluster (KSDS)” on page 169), making a search much faster. It cannot be overwritten by being
opened for sequential output.
If you try to add a member with DCB characteristics that differ from those of the rest of the
members, you will get an error.
Restriction: You cannot use a PDSE for certain system data sets that are opened in the
IPL/NIP time frame.
PDSE enhancements
Recent enhancements have made PDSEs more reliable and available, correcting a few
problems that caused some IPLs due to a hang, deadlock, or out-of-storage condition.
Originally, in order to implement PDSE, two system address spaces were introduced:
SMXC, in charge of PDSE serialization.
SYSBMAS, the owner of the data space and hiperspace buffering.
z/OS V1R6 combines SMXC and SYSBMAS to a single address space called SMSPDSE.
This improves overall PDSE usability and reliability by:
Reducing excessive ECSA usage (by moving control blocks into the SMSPDSE address
space)
Reducing re-IPLs due to system hangs in failure or CANCEL situations
Providing storage administrators with tools for monitoring and diagnosis through VARY
SMS,PDSE,ANALYSIS command (for example, determining which systems are using a
particular PDSE)
However, the SMSPDSE address space is usually non-restartable, because PDSE data sets
may be permanently allocated in the LNKLST concatenation. Any hang condition could
therefore cause an unplanned IPL. To address this, a new, restartable address space,
SMSPDSE1, was introduced; it is in charge of all allocated PDSEs except the ones in the LNKLST.
You can convert the entire data set or individual members, and also back up and restore
PDSEs. By using the DFSMSdss COPY function with the CONVERT and PDS keywords, you
can convert a PDSE back to a PDS. This is especially useful if you need to prepare a PDSE
for migration to a site that does not support PDSEs. When copying members from a PDS load
module library into a PDSE program library, or vice versa, the system invokes the program
management binder component.
Converting PDSs to PDSEs is beneficial, but be aware that certain data sets are unsuitable
for conversion to, or allocation as, PDSEs because the system does not retain the original
block boundaries.
Using DFSMSdss
In Figure 4-36, the DFSMSdss COPY example converts all PDSs with the high-level qualifier
of “MYTEST” on volume SMS001 to PDSEs with the high-level qualifier of “MYTEST2”.
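A sketch of such a COPY step follows (the exact keywords of Figure 4-36 may differ; the rename filter and volume selection shown here are assumptions):
//DSSCONV  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(MYTEST.**)) -
       LOGINDYNAM((SMS001)) -
       CONVERT(PDSE(**)) -
       RENAMEUNCONDITIONAL((MYTEST.**,MYTEST2.**))
/*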
Using IEBCOPY
To copy one or more specific members using IEBCOPY, as shown in Figure 4-36 on
page 155, use the SELECT control statement. In this example, IEBCOPY copies members A,
B, and C from USER.PDS.LIBRARY to USER.PDSE.LIBRARY.
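A sketch of that IEBCOPY step (DD names are assumptions) follows:
//COPYSTEP EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INPDS    DD DSN=USER.PDS.LIBRARY,DISP=SHR
//OUTPDSE  DD DSN=USER.PDSE.LIBRARY,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUTPDSE,INDD=INPDS
  SELECT MEMBER=(A,B,C)
/*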
For more information about DFSMSdss, see z/OS DFSMSdss Storage Administration Guide,
SC35-0423, and z/OS DFSMSdss Storage Administration Reference, SC35-0424.
The binder
The binder is the program that processes the output of language translators and compilers
into an executable program (load module or program object). It replaced the linkage editor.
The program management loader increases the services of the program fetch component by
adding support for loading program objects. The program management loader reads both
program objects and load modules into virtual storage and prepares them for execution. It
relocates any address constants in the program to point to the appropriate areas in virtual
storage and supports 24-bit, 31-bit, and 64-bit addressing ranges. All program objects loaded
from a PDSE are page-mapped into virtual storage. When loading program objects from a
PDSE, the loader selects a loading mode based on the module characteristics and
parameters specified to the binder when you created the program object. You can influence
the mode with the binder FETCHOPT parameter. The FETCHOPT parameter allows you to
select whether the program is completely preloaded and relocated before execution, or
whether pages of the program can be read into virtual storage and relocated only when they
are referenced during execution.
IEWTPORT utility
The transport utility (IEWTPORT) is a program management service with very specific and
limited function. It obtains (via the binder) a program object from a PDSE and converts it into
a transportable program file in a sequential (nonexecutable) format. It also reconstructs the
program object from a transportable program file and stores it back into a PDSE (through the
binder).
Access methods
An access method defines the technique that is used to store and retrieve data. Access
methods have their own data set structures to organize data, macros to define and process
data sets, and utility programs to process data sets. Access methods are identified primarily
by the data set organization. For example, use the basic sequential access method (BSAM)
or queued sequential access method (QSAM) with sequential data sets. However, there are
times when an access method identified with one organization can be used to process a data
set organized in a different manner.
Physical sequential
There are two sequential access methods, basic sequential access method (BSAM) and
queued sequential access method (QSAM) and just one sequential organization. Both
methods access data organized in a physical sequential manner; the physical records
(containing logical records) are stored sequentially in the order in which they are entered.
An important performance item in sequential access is buffering. If you allow enough buffers,
QSAM is able to minimize the number of SSCHs by packaging together in the same I/O
operation (through CCW command chaining) the data transfer of many physical blocks. This
function decreases considerably the total amount of I/O connect time. Another key point is the
look-ahead function for reads, that is, reading in advance records that are not yet required by
the application program.
Extended format data sets must be SMS-managed and must reside on DASD. You cannot
use an extended format data set for certain system data sets.
Programs can also access the information in HFS files through the MVS BSAM, QSAM, and
VSAM (Virtual Storage Access Method) access methods. When using BSAM or QSAM, an
HFS file is simulated as a multi-volume sequential data set. When using VSAM, an HFS file is
simulated as an ESDS. HFS data sets are:
Supported by standard DADSM create, rename, and scratch
Supported by DFSMShsm for dump/restore and migrate/recall if DFSMSdss is used as
the data mover
Not supported by IEBCOPY or the DFSMSdss COPY function
QSAM arranges records sequentially in the order that they are entered to form sequential
data sets, which are the same as those data sets that BSAM creates. The system organizes
records with other records into blocks. QSAM anticipates the need for records based on their order. To
improve performance, QSAM reads these records into storage before they are requested.
This is called queued access. You can use QSAM with the following data types:
Sequential data sets
Basic format sequential data sets before z/OS V1R7, which were known as sequential
data sets or more accurately as non-extended-format sequential data sets
Large format sequential data sets
Extended-format data sets
z/OS UNIX files
Figure (VSAM data set types): VSAM record management and catalog management support KSDS, ESDS, LDS, and RRDS (fixed-length and variable-length) data sets.
z/OS UNIX files can be accessed as though they are VSAM entry-sequenced data sets
(ESDS). Although UNIX files are not actually stored as entry-sequenced data sets, the
system attempts to simulate the characteristics of such a data set. To identify or access a
UNIX file, specify the path that leads to it.
Any type of VSAM data set can be in extended format. Extended-format data sets have a
different internal storage format than data sets that are not extended. This storage format
gives extended-format data sets additional usability characteristics and possibly better
performance due to striping. You can choose that an extended-format key-sequenced data
set be in the compressed format. Extended-format data sets must be SMS managed. You
cannot use an extended-format data set for certain system data sets.
The VSAM terms described in the following sections are: logical record, physical record, control interval, control area, component, cluster, and sphere.
Logical record
A logical record is a unit of application information used to store data in a VSAM cluster. The
logical record is designed by the application programmer from the business model. The
application program, through a GET, requests that a specific logical record be moved from
the I/O device to memory in order to be processed. Through a PUT, the specific logical record
is moved from memory to an I/O device. A logical record can be of a fixed size or a variable
size, depending on the business requirements.
The logical record is divided into fields by the application program, such as the name of the
item, code, and so on. One or more contiguous fields can be defined as a key field to VSAM,
and a specific logical record can be retrieved directly by its key value.
Logical records of VSAM data sets are stored differently from logical records in non-VSAM
data sets.
Physical record
A physical record is device-dependent and is a set of logical records moved during an I/O
operation by just one CCW (Read or Write). VSAM calculates the physical record size in
order to optimize the track space (to avoid many gaps) at the time the data set is defined. All
physical records in VSAM have the same length. A physical record is also referred to as a
physical block or simply a block. VSAM may have control information along with logical
records in a physical record.
Component
A component in systems with VSAM is a named, cataloged collection of stored records, such
as the data component or index component of a key-sequenced file or alternate index. A
component is a set of CAs. It is the VSAM terminology for an MVS data set. A component has
an entry in the VTOC. An example of a component can be the data set containing only data
for a KSDS VSAM organization.
Cluster
A cluster is a named structure consisting of a group of related components. VSAM data sets
can be defined with either the DEFINE CLUSTER command or the ALLOCATE command. The
cluster is a set of components that have a logical binding between them. For example, a
KSDS cluster is composed of the data component and the index component. The concept of
cluster was introduced to make the JCL to access VSAM more flexible. If you want to access
a KSDS normally, just use the cluster’s name on a DD card. Otherwise, if you want some
special processing with just the data, use the data component name on the DD card.
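For illustration, a minimal KSDS definition might look like the following sketch (the names, key length, record sizes, and space values are assumptions):
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* KEYS(10 0): a 10-byte key at offset 0 of each logical record */
  DEFINE CLUSTER(NAME(MY.SAMPLE.KSDS) -
         INDEXED -
         KEYS(10 0) -
         RECORDSIZE(80 200) -
         CYLINDERS(5 1)) -
       DATA(NAME(MY.SAMPLE.KSDS.DATA)) -
       INDEX(NAME(MY.SAMPLE.KSDS.INDEX))
/*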
Sphere
A sphere is a VSAM cluster and its associated data sets. The cluster is originally defined with
the access method services ALLOCATE command, the DEFINE CLUSTER command, or through
JCL. The most common use of the sphere is to open a single cluster. The base of the sphere
is the cluster itself.
Figure (control interval format): contiguous logical records LR1, LR2, ... LRn of the same size, followed by free space, then record definition fields (RDFs, 3 bytes each) and a control interval definition field (CIDF, 4 bytes) at the end of the CI.
Based on the CI size, VSAM calculates the best size of the physical block in order to better
use the 3390/3380 logical track. The CI size can be from 512 bytes to 32 KB. The CI contents
depend on the cluster organization. A KSDS CI consists of:
Logical records stored from the beginning to the end of the CI.
Free space, for data records to be inserted into or lengthened.
Control information, which is made up of two types of fields:
– One control interval definition field (CIDF) per CI. The CIDF is a 4-byte field that contains
information about the amount and location of free space in the CI.
– Record definition fields (RDFs), 3 bytes each, that describe the length of the logical records.
The size of CIs can vary from one component to another, but all the CIs within the data or
index component of a particular cluster data set must be of the same length. The CI
components and properties may vary, depending on the data set organization. For example,
an LDS does not contain CIDFs and RDFs in its CI. All of the bytes in the LDS CI are data
bytes.
Spanned records
Spanned records are logical records that are larger than the CI size. They are needed when
the application requires very long logical records. To have spanned records, the file must be
defined with the SPANNED attribute at the time it is created. Spanned records are allowed to
extend across or “span” control interval boundaries, but not beyond control area limits. The
RDFs describe whether the record is spanned or not.
A spanned record always begins on a control interval boundary, and fills one or more control
intervals within a single control area. A spanned record does not share the CI with any other
records; in other words, the free space at the end of the last segment is not filled with the next
record. This free space is only used to extend the spanned record.
A control area (CA) is a group of control intervals; CAs are needed to implement the concept of splits. The size of a VSAM data set is always a multiple of the CA size, and VSAM files are extended in units of CAs.
Splits
CI splits and CA splits occur as a result of data record insertions (or increasing the length of
an already existing record) in KSDS and VRRDS organizations. If a logical record is to be
inserted (in key sequence) and there is not enough free space in the CI, the CI is split.
Approximately half the records in the CI are transferred to a free CI provided in the CA, and
the record to be inserted is placed in the original CI.
If there are no free CIs in the CA and a record is to be inserted, a CA split occurs. Half the CIs
are sent to the first available CA at the end of the data component. This movement creates
free CIs in the original CA, then the record to be inserted causes a CI split.
Figure (KSDS cluster): the index component is made up of an index set and a sequence set; sequence set entries point to the data component CIs, which contain the logical records in key sequence.
Data component
The data component is the part of a VSAM cluster, alternate index, or catalog that contains
the data records. All VSAM cluster organizations have the data component.
Index component
The index component is a collection of records containing data keys and pointers (relative
byte address, or RBA). The data keys are taken from a fixed defined field in each data logical
record. The keys in the index logical records are compressed (rear and front). The RBA
pointers are compacted. Only KSDS and VRRDS VSAM data set organizations have the
index component.
Using the index, VSAM is able to retrieve a logical record from the data component when a
request is made randomly for a record with a certain key. A VSAM index can consist of more
than one level (binary tree). Each level contains pointers to the next lower level. Because
there are random and sequential types of access, VSAM divides the index component into
two parts: the sequence set, and the index set.
Index set
The records in all levels of the index above the sequence set are called the index set. An entry
in an index set logical record consists of the highest possible key in an index record in the
next lower level, and a pointer to the beginning of that index record. The highest level of the
index always contains a single index CI.
The structure of VSAM prime indexes is built to create a single index record at the lowest
level of the index. If there is more than one sequence-set-level record, VSAM automatically
builds another index level.
Cluster
A cluster is the combination of the data component (data set) and the index component (data
set) for a KSDS. The cluster provides a way to treat index and data components as a single
component with its own name. Use of the word cluster instead of data set is recommended.
The records in the AIX index component contain the alternate key and the RBA pointing to
the alternate index data component. The records in the AIX data component contain the
alternate key value itself and all the primary keys corresponding to the alternate key value
(pointers to data in the base cluster). The primary keys in the logical record are in ascending
sequence within an alternate index value.
Any field in the base cluster record can be used as an alternate key. It can also overlap the
primary key (in a KSDS), or any other alternate key. The same base cluster may have several
alternate indexes, each with a different alternate key. There may be more than one primary
key value for the same alternate key value. For example, the primary key might be an
employee number and the alternate key might be the department name; obviously, the same
department name may be associated with several employee numbers.
An AIX cluster is created with the IDCAMS DEFINE ALTERNATEINDEX command and is then
populated via the BLDINDEX command. Before a base cluster can be accessed through an alternate
index, a path must be defined. A path provides a way to gain access to the base data through
a specific alternate index. To define a path, use the DEFINE PATH command. The utility to
issue this command is discussed in “Access method services (IDCAMS)” on page 135.
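As a sketch (the data set names, alternate key position, and space values are assumptions), the sequence could look like this:
//DEFAIX   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Alternate key: 20 bytes at offset 10 of the base cluster record */
  DEFINE ALTERNATEINDEX(NAME(PAY.EMP.AIX) -
         RELATE(PAY.EMP.KSDS) -
         KEYS(20 10) -
         NONUNIQUEKEY -
         UPGRADE -
         CYLINDERS(2 1))
  DEFINE PATH(NAME(PAY.EMP.PATH) -
         PATHENTRY(PAY.EMP.AIX))
  BLDINDEX INDATASET(PAY.EMP.KSDS) OUTDATASET(PAY.EMP.AIX)
/*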
Sphere
A sphere is a VSAM cluster and its associated alternate index (AIX) clusters.
The key field must be contiguous and each key’s contents must be unique. After it is
specified, the value of the key cannot be altered, but the entire record may be deleted.
When a new record is added to the data set, it is inserted in its logical collating sequence by
key.
A KSDS has a data component and an index component. The index component keeps track
of the used keys and is used by VSAM to retrieve a record from the data component quickly
when a request is made for a record with a certain key.
A KSDS can be accessed in sequential mode, direct mode, or skip sequential mode
(meaning that you process sequentially, but directly skip some portions of the data set).
Figure (KSDS processing): the index set and sequence set of the index component point to the data CIs, which are grouped into control areas; an application request is resolved through the index down to the CI that holds the record.
When initially loading a KSDS data set, records must be presented to VSAM in key
sequence. This loading can be done through the IDCAMS VSAM utility named REPRO. The
index for a key-sequenced data set is built automatically by VSAM as the data set is loaded
with records.
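For illustration (data set names are assumptions), such an initial load could be done with:
//LOADKSDS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Input records must already be in ascending key sequence */
  REPRO INDATASET(MY.INPUT.SEQ) -
        OUTDATASET(MY.SAMPLE.KSDS)
/*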
When a data CI is completely loaded with logical records, free space, and control information,
VSAM makes an entry in the index. The entry consists of the highest possible key in the data
control interval and a pointer to the beginning of that control interval.
When accessing records sequentially, VSAM refers only to the sequence set. It uses a
horizontal pointer to get from one sequence set record to the next record in collating
sequence.
If VSAM does not find a record with the desired key, the application receives a return code
indicating that the record was not found.
Figure (ESDS structure): records 1 through 10 stored in CIs 1 to 4, which start at RBA 0, 4096, 8192, and 12288; each CI also holds unused space, RDFs, and a CIDF.
Records can be accessed sequentially or directly by relative byte address (RBA). When a
record is loaded or added, VSAM indicates its relative byte address (RBA). The RBA is the
offset of the first byte of the logical record from the beginning of the data set. The first record
in a data set has an RBA of 0; the second record has an RBA equal to the length of the first
record, and so on. The RBA of a logical record depends only on the record's position in the
sequence of records. The RBA is always expressed as a full-word binary integer.
Although an entry-sequenced data set does not contain an index component, alternate
indexes are allowed. You can build an alternate index with keys to keep track of these RBAs.
Figure 4-46 Typical ESDS processing: an application issuing GET NEXT retrieves records in entry sequence from the CIs at RBA 0, 4096, 8192, and 12288.
Existing records can never be deleted. If the application wants to delete a record, it must flag
that record as inactive. As far as VSAM is concerned, the record is not deleted. Records can
be updated, but their length cannot change.
The ESDS organization is suited for sequential processing of variable-length records, where
only a few accesses need direct (random) retrieval by key (through an AIX cluster).
Figures (RRDS structure and typical RRDS processing): fixed-length slots 1 through 40 are arranged in CIs within control areas; when an application issues GET for record 26, VSAM computes the location of slot 26 from the relative record number and retrieves it directly.
Figure (LDS structure): control areas made up of CIs that contain only data; there is no control information in an LDS CI.
IDCAMS is used to define a linear data set. An LDS has only a data component. An LDS data
set is just a physical sequential VSAM data set comprised of 4 KB physical records, but with a
revolutionary buffer technique called data-in-virtual (DIV).
A linear data set is processed as an entry-sequenced data set, with certain restrictions.
Because a linear data set does not contain control information, it cannot be accessed as
though it contained individual records. You can access a linear data set with the DIV macro. If
using DIV to access the data set, the control interval size must be 4096; otherwise, the data
set will not be processed.
When a linear data set is accessed with the DIV macro, it is referred to as the data-in-virtual
object or the data object.
For information about how to use data-in-virtual, see z/OS MVS Programming: Assembler
Services Guide, SA22-7605.
Data-in-virtual (DIV)
You can access a linear data set using these techniques:
VSAM
DIV, if the control interval size is 4096 bytes. The data-in-virtual (DIV) macro provides
access to VSAM linear data sets.
Window services, if the control interval size is 4096 bytes.
Data-in-virtual (DIV) is an optional and unique buffering technique used for LDS data sets.
Application programs can use DIV to map a data set (or a portion of a data set) into an
address space, a data space, or a hiperspace. An LDS cluster is sometimes referred to as a
DIV object. After the environment is set, the LDS cluster looks to the application like a table
in virtual storage, with no need to issue I/O requests.
Data is read into central storage via the paging algorithms only when that block is actually
referenced. During RSM™ page-steal processing, only changed pages are written to the
cluster in DASD. Unchanged pages are discarded since they can be retrieved again from the
permanent data set.
DIV is designed to improve the performance of applications that process large files
non-sequentially and process them with significant locality of reference. It reduces the
number of I/O operations that are traditionally associated with data retrieval. Likely
candidates are large arrays or table files.
Figure (DIV mapping): a window in an address space, data space, or hiperspace maps a span of 4 KB blocks of the LDS, starting at a given offset in the data object.
No actual I/O is done until the program references the data in the window. The reference will
result in a page fault which causes data-in-virtual services to read the data from the linear
data set into the window.
DIV SAVE can be used to write out changes to the data object. DIV RESET can be used to
discard changes made in the window since the last SAVE operation.
The objective of a buffer pool is to avoid I/O operations in random accesses (due to re-visiting
data) and to make these I/O operations more efficient in sequential processing, thereby
improving performance.
For more efficient use of virtual storage, buffer pools can be shared among clusters using
locally or globally shared buffer pools. There are four types of resource pool management,
called modes, defined according to the technique used to manage them:
Not shared resources (NSR)
Local shared resources (LSR)
Global shared resources (GSR)
Record-level shared resources (RLS)
These modes can be declared in the ACB macro of the VSAM data set (MACRF keyword)
and are described in the following section.
Figure (shared resource pool): a user ACB specifying MACRF=(LSR,NUB) shares a pool of index and data buffers with other data sets in the same address space.
NSR is used by high-level languages. Since buffers are managed via a sequential algorithm,
NSR is not the best choice for random processing. For applications using NSR, consider
using system-managed buffering, discussed in “VSAM: System-managed buffering (SMB)”
on page 182.
GSR is not commonly used by applications, so you should consider the use of VSAM RLS
instead.
For more information about NSR, LSR, and GSR, refer to “Base VSAM buffering” on
page 356 and also to the IBM Redbooks document VSAM Demystified, SG24-6105.
Usually, SMB allocates many more buffers than would be allocated without SMB.
Performance improvements can be dramatic with random access (particularly when few
buffers were available). The use of SMB is transparent from the point of view of the
application; no application changes are needed.
SMB is available to a data set when all the following conditions are met:
It is an SMS-managed data set.
It is in extended format (DSNTYPE = EXT in the data class).
The application opens the data set for NSR processing.
SMB needs information about the expected type of processing so it can maintain an adequate
algorithm for managing the CIs in the resource pool. SMB accepts the ACB MACRF options in
effect when the I/O operation is requested. For this reason, the installation must accurately
specify the processing type, through the ACCBIAS options:
Direct Optimized (DO): SMB optimizes for totally random record access. When this
technique is used, VSAM changes the buffering management from NSR to LSR.
Direct Weighted (DW): The majority is direct access to records, with some sequential.
Sequential Optimized (SO): Totally sequential access.
Sequential Weighted (SW): The majority is sequential access, with some direct access to
records.
When SYSTEM is used in JCL or in the data class, SMB chooses the processing technique
based on the MACRF parameter of the ACB.
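For illustration (the DD and data set names are assumptions), Direct Optimized processing could be requested on the DD statement as follows:
//VSAMDD   DD DSN=MY.SAMPLE.KSDS,DISP=SHR,
//            AMP=('ACCBIAS=DO')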
For more information about the use of SMB, refer to VSAM Demystified, SG24-6105.
VSAM enhancements
Following is a list of the major VSAM enhancements since DFSMS V1R2. For the majority of
these functions extended format is a prerequisite. The enhancements are:
Data compression for KSDS: Good for improving I/O mainly for write-once, read-many
clusters.
Extended addressability: Allows data components larger than 4 GB. The limitation was
caused by the 4-byte RBA field; with extended addressability, the RBA is 8 bytes long.
Record-level sharing (RLS): Allows VSAM data sharing across z/OS systems in a Parallel
Sysplex.
System-managed buffering (SMB): Improves the performance of random NSR
processing.
Data striping and multi-layering: Improves sequential access performance due to parallel
I/Os to several volumes (stripes).
DFSMS data set separation: Allows the allocation of clusters in distinct physical control
units.
Free space release: As for non-VSAM data sets, the free space not used at the end of the
data component can be released at de-allocation.
DFSORT, together with DFSMS and RACF, forms the strategic product base for the evolving
system-managed storage environment. DFSORT is designed to optimize the efficiency and
speed with which operations are completed through synergy with processor, device, and
system features (for example, memory objects, Hiperspace™, data space, striping,
compression, extended addressing, DASD and tape device architecture, processor memory,
and processor cache).
DFSORT example
The simple example in Figure 4-56 illustrates how DFSORT merges data sets by combining
two or more files of sorted records to form a single data set of sorted records.
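A sketch of such a merge follows (data set names and the key position are assumptions, not the content of Figure 4-56):
//MERGE    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN01 DD DSN=MY.SORTED.FILE1,DISP=SHR
//SORTIN02 DD DSN=MY.SORTED.FILE2,DISP=SHR
//SORTOUT  DD DSN=MY.MERGED.FILE,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),
//            RECFM=FB,LRECL=80
//SYSIN    DD *
  MERGE FIELDS=(1,10,CH,A)
/*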
You can use DFSORT to do simple application tasks such as alphabetizing a list of names, or
you can use it to aid complex tasks such as taking inventory or running a billing system. You
can also use DFSORT's record-level editing capability to perform data management tasks.
For most of the processing done by DFSORT, the whole data set is affected. However, some
forms of DFSORT processing involve only certain individual records in that data set.
DFSORT has utilities such as ICETOOL, which is a multipurpose DFSORT utility that uses
the capabilities of DFSORT to perform multiple operations on one or more data sets in a
single step.
Specifying the DFSORT customization parameters is a very important task for z/OS system
programmers. Depending on these parameters, DFSORT may use a large amount of system
resources, such as CPU, I/O, and especially virtual storage. Uncontrolled use of virtual
storage may cause IPLs due to the lack of available slots in page data sets. Plan to use the
IEFUSI z/OS exit to control products such as DFSORT.
For articles, online books, news, tips, techniques, examples, and more, visit the z/OS
DFSORT home page:
http://www-1.ibm.com/servers/storage/support/software/sort/mvs
Figure (z/OS Network File System): the z/OS NFS server and the z/OS NFS client connect MVS data sets and z/OS UNIX hierarchical file system files, through TCP/IP, to other NFS clients and servers such as AIX, HP/UX, and Sun Solaris systems.
With the NFS server, you can remotely access z/OS conventional data sets or z/OS UNIX
files from workstations, personal computers, and other systems that run NFS client software.
The z/OS NFS server acts as an intermediary to read, write, create, or delete z/OS UNIX files
and MVS data sets that are maintained on an MVS host system. The remote MVS data sets
or z/OS UNIX files are mounted from the host processor to appear as local directories and
files on the client system.
With the NFS client, you can give basic sequential access method (BSAM), queued
sequential access method (QSAM), virtual storage access method (VSAM), and z/OS UNIX
users and applications transparent access to data on systems that support the Sun NFS
version 2 and version 3 protocols.
Other client platforms should work as well since NFS version 4 is an industry standard
protocol, but they have not been tested by IBM.
NFS client software for other IBM platforms is available from other vendors. You can also
access the NFS server from non-IBM clients that use the NFS version 2 or version 3 protocol,
including:
DEC stations running DEC ULTRIX version 4.4
HP 9000 workstations running HP/UX version 10.20
Sun PC-NFS version 5
Sun workstations running SunOS™ or Sun Solaris versions 2.5.3
For further information about NFS, refer to z/OS Network File System Guide and Reference,
SC26-7417, and visit:
http://www-1.ibm.com/servers/eserver/zseries/zos/nfs/
DFSMS Optimizer uses input data from several sources in the system and processes it using
an extract program that merges the data and builds the Optimizer database.
By specifying different filters you can produce reports that help you build a detailed storage
management picture of your enterprise. With the report data, you can use the charting facility
to produce color charts and graphs.
The DFSMS Optimizer provides analysis and simulation information for both SMS and
non-SMS users. The DFSMS Optimizer can help you maximize storage use and minimize
storage costs. It provides methods and facilities for you to:
Monitor and tune DFSMShsm functions such as migration and backup
Create and maintain a historical database of system and data activity
For more information about the DFSMS Optimizer, refer to DFSMS Optimizer User’s Guide
and Reference, SC26-7047 or visit:
http://www-1.ibm.com/servers/storage/software/opt/
Figure 4-59 DFSMSdss backing up and restoring volumes and data sets (figure not reproduced here).
Note: Like devices have the same track capacity and number of tracks per cylinder (for
example, 3380 Model D, Model E, and Model K). Unlike DASD devices have different
track capacities (for example, 3380 and 3390), a different number of tracks per cylinder,
or both.
Figure (schematic): choosing between physical and logical DFSMSdss processing.
During a restore operation, the data is processed the same way it is dumped because
physical and logical dump tapes have different formats. If a data set is dumped logically, it is
restored logically; if it is dumped physically, it is restored physically. A data set restore
operation from a full volume dump is a physical data set restore operation.
Figure (logical dump): data set ABC.FILE on volume VOLABC is located through the user catalog and dumped logically to dump data set DUMP01.
Logical processing
A logical copy, dump, or restore operation treats each data set and its associated information
as a logical entity, and processes an entire data set before beginning the next one.
Each data set is moved by tracks from the source device and is potentially written to the
target device as a set of data records, allowing data movement between devices with
different track and cylinder configurations. Checking of data record consistency is not
performed during the dump operation.
Catalogs and VTOCs are used to select data sets for logical processing. If you do not specify
input volumes, the catalogs are used to select data sets for copy and dump operations. If you
specify input volumes using the LOGINDDNAME, LOGINDYNAM, or STORGRP keywords
on the COPY or DUMP command, DFSMSdss uses VTOCs to select data sets for
processing.
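For illustration (data set and DD names are assumptions), a catalog-driven logical dump could look like this:
//DSSDUMP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DUMPOUT  DD DSN=BACKUP.ABC.DUMP01,DISP=(NEW,CATLG),
//            UNIT=TAPE,LABEL=(1,SL)
//SYSIN    DD *
  /* No input volumes specified, so the catalog selects the data sets */
  DUMP DATASET(INCLUDE(ABC.**)) -
       OUTDDNAME(DUMPOUT)
/*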
Figure (physical dump): a DUMP FULL of volume CACSW3 writes the entire volume to dump data set DUMP01.
Physical processing
Physical processing moves data based on physical track images. Because data movement is
carried out at the track level, only target devices with track sizes equal to those of the source
device are supported. Physical processing operates on volumes, ranges of tracks, or data
sets. For data sets, it relies only on volume information (in the VTOC and VVDS) for data set
selection, and processes only that part of a data set residing on the specified input volumes.
Attention: Take care when invoking the TRACKS keyword with the COPY and RESTORE
commands. The TRACKS keyword should be used only for a data recovery operation.
For example, you can use it to “repair” a bad track in the VTOC or a data set, or to
retrieve data from a damaged data set. You cannot use it in place of a full-volume or a
logical data set operation. Doing so could destroy a volume or impair data integrity.
You specify the data set keyword on the DUMP command and input volumes with the
INDDNAME or INDYNAM parameter. This produces a physical data set dump.
A physical data set restore is performed when the RESTORE command is executed against
input that was created by a physical dump operation.
Figure 4-63 DFSMSdss stand-alone services
Stand-alone services can perform either a full-volume restore or a tracks restore from dump
tapes produced by DFSMSdss or DFDSS and offers the following benefits:
Provides user-friendly commands to replace the previous control statements
Supports IBM 3494 and 3495 Tape Libraries, and 3590 Tape Subsystems
Supports IPLing from a DASD volume, in addition to tape and card readers
Allows you to predefine the operator console to be used during stand-alone services
processing
For detailed information about the stand-alone service, and other DFSMSdss information,
refer to z/OS DFSMSdss Storage Administration Reference, SC35-0424, and z/OS
DFSMSdss Storage Administration Guide, SC35-0423 and visit:
http://www-1.ibm.com/servers/storage/software/sms/dss/
Figure (DFSMShsm functions): availability management (automatic backup, incremental backup) and space management.
Availability management is used to make data available by automatically copying new and
changed data sets to backup volumes.
Space management is used to manage DASD space by enabling inactive data sets to be
moved off fast-access storage devices, thus creating free space for new allocations.
DFSMShsm also provides for other supporting functions that are essential to your
installation's environment.
For further information about DFSMShsm, refer to z/OS DFSMShsm Storage Administration
Guide, SC35-0421 and z/OS DFSMShsm Storage Administration Reference, SC35-0422,
and visit:
http://www-1.ibm.com/servers/storage/software/sms/hsm/
Figure (availability management): DFSMShsm backup functions process both SMS-managed storage groups and non-SMS-managed primary and secondary volumes, using the user catalog and the DFSMShsm control data sets, and write backup data sets to DASD or tape.
Availability management
DFSMShsm backs up your data—automatically or by command—to ensure availability if
accidental loss of the data sets or physical loss of volumes should occur. DFSMShsm also
allows the storage administrator to copy backup and migration tapes, and to specify that
copies be made in parallel with the original. You can store the copies on site as protection
from media damage, or offsite as protection from site damage. DFSMShsm also provides
disaster backup and recovery for user-defined groups of data sets (aggregates) so that you
can restore critical applications at the same location or at an offsite location.
Note: You must also have DFSMSdss to use the DFSMShsm functions.
Availability management ensures that a recent copy of your DASD data set exists. The
purpose of availability management is to ensure that lost or damaged data sets can be
retrieved at the most current possible level. DFSMShsm uses DFSMSdss as a fast data
mover for backups. Availability management automatically and periodically performs
functions that:
1. Copy all the data sets on DASD volumes to tape volumes
2. Copy the changed data sets on DASD volumes (incremental backup) either to other
DASD volumes or to tape volumes
DFSMShsm minimizes the space occupied by the data sets on the backup volume by using
compression and stacking.
Figure (space management): DFSMShsm migrates data sets from SMS-managed storage groups and non-SMS-managed primary volumes to migration level 1 DASD, using the user catalog and the DFSMShsm control data sets.
Space management
Space management is the function of DFSMShsm that allows you to keep DASD space
available for users in order to meet the service level objectives for your system. The purpose
of space management is to manage your DASD storage efficiently. To do this, space
management automatically and periodically performs functions that:
1. Move low activity data sets (using DFSMSdss) from user-accessible volumes to
DFSMShsm volumes
2. Reduce the space occupied by data on both the user-accessible volumes and the
DFSMShsm volumes
DFSMShsm improves DASD space usage by keeping only active data on fast-access
storage devices. It automatically frees space on user volumes by deleting eligible data sets,
releasing overallocated space, and moving low-activity data to lower cost-per-byte devices,
even if the job did not request tape.
It is possible to have more than one z/OS image sharing the same DFSMShsm policy. In this
case one of the DFSMShsm images is the primary host and the others are secondary. The
primary HSM host is identified by HOST= in the HSM startup and is responsible for:
Hourly space checks
During auto backup: CDS backup, backup of ML1 data sets to tape
During auto dump: Expiration of dump copies and deletion of excess dump VTOC copy
data sets
During secondary space management (SSM): Cleanup of MCDS, migration volumes, and
L1-to-L2 migration
If you are running your z/OS HSM images in sysplex (parallel or basic), you can use
secondary host promotion to allow a secondary image to assume the primary image's tasks if
the primary host fails. Secondary host promotion uses XCF status monitoring to execute the
promotion. To indicate a system as a candidate, issue:
SETSYS PRIMARYHOST(YES)
and
SSM(YES)
Figure (storage hierarchy): primary (level 0) volumes, migration level 1 (ML1), and migration level 2 (ML2).
DFSMShsm uses the following three-level storage device hierarchy for space management:
Level 0: DFSMShsm-managed storage devices at the highest level of the hierarchy; these
devices contain data directly accessible to your application.
Level 1 and Level 2: Storage devices at the lower levels of the hierarchy; level 1 and level
2 contain data that DFSMShsm has compressed and optionally compacted into a format
that you cannot use. Devices at this level provide lower cost per byte storage and usually
slower response time. Usually L1 is in a cheaper DASD (or the same cost, but with the
gain of compression) and L2 is on tape.
Note: If you have a DASD controller that compresses data, you can skip level 1 (ML1)
migration because the data in L0 is already compacted/compressed.
Figure (DFSMShsm volume types): level 0, migration level 1, migration level 2, daily backup, spill backup, aggregate backup, fast replication, and dump volumes.
Volume types
DFSMShsm supports the following volume types:
Level 0 (L0) volumes contain data sets that are directly accessible to you and the jobs you
run. DFSMShsm-managed volumes are those L0 volumes that are managed by the
DFSMShsm automatic functions. These volumes must be mounted and online when you
refer to them with DFSMShsm commands.
Migration level 1 (ML1) volumes are DFSMShsm-supported DASD on which DFSMShsm
maintains your data in DFSMShsm format. These volumes are normally permanently
mounted and online. They can be:
Also in z/OS V1R7, a new command V SMS,VOLUME is introduced. It allows you to change the
state of the DFSMShsm volumes without having to change and reactivate the SMS
configuration using ISMF.
Figure (migrate and recall): after 10 days without any access, data set ABC.FILE1 is migrated from a level 0 volume to level 1 under the name HSM.HMIG.ABC.FILE1.T891008.I9012; when it is referenced again, it is recalled from level 1 back to a level 0 volume under its original name.
Automatic recall
Using an automatic recall process returns a migrated data set from an ML1 or ML2 volume to
a DFSMShsm-managed volume. When a user refers to the data set, DFSMShsm reads the
system catalog for the volume serial number. If the volume serial number is MIGRAT,
DFSMShsm finds the migrated data set, recalls it to a DFSMShsm-managed volume, and
updates the catalog. The result of the recall process is a data set that resides on a user
volume in a user readable format. The recall can also be requested by a DFSMShsm
command. Automatic recall returns your migrated data set to a DFSMShsm-managed
volume when you refer to it. The catalog is updated accordingly with the real volser.
Recall returns a migrated data set to a user L0 volume. The recall is transparent and the
application does not need to know that it happened or where the migrated data set resides.
To provide applications with quick access to their migrated data sets, DFSMShsm allows up
to 15 concurrent recall tasks. RMF monitor III shows delays caused by the recall operation.
The MVS allocation routine discovers that the data set is migrated when, while accessing the
catalog, it finds the word MIGRAT instead of the volser.
Command recall
Command recall returns your migrated data set to a user volume when you enter the HRECALL
DFSMShsm command through an ISMF panel or by directly keying in the command.
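For example, a TSO user could recall the data set from the earlier migration example with a command such as (WAIT makes the command synchronous):
  HRECALL 'ABC.FILE1' WAIT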
DFSMSrmm
In your enterprise, you store and manage your removable media in several types of media
libraries. For example, in addition to your traditional tape library (a room with tapes, shelves,
and drives), you might have several automated and manual tape libraries. You probably also
have both onsite libraries and offsite storage locations, also known as vaults or stores.
With the DFSMSrmm functional component of DFSMS, you can manage your removable
media as one enterprise-wide library (single image) across systems. Because of the need for
global control information, these systems must have accessibility to some shared DASD
volumes. DFSMSrmm manages your installation's tape volumes and the data sets on those
volumes. DFSMSrmm also manages the shelves where volumes reside in all locations
except in automated tape library data servers.
DFSMSrmm manages all tape media (such as cartridge system tapes and 3420 reels), as
well as other removable media you define to it. For example, DFSMSrmm can record the
shelf location for optical disks and track their vital record status; however, it does not manage
the objects on optical disks.
Library management
DFSMSrmm can manage the following devices:
A removable media library, which incorporates all other libraries, such as:
– System-managed manual tape libraries
Examples of automated tape libraries include IBM TotalStorage Enterprise Automated Tape
Library (3494) and IBM TotalStorage Virtual Tape Servers (VTS).
Shelf management
DFSMSrmm groups information about removable media by shelves into a central online
inventory, and keeps track of the volumes residing on those shelves. DFSMSrmm can
manage the shelf space that you define in your removable media library and in your storage
locations.
Volume management
DFSMSrmm manages the movement and retention of tape volumes throughout their life
cycle.
For more information about DFSMSrmm, refer to z/OS DFSMSrmm Guide and Reference,
SC26-7404 and z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405,
and visit:
http://www-1.ibm.com/servers/storage/software/sms/rmm/
DFSMSrmm automatically records information about data sets on tape volumes so that you
can manage the data sets and volumes more efficiently. When all the data sets on a volume
have expired, the volume can be reclaimed and reused. You can optionally move volumes
that are to be retained to another location.
DFSMSrmm helps you manage your tape volumes and shelves at your primary site and
storage locations by recording information in a DFSMSrmm control data set.
In the removable media library, you store your volumes in “shelves,” where each volume
occupies a single shelf location. This shelf location is referred to as a rack number in the
DFSMSrmm TSO subcommands and ISPF dialog. A rack number matches the volume’s
external label. DFSMSrmm uses the external volume serial number to assign a rack number
when adding a volume, unless you specify otherwise. The format of the volume serial you
define to DFSMSrmm must be one to six alphanumeric characters. The rack number must be
six alphanumeric or national characters.
You can have several automated tape libraries or manual tape libraries. You use an
installation-defined library name to define each automated tape library or manual tape library
to the system. DFSMSrmm treats each system-managed tape library as a separate location
or destination.
Since z/OS 1.6, a new OPTION command in the EDGRMMxx parmlib member, together with
the VLPOOL command, allows better support for the client/server environment.
z/OS 1.8 DFSMSrmm introduces an option to provide tape data set authorization
independent of the RACF TAPVOL and TAPEDSN. This option allows you to use RACF
generic DATASET profiles for both DASD and tape data sets.
All tape media and drives supported by z/OS are supported in this environment. Using
DFSMSrmm, you can fully manage all types of tapes in a non-system-managed tape library,
including 3420 reels, 3480, 3490, and 3590 cartridge system tapes.
Storage location
Storage locations are not part of the removable media library because the volumes in storage
locations are not generally available for immediate use. A storage location is comprised of
shelf locations that you define to DFSMSrmm. A shelf location in a storage location is
identified by a bin number. Storage locations are typically used to store removable media that
are kept for disaster recovery or vital records. DFSMSrmm manages two types of storage
locations: installation-defined storage locations and DFSMSrmm built-in storage locations.
You can define an unlimited number of installation-defined storage locations, using any
eight-character name for each storage location. Within the installation-defined storage
location, you can define the type or shape of the media in the location. You can also define
the bin numbers that DFSMSrmm assigns to the shelf locations in the storage location. You
can request DFSMSrmm shelf-management when you want DFSMSrmm to assign a specific
shelf location to a volume in the location.
DFSMSrmm helps you manage the movement of your volumes and retention of your data
over their full life, from initial use to the time they are retired from service. Among the
functions DFSMSrmm performs for you are:
Automatically initializing and erasing volumes
Recording information about volumes and data sets as they are used
Expiration processing
Identifying volumes with high error levels that require replacement
To make full use of all of the DFSMSrmm functions, you specify installation setup options and
define retention and movement policies. DFSMSrmm provides you with utilities to implement
the policies you define. Since z/OS 1.7, DFSMSrmm enterprise enablement allows high-level
languages to issue DFSMSrmm commands through Web services.
You can define shelf space in storage locations. When you move volumes to a storage
location where you have defined shelf space, DFSMSrmm checks for available shelf space
and then assigns each volume a place on the shelf if you request it. You can also set up
DFSMSrmm to reuse shelf space in storage locations.
To allow your business to grow efficiently and profitably, you need to find ways to control the
growth of your information systems and use your current storage more effectively.
5.1 Storage management
(Figure: the DFSMS functional components — dfp, dss, hsm, rmm, and tvs — together with ISMF
and devices such as the IBM 3494 and VTS, addressing the storage management disciplines of
availability, space, security, and performance.)
Storage management
Storage management involves data set allocation, placement, monitoring, migration, backup,
recall, recovery, and deletion. These activities can be done either manually or by using
automated processes.
The DFSMS software product, together with hardware products and installation-specific
requirements for data and resource management, comprises the key to system-managed
storage in a z/OS environment.
The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage
administrator defines policies that automate the management of storage and hardware
devices. These policies describe data allocation characteristics, performance and availability
goals, backup and retention requirements, and storage requirements for the system. SMS
governs these policies for the system and the Interactive Storage Management Facility
(ISMF) provides the user interface for defining and maintaining the policies.
(Figure: the DFSMS environment — DFSMS and z/OS, complemented by RACF and DFSORT.)
DFSMS environment
The DFSMS environment consists of a set of hardware and IBM software products which
together provide a system-managed storage solution for z/OS installations.
DFSMS uses a set of constructs, user interfaces, and routines (using the DFSMS products)
that allow the storage administrator to better manage the storage system. The core logic of
DFSMS, such as the Automatic Class Selection (ACS) routines, ISMF code, and constructs,
is located in DFSMSdfp. DFSMShsm and DFSMSdss are involved in the management class
construct.
In this environment, the Resource Access Control Facility (RACF) and Data Facility Sort
(DFSORT) products complement the functions of the base operating system. RACF provides
resource security functions, and DFSORT adds the capability for faster and more efficient
sorting, merging, copying, reporting, and analyzing of business information.
Tape: System-managed storage lets you exploit the device technology of new devices without
having to change the JCL UNIT parameter. In a multi-library environment, you can select the
drive based on the library where the cartridge or volume resides. You can use the IBM
TotalStorage Enterprise Automated Tape Library (3494 or 3495) to automatically mount tape
volumes and manage the inventory in an automated tape library. Similar functionality is
available in a system-managed manual tape library. If you are not using SMS for tape
management, you can still access the IBM TotalStorage Enterprise Automated Tape Library
(3494 or 3495) using Basic Tape Library Storage (BTLS) software.
You can use DFSMShsm to automatically back up your different types of data sets and use
point-in-time copy to maintain access to critical data sets while they are being backed up.
Concurrent copy, virtual concurrent copy, SnapShot, and FlashCopy, along with
backup-while-open, have an added advantage in that they avoid invalidating a backup of a
CICS VSAM KSDS due to a control area or control interval split.
You can also create a logical group of data sets, so that the group is backed up at the same
time to allow recovery of the application defined by the group. This is done with the aggregate
backup and recovery support (ABARS) provided by DFSMShsm.
You can also use system-determined block sizes to automatically reblock physical sequential
and partitioned data sets that can be reblocked.
The policies defined in your installation represent decisions about your resources, such as:
What performance objectives are required by the applications accessing the data
Based on these objectives, you can try to better exploit cache data striping. By tracking
data set I/O activities, you can make better decisions about data set caching policies and
improve overall system performance. For object data, you can track transaction activities
to monitor and improve OAM's performance.
When and how to back up data - incremental or total
Determine the backup frequency, the number of backup versions, and the retention period
by consulting user group representatives. Be sure to consider whether certain data
backups need to be synchronized. For example, if the output data from application A is
used as input for application B, you must coordinate the backups of both applications to
prevent logical errors in the data when they are recovered.
Whether data sets should be kept available for use during backup or copy
The purpose of a backup plan is to ensure the prompt and complete recovery of data. A
well-documented plan identifies data that requires backup, the levels required, responsibilities
for backing up the data, and methods to be used.
(Figure: ACS routines assign the SMS constructs to a data set — data class: what does it look
like? storage class: what is the service level? management class: which are the services?
storage group: where is it placed?)
For example, the administrator can define one storage class for data entities requiring high
performance, and another for those requiring standard performance. Then, the administrator
writes Automatic Class Selection (ACS) routines that use naming conventions or other criteria
of your choice to automatically assign the classes that have been defined to data as that data
is created. These ACS routines can then be validated and tested.
DFSMS facilitates all of these tasks by providing menu-driven panels with the Interactive
Storage Management Facility (ISMF). ISMF panels make it easy to define classes, test and
validate ACS routines, and perform other tasks to analyze and manage your storage. Note
that many of these functions are available in batch through the NaviQuest tool.
(Figure 5-7: how data sets, objects, and DASD, tape, and optical volumes become
system-managed; the numbered notes below explain each case.)
How to be system-managed
Using SMS, you can automate storage management for individual data sets and objects, and
for DASD, optical, and tape volumes. Figure 5-7 shows how a data set, object, DASD volume,
tape volume, or optical volume becomes system-managed. The numbers shown in
parentheses are associated with the following notes:
1. A DASD data set is system-managed if you assign it a storage class. If you do not assign a
storage class, the data set is directed to a non-system-managed DASD or tape volume -
one that is not assigned to a storage group.
2. You can assign a storage class to a tape data set to direct it to a system-managed tape
volume. However, only the tape volume is considered system-managed, not the data set.
3. Objects are also known as byte-stream data, and this data is used in specialized
applications such as image processing, scanned correspondence, and seismic
measurements. Object data typically has no internal record or field structure and, once
written, the data is not changed or updated. However, the data can be referenced many
times during its lifetime. Objects are processed by OAM. Each object has a storage class;
therefore, objects are system-managed. The optical or tape volume on which the object
resides is also system-managed.
4. Tape volumes are added to tape storage groups in tape libraries when the tape data set is
created.
Data class attributes define space and data characteristics that are normally specified on JCL
DD statements, TSO/E ALLOCATE command, IDCAMS DEFINE commands, and dynamic
allocation requests. For tape data sets, data class attributes can also specify the type of
cartridge and recording method, and if the data is to be compacted. Users then need only
specify the appropriate data classes to create standardized data sets.
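For example, a user who knows only the appropriate data class name can code it on the DD
statement and let the data class supply the space and record attributes. The data set name and
the data class name DCSEQ below are invented for illustration:
//REPORT   DD DSN=MY.REPORT.DATA,DISP=(NEW,CATLG),DATACLAS=DCSEQ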
You can override some data set attributes assigned in the data class, but you cannot change
the data class name assigned through an ACS routine.
Note: The data class name is not saved for non-system-managed data sets, although the
allocation attributes in the data class are used to allocate the data set.
For objects on tape, we recommend that you do not assign a data class via the ACS routines.
To assign a data class, specify the name of that data class on the SETOAM command.
If you change a data class definition, the changes only affect new allocations. Existing data
sets allocated with the data class are not changed.
Some of the availability requirements that you specify to storage classes (such as cache and
dual copy) can only be met by DASD volumes attached through one of the following storage
control units or a similar device:
3990-3 or 3990-6
RAMAC Array Subsystem
Enterprise Storage Server (ESS)
DS6000 or DS8000
Figure 5-9 shows storage control unit configurations and their storage class attribute values.
With a storage class, you can assign a data set to dual copy volumes to ensure continuous
availability for the data set. With dual copy, two current copies of the data set are kept on
separate DASD volumes (by the control unit). If the volume containing the primary copy of the
data set is damaged, the companion volume is automatically brought online and the data set
continues to be available and current. Remote copy works in the same way, but the two
volumes are in distinct, generally remote, control units.
You can specify an I/O response time objective in the storage class by using the millisecond
response time (MSR) parameter. During data set allocation, the system attempts to select the
available volume closest to the specified performance objective. Also, throughout the life of the
data set, DFSMS uses the MSR value to dynamically apply cache algorithms such as DASD Fast
Write (DFW) and Inhibit Cache Load (ICL) in order to reach the MSR target I/O response time.
This DFSMS function is called dynamic cache management.
For objects, the system uses the performance goals you set in the storage class to place the
object on DASD, optical, or tape volumes. The storage class is assigned to an object when it
is stored or when the object is moved. The ACS routines can override this assignment.
Note: If you change a storage class definition, the changes affect the performance service
levels of existing data sets that are assigned to that class when the data sets are
subsequently opened. However, the definition changes do not affect the location or
allocation characteristics of existing data sets.
(Figure 5-11: management class attributes — space, expiration, backup, migration and object
transition, and GDG management — acted on by DFSMShsm and DFSMSdss under the Storage
Management Subsystem.)
Management classes let you define management requirements for individual data sets, rather
than defining the requirements for entire volumes. All the data set functions described in the
management class are executed by DFSMShsm and DFSMSdss programs. Figure 5-11 on
page 237 shows the sort of functions an installation can define in a management class.
The ACS routine can override the management class specified in JCL, or on the ALLOCATE or
DEFINE command. You cannot override management class attributes through JCL or command
parameters.
Note: If you change a management class definition, the changes affect the management
requirements of existing data sets and objects that are assigned that class. You can
reassign management classes when data sets are renamed.
(Figure 5-12: example storage groups — SMS-managed groups such as VIO, PRIMARY, LARGE,
TAPE, OBJECT, OBJECT BACKUP, and groups for DB2, IMS, and CICS data; DFSMShsm-owned
migration level 1 and level 2, backup, and dump volumes; and non-system-managed SYSTEM,
UNMOVABLE, and TAPE volumes.)
Storage groups
A storage group is a collection of storage volumes and attributes that you define. The
collection can be a group of:
System paging volumes
DASD volumes
Tape volumes
Optical volumes
Combination of DASD and optical volumes that look alike
DASD, tape, and optical volumes treated as a single object storage hierarchy
Storage groups, along with storage classes, help reduce the requirement for users to
understand the physical characteristics of the storage devices which contain their data.
In a tape environment, you can also use tape storage groups to direct a new tape data set to
an automated or manual tape library.
DFSMShsm uses some of the storage group attributes to determine if the volumes in the
storage group are eligible for automatic space or availability management.
Figure 5-12 shows an example of how an installation can group storage volumes according to
their objective. In this example:
SMS-managed DASD volumes are grouped into storage groups so that primary data sets,
large data sets, DB2 data, IMS data, and CICS data are all separated.
Note: A storage group is assigned to a data set only through the storage group ACS
routine. Users cannot specify a storage group when they allocate a data set, although they
can specify a unit and volume.
Whether or not to honor a user’s unit and volume request is an installation decision, but we
recommend that you discourage users from directly requesting specific devices. It is more
effective for users to specify the logical storage requirements of their data by storage and
management class, which the installation can then verify in the ACS routines.
For objects, there are two types of storage groups, OBJECT and OBJECT BACKUP. An
OBJECT storage group is assigned by OAM when the object is stored; the storage group
ACS routine can override this assignment. There is only one OBJECT BACKUP storage
group, and all backup copies of all objects are assigned to this storage group.
SMS volume selection
SMS determines which volumes are used for data set allocation by developing a list of all
volumes from the storage groups assigned by the storage group ACS routine. Volumes are
then either removed from further consideration or flagged as the following:
Primary Volumes that are online, below threshold, and meet all the criteria specified
in the storage class.
Secondary Volumes that do not meet all the criteria for primary volumes.
Tertiary Volumes selected when the number of volumes in the storage group is less
than the number of volumes requested.
Rejected Volumes that do not meet the required specifications; they are not
candidates for selection.
SMS starts volume selection from the primary list; if no volumes are available, SMS selects
from the secondary; and, if no secondary volumes are available, SMS selects from the
tertiary list.
SMS interfaces with the system resource manager (SRM) to select from the eligible volumes
in the primary list. SRM uses device delays as one of the criteria for selection, and does not
prefer a volume if it is already allocated in the jobstep. This is useful for batch processing
when the data set is accessed immediately after creation.
SMS does not use SRM to select volumes from the secondary or tertiary volume lists. It uses
a form of randomization to prevent skewed allocations in instances such as when new
volumes are added to a storage group, or when the free space statistics are not current on
volumes.
For a striped data set, when multiple storage groups are assigned to an allocation, SMS
examines each storage group and selects the one that offers the largest number of volumes
attached to unique control units. This is called control unit separation. Once a storage group
has been selected, SMS selects the volumes based on available space, control unit
separation, and performance characteristics if they are specified in the assigned storage
class.
The user-defined group of data sets can be those belonging to an application, or any
combination of data sets that you want treated as a separate entity. Aggregate processing
enables you to:
Back up and recover data sets by application, to enable business to resume at a remote
site if necessary
Move applications in a non-emergency situation in conjunction with personnel moves or
workload balancing
Duplicate a problem at another site
You can use aggregate groups as a supplement to using management class for applications
that are critical to your business. You can associate an aggregate group with a management
class. The management class specifies backup attributes for the aggregate group, such as
the copy technique for backing up DASD data sets on primary volumes, the number of
aggregate backup versions to retain, and how long to retain them.
Although SMS must be used on the system where the backups are performed, you can
recover aggregate groups to systems that are not using SMS, provided that the groups do not
contain data that requires that SMS be active, such as PDSEs. You can use aggregate groups
to transfer applications to other data processing installations, or to migrate applications to
newly-installed DASD volumes. You can transfer the application's migrated data, along with its
active data, without recalling the migrated data.
(Figure: conversion of existing data sets — DFSMSdss or DFSMShsm drives the data set through
the storage class ACS routine; if a storage class is assigned, the management class and storage
group ACS routines also run and the data set is placed on a system-managed volume; if no
storage class is assigned, the data set remains non-system-managed.)
The ACS language contains a number of read-only variables, which you can use to analyze
new data allocations. For example, you can use the read-only variable &DSN to make class
and group assignments based on data set or object collection name, or &LLQ to make
assignments based on the low-level qualifier of the data set or object collection name.
With z/OS V1R6, you can use a new ACS routine read-only security label variable,
&SECLABL, as input to the ACS routine. A security label is a name used to represent an
association between a particular security level and a set of security categories. It indicates
the minimum level of security required to access a data set protected by this profile.
You use the four read-write variables to assign the class or storage group you determine for
the data set or object, based on the routine you are writing. For example, you use the
&STORCLAS variable to assign a storage class to a data set or object.
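As a sketch only, a storage class ACS routine might use these variables as follows; the storage
class names SCDB and SCSTD and the filter criteria are invented for illustration and are not part
of any IBM-supplied configuration:
PROC STORCLAS
  /* Example only: SCDB and SCSTD are hypothetical storage classes */
  FILTLIST DBDATA INCLUDE(DB2.**,IMS.**)
  IF &DSN = &DBDATA THEN
    SET &STORCLAS = 'SCDB'
  ELSE
    IF &LLQ = 'LISTING' THEN
      SET &STORCLAS = ''      /* not system-managed */
    ELSE
      SET &STORCLAS = 'SCSTD'
END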
For each SMS configuration, you can write as many as four routines: one each for data class,
storage class, management class, and storage group. Use ISMF to create, translate, validate,
and test the routines.
Because data allocations, whether dynamic or through JCL, are processed through ACS
routines, you can enforce installation standards for data allocation on system-managed and
non-system-managed volumes. ACS routines also enable you to override user specifications
for data, storage, and management class, and requests for specific storage volumes.
You can use the ACS routines to determine the SMS classes for data sets created by the
Distributed FileManager/MVS. If a remote user does not specify a storage class, and if the
ACS routines decide that the data set should not be system-managed, the Distributed
FileManager/MVS terminates the creation process immediately and returns an error reply
message to the source. Therefore, when you construct your ACS routines, consider the
potential data set creation requests of remote users.
SMS configuration
An SMS configuration is composed of:
A set of data class, management class, storage class, and storage group definitions
ACS routines to assign the classes and groups
Optical library and drive definitions
Tape library definitions
Aggregate group definitions
An SMS base configuration, which contains information such as:
– Default management class
– Default device geometry
– The systems in the installation for which the subsystem manages storage
The SMS configuration is stored in SMS control data sets, which are VSAM linear data sets.
You must define the control data sets before activating SMS. SMS uses the following types of
control data sets:
Source Control Data Set (SCDS)
Active Control Data Set (ACDS)
Communications Data Set (COMMDS)
You use the SCDS to develop and test SMS configurations. Before activating a configuration,
retain at least one prior configuration in case you need to fall back to it because of an error. The
SCDS is never used to manage allocations.
The ACDS must reside on a shared device, accessible to all systems, to ensure that they share
a common view of the active configuration. Do not place the ACDS on the same device as the
COMMDS or SCDS; both the ACDS and COMMDS are needed for SMS operation across the
complex, and separation protects against hardware failure. We also recommend that you create
one or more spare or backup ACDSs in case a hardware failure, accidental data loss, or
corruption makes the primary ACDS unavailable.
The COMMDS must reside on a shared device accessible to all systems. However, do not
allocate it on the same device as the ACDS. Create a spare COMMDS in case of a hardware
failure or accidental data loss or corruption. SMS activation fails if the COMMDS is
unavailable.
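As an illustration, an SCDS could be defined with an IDCAMS job similar to the following sketch;
the data set name, volume, and space values are examples only, and SHAREOPTIONS(2,3)
reflects the recommendation for the SCDS given later in this chapter:
//DEFSCDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER(NAME(SYS1.SMS.SCDS) LINEAR -
         VOLUMES(SMSV01) TRACKS(15 5) -
         SHAREOPTIONS(2,3))
/*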
(Figure: DFSMS implementation phases, such as managing temporary data and managing tape
volumes.)
Implementing DFSMS
You can implement SMS to fit your specific needs. You do not have to implement and use all
of the SMS functions. Rather, you can implement the functions you are most interested in
first. For example, you can:
Set up a storage group to only exploit the functions provided by extended format data sets,
such as striping, system-managed buffering (SMB), partial release, and so on.
Put some of your data in a pool of one or more storage groups and assign them policies at
the storage group level to implement DFSMShsm operations in stages.
Exploit VSAM record level sharing (RLS).
In this book, we present an overview of the steps needed to activate, and manage data with,
a minimal SMS configuration, without affecting your JCL or data set allocations. To implement
DFSMS in your installation, however, refer to z/OS DFSMS Implementing System-Managed
Storage, SC26-7407.
All of these elements are required for a valid SMS configuration, except for the storage class
ACS routine.
The steps needed to activate the minimal configuration are presented in Figure 5-18. When
implementing DFSMS, beginning by implementing a minimal configuration allows you to:
Gain experience with ISMF applications for the storage administrator, since you use ISMF
applications to define and activate the SMS configuration.
Specify SHAREOPTIONS(2,3) only for the SCDS. This lets one update-mode user operate
simultaneously with other read-mode users between regions.
Define GRS resource names for active SMS control data sets
If you plan to share SMS control data sets between systems, consider the effects of multiple
systems sharing these data sets. Access is serialized by the use of RESERVE, which locks
out access to the entire device volume from other systems until the RELEASE is issued by the
task using the resource. This is undesirable, especially when there are other data sets on the
volume.
Place the resource name IGDCDSXS in the RESERVE conversion RNL as a generic entry to
convert the RESERVE/RELEASE to an ENQueue/DEQueue. This minimizes delays due to
contention for resources and prevents deadlocks associated with the VARY SMS command.
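Assuming your installation uses a GRSRNLxx parmlib member for its resource name lists, the
conversion entry could look like this:
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(IGDCDSXS)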
Important: If there are multiple SMS complexes within a global resource serialization
complex, be sure to use unique COMMDS and ACDS data set names to prevent false
contention.
For information about allocating COMMDS and ACDS data set names, see z/OS DFSMS
Implementing System-Managed Storage, SC26-7407.
(Figure: building the minimal configuration — security definitions, storage class and storage
group definitions, ACS routines, and translating and validating the SCDS.)
Defining a data class, a management class, and creating their respective ACS routines are
not required for a valid SCDS. However, because of the importance of the default
management class, we recommend that you include it in your minimal configuration.
For a detailed description of SMS classes and groups, see z/OS DFSMS Implementing
System-Managed Storage, SC26-7407.
The DFSMS product tape contains a set of sample ACS routines. The appendix of z/OS
DFSMSdfp Storage Administration Reference, SC26-7402 contains sample definitions of the
SMS classes and groups that are used in the sample ACS routines. The starter set
configuration can be used as a model for your own SCDS. For a detailed description of base
configuration attributes and how to use ISMF to define its contents, see z/OS DFSMSdfp
Storage Administration Reference, SC26-7402.
In the storage class ACS routine, the &STORCLAS variable is set to a null value to prevent
users from coding a storage class in JCL before you want to have system-managed data sets.
You define the class using ISMF. Select Storage Class in the primary menu. Then you can
define the class, NONSMS, in your configuration in one of two ways:
Select option 3 Define in the Storage Class Application Selection panel. The CDS Name
field must point to the SCDS you are building.
Select option 1 Display in the Storage Class Application Selection panel. The CDS Name
field must point to the starter set SCDS. Then, in the displayed panel, use the COPY line
operator to copy the definition of NONSMS from the starter set SCDS to your own SCDS.
Defining a non-existent volume lets you activate SMS without having any system-managed
volumes. No data sets are system-managed at this time. This condition provides an
opportunity to experiment with SMS without any risk to your data.
Define a storage group (for example, NOVOLS) in your SCDS. A name like NOVOLS is useful
because you know it does not contain valid volumes.
No management classes are assigned when the minimal configuration is active. Definition of
this default is done here to prepare for the managing permanent data implementation phase.
The management class, STANDEF, is defined in the starter set SCDS. You can copy its
definition to your own SCDS in the same way as the storage class, NONSMS.
The storage group ACS routine will never run if a null storage class is assigned. Therefore, no
data sets are allocated as system-managed by the minimal configuration. However, you must
code a trivial one to satisfy the SMS requirements for a valid SCDS. After you have written the
ACS routines, use ISMF to translate them into executable form.
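As a sketch, the two required routines for the minimal configuration could be as simple as the
following; the storage class routine always returns a null value, and the storage group routine
assigns the NOVOLS group defined earlier (it never actually runs while the storage class is null):
PROC STORCLAS
  /* Minimal configuration: assign no storage class, so no data */
  /* sets become system-managed yet                             */
  SET &STORCLAS = ''
END

PROC STORGRP
  /* Required for a valid SCDS, but never executed while the    */
  /* storage class routine returns a null value                 */
  SET &STORGRP = 'NOVOLS'
END
Each routine is typically kept in its own member of the ACS source data set and translated
separately through ISMF.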
Follow these steps to create a data set that contains your ACS routines:
1. If you do not have the starter set, allocate a fixed-block PDS or PDSE with LRECL=80 to
contain your ACS routines. Otherwise, start with the next step.
2. On the ISMF Primary Option Menu, select Automatic Class Selection to display the ACS
Application Selection panel.
3. Select option 1 Edit. When the next panel is shown, enter in the Edit panel the name of
the PDS or PDSE data set you want to create to contain your source ACS routines.
For more information, see z/OS DFSMS: Using the Interactive Storage Management Facility,
SC26-7411.
Every SMS system must have an IGDSMSzz member in SYS1.PARMLIB that specifies a
required ACDS and COMMDS control data set pair. This ACDS and COMMDS pair is used if
the COMMDS of the pair does not point to another COMMDS.
If the COMMDS of the pair refers to another COMMDS during IPL, it means a more recent
COMMDS has been used. SMS uses the most recent COMMDS to ensure that you cannot
IPL with a down-level configuration.
The data sets that you specify for the ACDS and COMMDS pair must be the same for every
system in an SMS complex. Whenever you change the ACDS or COMMDS, update the
IGDSMSzz for every system in the SMS complex so that it specifies the same data sets.
IGDSMSzz has many parameters. For a complete description of the SMS parameters, see
z/OS MVS Initialization and Tuning Reference, SA22-7592, and z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
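A minimal IGDSMSzz member might contain little more than the required pair; the data set
names below are examples only:
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)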
Activating a new SMS configuration
Starting SMS
To start SMS—which starts the SMS address space—use either of these methods:
With SMS=xx defined in IEASYSxx and SMS defined as a valid subsystem, IPL the
system. This starts SMS automatically.
With SMS defined as a valid subsystem to z/OS, IPL the system. Start SMS later, using
the SET SMS=yy MVS operator command.
You can manually activate a new SMS configuration in two ways. Note that SMS must be
active before you use one of these methods:
1. Activating an SMS configuration from ISMF:
– From the ISMF Primary Option Menu panel, select Control Data Set.
– In the CDS Application Selection panel, enter your SCDS data set name and select 5
Activate, or enter the ACTIVATE command on the command line.
The ACTIVATE command, which runs from the ISMF CDS application, is equivalent to the
SETSMS operator command with the SCDS keyword specified.
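The same activation can be done directly from the operator console, for example (the SCDS
name is an example only):
SETSMS SCDS(SYS1.SMS.SCDS)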
If you use RACF, you can enable storage administrators to activate SMS configurations from
ISMF by defining the facility STGADMIN.IGD.ACTIVATE.CONFIGURATION and issuing
permit commands for each storage administrator.
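A sketch of the corresponding RACF commands, assuming a storage administrator group named
STGADMIN (the group name is an example):
RDEFINE FACILITY STGADMIN.IGD.ACTIVATE.CONFIGURATION UACC(NONE)
PERMIT  STGADMIN.IGD.ACTIVATE.CONFIGURATION CLASS(FACILITY) -
        ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH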
The SET SMS=xx and SETSMS commands differ as follows:
When and how to use the command: SET SMS=xx initializes SMS parameters and starts SMS
if SMS is defined but was not started at IPL, and it changes SMS parameters while SMS is
running. SETSMS changes SMS parameters only when SMS is already running.
What default values are available: With SET SMS=xx, default values are used for parameters
that are not specified. With SETSMS, there are no default values; parameters that are not
specified remain unchanged.
For more information about operator commands, refer to z/OS MVS System Commands,
SA22-7627.
D SMS,SG(STRIPE),LISTVOL
IGD002I 16:02:30 DISPLAY SMS 581
The DISPLAY SMS command can be used in different variations. For the full functionality of this
command, refer to z/OS MVS System Commands, SA22-7627.
Inefficient space usage and poor data allocation cause problems with space and performance
management. In a DFSMS environment, you can enforce good allocation practices to help
reduce some of these problems. The following section highlights how to exploit SMS
capabilities.
Data classes can be determined from the user-specified value on the DATACLAS parameter
(DD card, TSO Alloc, Dynalloc macro), from a RACF default, or by ACS routines. ACS
routines can also override user-specified or RACF default data classes.
You can override a data class attribute (not the data class itself) using JCL or dynamic
allocation parameters. DFSMS usually does not change values that are explicitly specified,
because doing so would alter the original meaning and intent of the allocation. There is an
exception: users cannot override the data class attributes of dynamically allocated data sets if
you use the IEFDB401 user exit.
For additional information about data classes see also “Using data classes” on page 231.
For sample data classes, descriptions, and ACS routines, see z/OS DFSMS Implementing
System-Managed Storage, SC26-7407.
You take full advantage of system-managed storage when you allow the system to place data
on the most appropriate device in the most efficient way, that is, when you use system-managed
data sets without coding device-dependent parameters such as UNIT and VOL=SER.
When converting data sets for use in DFSMS, users do not have to remove these parameters
from existing JCL because volume and unit information can be ignored with ACS routines.
(However, you should work with users to evaluate UNIT and VOL=SER dependencies before
conversion).
If you keep the VOL=SER parameter for a non-SMS volume, but you are trying to access a
system-managed data set, then SMS might not find the data set. All SMS data sets (the ones
with a storage class) must reside on a system-managed volume.
You must implement a naming convention for your data sets. Although a naming convention is
not a prerequisite for DFSMS conversion, it makes more efficient use of DFSMS. You can
also reduce the cost of storage management significantly by grouping data that shares
common management requirements. Naming conventions are an effective way of grouping
data. They also:
Simplify service-level assignments to data
Facilitate writing and maintaining ACS routines
Allow data to be mixed in a system-managed environment while retaining separate
management criteria
Provide a filtering technique useful with many storage management products
Simplify the data definition step of aggregate backup and recovery support
Most naming conventions are based on the HLQ and LLQ of the data name. Other levels of
qualifiers can be used to identify generation data sets and database data. They can also be
used to help users to identify their own data.
Do not embed information that is subject to frequent change in the HLQ, such as department
number, application location, output device type, job name, or access method. Set a standard
within the HLQ. Figure 5-28 shows examples of naming standards.
Figure 5-29 shows examples of how you can use LLQ naming standards to indicate the
storage management processing criteria.
The first column lists the LLQ of a data name. An asterisk indicates where a partial qualifier
can be used. For example, LIST* indicates that only the first four characters of the LLQ must
be LIST; valid qualifiers include LIST1, LISTING, and LISTOUT. The remaining columns show
the storage management processing information for the data listed.
Negotiate with your user group representatives to agree on the specific policies for the
installation, how soon you can implement them, and how strongly you enforce them.
You can simplify storage management by limiting the number of data sets and volumes that
cannot be system-managed.
(Figure: data classes DC A, DC B, and DC C as templates of data class attributes — data set
type, record length, block size, space requirements, expiration date, and VSAM attributes.)
Data class names should indicate the type of data they are assigned to. This makes it easier
for users to identify the template they need to use for allocation.
You define data classes using the ISMF data class application. Users can access the Data
Class List panel to determine which data classes are available and the allocation values that
each data class contains.
Figure 5-32 on page 274 contains information that can help in this task. For more information
about planning and defining data classes, see z/OS DFSMSdfp Storage Administration
Reference, SC26-7402.
For detailed information about specifying data class attributes, see z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
Figure 5-33 Using data class (DC) ACS routine to enforce standards
The data class ACS routine provides an automatic method for enforcing standards because it
is called for system-managed and non-system-managed data set allocations. Standards are
enforced automatically at allocation time, rather than through manual techniques after
allocation.
Enforcing standards optimizes data processing resources, improves service to users, and
positions you for implementing system-managed storage. You can fail requests or issue
warning messages to users who do not conform to standards. Consider enforcing the
following standards in your DFSMS environment:
Prevent extended retention or expiration periods.
Prevent specific volume allocations, unless authorized. For example, you can control
allocations to spare, system, database, or other volumes.
Require valid naming conventions before implementing DFSMS system management for
permanent data sets.
For example, with the use of data classes, you have less use for the JCL keywords: UNIT,
DCB, and AMP. When you start using system-managed data sets, you do not need to use the
JCL VOL keyword.
In the following sections, we present some sample jobs exemplifying the use of JCL keywords
when:
Creating a sequential data set
Creating a VSAM cluster
Specifying a retention period
Specifying an expiration date
//NEWDATA DD DSN=FILE.SEQ1,
// DISP=(,CATLG),
// SPACE=(50,(5,5)),AVGREC=M,
// RECFM=VB,LRECL=80
Figure 5-35 shows an example of JCL used to create a data set in a system-managed
environment.
Table 5-2 lists the attributes a user can override with JCL.
For more information about data classes refer to “Using data classes” on page 231 and “Data
class attributes” on page 274.
As previously mentioned, in order to use a data class, the data set does not have to be
system-managed. An installation can take advantage of a minimal SMS configuration to
simplify JCL use and manage data set allocation.
For information about managing data allocation, refer to z/OS DFSMS: Using Data Sets,
SC26-7410.
//VSAM DD DSN=NEW.VSAM,
// DISP=(,CATLG),
// SPACE=(1,(2,2)),AVGREC=M,
// RECORG=KS,KEYLEN=17,KEYOFF=6,
// LRECL=80
(Figure 5-36: the allocation creates the KSDS cluster NEW.VSAM with its NEW.VSAM.DATA and
NEW.VSAM.INDEX components.)
You can use JCL DD statement parameters to override some data class attributes; refer to
Table 5-2 for those related to VSAM data sets.
A data set with a disposition of MOD is treated as a NEW allocation if it does not already
exist; otherwise, it is treated as an OLD allocation.
In a non-SMS environment, a VSAM cluster can be created only through IDCAMS. In
Figure 5-36, NEW.VSAM refers to a KSDS VSAM cluster.
You cannot use certain parameters in JCL when allocating VSAM data sets, although you can
use them in the IDCAMS DEFINE command.
//RETAIN DD DSN=DEPTM86.RETPD.DATA,
// DISP=(,CATLG),RETPD=365
//RETAIN DD DSN=DEPTM86.EXPDT.DATA,
// DISP=(,CATLG),EXPDT=2006/013
The VTOC entry for non-VSAM and VSAM data sets contains the expiration date as declared
in the JCL, the TSO ALLOCATE command, the IDCAMS DEFINE command, or in the data class
definition. The expiration date is placed in the VTOC either directly from the date
specification, or after it is calculated from the retention period specification. The expiration
date in the catalog entry exists for information purposes only. If you specify the current date or
an earlier date, the data set is immediately eligible for replacement.
You can use a management class to limit or ignore the RETPD and EXPDT parameters given
by a user. If a user specifies values that exceed the maximum allowed by the management
class definition, the retention period is reset to the allowed maximum. For an expiration date
beyond year 1999 use the following format: YYYY/DDD. For more information about using
management class to control retention period and expiration date, refer to z/OS DFSMShsm
Storage Administration Guide, SC35-0421.
Attention: The expiration dates 99365, 99366, 1999/365, and 1999/366 are special values
that mean the data set never expires.
If you have DFSMS installed, you can extend PDSE sharing to enable multiple users on
multiple systems to concurrently create new PDSE members and read existing members.
Using the PDSESHARING keyword in the SYS1.PARMLIB member, IGDSMSxx, you can
specify:
NORMAL. This allows multiple users to read any member of a PDSE.
EXTENDED. This allows multiple users to read any member or create new members of a
PDSE.
All systems sharing PDSEs need to be upgraded to DFSMS to use the extended PDSE
sharing capability.
After updating the IGDSMSxx member of SYS1.PARMLIB, you need to issue the SET SMS=xx
command on every system in the complex to activate the sharing capability. See also
z/OS DFSMS: Using Data Sets, SC26-7410 for information about PDSE sharing.
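For example, assuming the control data set names used earlier in this chapter, the IGDSMSxx
member could specify:
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    PDSESHARING(EXTENDED)
followed by SET SMS=xx on each system in the complex.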
Although SMS supports PDSs, you should consider converting these to the PDSE format.
Refer to 4.26, “PDSE: Conversion” on page 155 for more information about PDSE
conversion.
By using the &DSNTYPE read-only variable in the ACS routine for data-class selection, you
can control which PDSs are to be allocated as PDSEs. The following values are valid for
DSNTYPE in the data class ACS routines:
&DSNTYPE = 'LIBRARY' for PDSEs.
&DSNTYPE = 'PDS' for PDSs.
&DSNTYPE is not specified. This indicates that the allocation request is provided by the
user through JCL, the TSO/E ALLOCATE command, or dynamic allocation.
If you specify a DSNTYPE value in the JCL, and a different DSNTYPE value is also specified
in the data class selected by ACS routines for the allocation, the value specified in the data
class is ignored.
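As a sketch, a data class ACS routine could assign a data class that was defined with a data set
name type of LIBRARY to selected allocations that do not code DSNTYPE themselves; the data
class name DCPDSE and the filter list values are invented for illustration:
PROC DATACLAS
  /* DCPDSE is assumed to be defined with a data set name type of */
  /* LIBRARY, so matching PDS allocations are created as PDSEs    */
  FILTLIST LIBLLQ INCLUDE('CNTL','PANELS','SKELS')
  IF &DSNTYPE = '' AND &LLQ = &LIBLLQ THEN
    SET &DATACLAS = 'DCPDSE'
END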
(Figure: common types of data that can be system-managed — temporary data, permanent
data, object data, database data, and system data.)
These are some common types of data that can be system-managed. For details on how
these data types can be system-managed using SMS storage groups, see z/OS DFSMS
Implementing System-Managed Storage, SC26-7407.
Temporary data Data sets used only for the duration of a job, job step, or terminal
session, and then deleted. These data sets can be cataloged or
uncataloged, and can range in size from small to very large.
Permanent data Data sets consisting of:
• Interactive data
• TSO user data sets
• ISPF/PDF libraries you use during a terminal session
Data sets classified in this category are typically small, and are
frequently accessed and updated.
Batch data Data that is classified as either online-initiated, production, or test.
• Data accessed as online-initiated are background jobs that an
online facility (such as TSO) generates.
Uncataloged data
When data sets are cataloged, users do not need to know which volumes the data sets reside
on when they reference them; they do not need to specify unit type or volume serial number.
This is essential in an environment with storage groups, where users do not have private
volumes.
(Figure: ISMF Primary Option Menu panel, z/OS DFSMS V1 R6.)
ISMF provides interactive access to the space management, backup, and recovery services
of the DFSMShsm and DFSMSdss functional components of DFSMS, to the tape
management services of the DFSMSrmm functional component, as well as to other products.
DFSMS introduces the ability to use ISMF to define attributes of tape storage groups and
libraries.
A storage administrator uses ISMF to define the installation's policy for managing storage by
defining and managing SMS classes, groups, and ACS routines. ISMF then places the
configuration in an SCDS. You can activate an SCDS through ISMF or an operator command.
ISMF is menu-driven, with fast paths for many of its functions. ISMF uses the ISPF data-tag
language (DTL) to give its functional panels on workstations the look of common user access
(CUA®) panels and a graphical user interface (GUI).
ISMF generates a data list based on your selection criteria. Once the list is built, you can use
ISMF entry panels to perform space management or backup and recovery tasks against the
entries in the list.
As a user performing data management tasks against individual data sets or against lists of
data sets or volumes, you can use ISMF to:
Edit, browse, and sort data set records
Delete data sets and backup copies
Protect data sets by limiting their access
Recover unused space from data sets and consolidate free space on DASD volumes
Copy data sets or DASD volumes to the same device or another device
Migrate data sets to another migration level
You cannot allocate data sets from ISMF. Data sets are allocated from ISPF, from TSO, or
with JCL statements. ISMF provides the DSUTIL command, which enables users to get to
ISPF and toggle back to ISMF.
Figure 5-46 ISMF Primary Option Menu panel for storage administrator mode
Accessing ISMF
How you access ISMF depends on your site.
You can create an option on the ISPF Primary Option Menu to access ISMF. Then access
ISMF by typing the appropriate option after the arrow on the Option field, in the ISPF
Primary Option Menu. This starts an ISMF session from the ISPF/PDF Primary Option
Menu.
To access ISMF directly from TSO, use the command:
ISPSTART PGM(DGTFMD01) NEWAPPL(DGT)
There are two Primary Option Menus, one for storage administrators, and another for end
users. Figure 5-46 shows the menu available to storage administrators; it includes additional
applications not available to end users.
Option 0 controls the user mode or the type of Primary Option Menu to be displayed. Refer to
“ISMF: Profile option” on page 295 for information about how to change the user mode.
The ISMF Primary Option Menu example assumes installation of DFSMS at the current
release level. For information about adding the DFSORT option to your Primary Option Menu,
refer to DFSORT Installation and Customization Release 14, SC33-4034.
(Figure: ISMF Profile Option Menu panel.)
You can select ISMF or ISPF JCL statements for processing batch jobs.
Figure 5-48 shows the panel you reach when you press the Help PF key with the cursor in the
Line Operator field of the panel shown in Figure 5-49 on page 297 where the arrow points to
the data set. The Data Set List Line Operators panel shows the commands available to enter
in that field. If you want an explanation about a specific command, type the option
corresponding to the desired command and a panel is displayed showing information about
the command function.
You can exploit the Help PF key, when defining classes, to obtain information about what you
have to enter in the fields. Place the cursor in the field and press the Help PF key.
To see and change the assigned functions to the PF keys, enter the KEYS command in the
Command field.
Figure 5-50 shows the data set list generated for the generic data set name MHLRES2.**.
If ISMF is unable to get certain information required to check whether a data set meets the
specified selection criteria, that data set is also included in the list. Missing information is
indicated by dashes in the corresponding column.
The Data Fields field shows how many fields are in the list. You can navigate through these
fields using the Right and Left PF keys. The figure also shows the use of the action bar.
Volume option
Selecting option 2 (Volume) from the ISMF Primary Option Menu takes you to the Volume List
Selection Menu panel, as follows:
Selecting option 1 (DASD) displays the Volume Selection Entry Panel, shown in part (1) of
Figure 5-51. Using filters, you can select a Volume List Panel, shown in part (2) of the figure.
To view the commands you can use in the Line Operator field (marked with a circle in the
figure), place the cursor in the field and press the Help PF key.
Data class attributes are assigned to a data set when the data set is created. They apply to
both SMS-managed and non-SMS-managed data sets. Attributes specified in JCL or
equivalent allocation statements override those specified in a data class. Individual attributes
in a data class can be overridden by JCL, TSO, IDCAMS, and dynamic allocation statements.
Entering the DISPLAY line command in the Line Operator field, in front of a data class name,
displays the information about that data class, without requiring you to navigate using the
Right and Left PF keys.
The Storage Class Application Selection panel lets the storage administrator specify
performance objectives and availability attributes that characterize a collection of data sets.
For objects, the storage administrator can define the performance attribute Initial Access
Response Seconds. A data set or object must be assigned to a storage class in order to be
managed by DFSMS.
You can specify the DISPLAY line operator next to any class name on a class list to generate a
panel that displays values associated with that particular class. This information can help you
decide whether you need to assign a new DFSMS class to your data set or object.
If you determine that a data set you own should be associated with a different management
class or storage class, and if you have authorization, you can use the ALTER line operator
against a data set list entry to specify another storage class or management class.
ISMF lists
After obtaining a list (data set, data class, or storage class), you can save the list by typing
SAVE listname in the Command panel field. To see the saved lists, use the option L (List) in
the ISMF Primary Option Menu.
The List Application panel displays a list of all lists saved from ISMF applications. Each entry
in the list represents a list that was saved. If there are no saved lists to be found, the ISMF
Primary Option Menu panel is redisplayed with the message that the list is empty.
You can reuse and delete saved lists. From the List Application, you can reuse lists as though
they were created from the corresponding application. You can then use line operators and
commands to tailor and manage the information in the saved lists.
For more about the ISMF panel, refer to z/OS DFSMS: Using the Interactive Storage
Management Facility, SC26-7411.
Chapter 6. Catalogs
A catalog is a z/OS data set that describes other data set attributes and records the location
of a data set so that the data set can be retrieved without requiring the user to specify its
volume location. Multiple user catalogs contain information about user data sets, and a single
master catalog contains entries for system data sets and user catalogs.
In z/OS, the component controlling catalogs is embedded in DFSMSdfp and is called Catalog
Management. Catalog Management has one address space for itself named Catalog Address
Space (CAS). This address space is used for buffering and to store control blocks, together
with some code. The modern catalog structure in z/OS is named integrated catalog facility
(ICF).
All data sets managed by the storage management subsystem (SMS) must be cataloged in
an ICF catalog.
Most installations depend on the availability of catalog facilities to run production job streams
and to support online users. For maximum reliability and efficiency, all permanent data sets
should be cataloged and catalog recovery procedures must exist to guarantee continuous
availability in z/OS.
6.1 Catalogs
(Figure: two ICF catalog structures, each relating a catalog to the VVDSs and VTOCs on the
volumes it references.)
Catalogs
A catalog is a data set that contains information about other data sets. It provides users with
the ability to locate a data set by name, without knowing the volume where the data set
resides. When a data set catalog is utilized, your users need to know less about your storage
setup. Thus, data sets can be moved from one device to another, without requiring a change
in JCL DD statements that refer to an existing data set.
Cataloging data sets also simplifies backup and recovery procedures. Catalogs are the
central information point for VSAM data sets; all VSAM data sets must be cataloged. In
addition, all SMS-managed data sets must be cataloged.
Activity towards the catalog is much more intense in a Batch/TSO workload than in a CICS/
DB2 workload, where the majority of data sets are allocated at CICS/DB2 initialization time.
An ICF catalog consists of two components: the basic catalog structure (BCS) and the VSAM
volume data set (VVDS). When we talk about a catalog, we usually mean the BCS. The VVDS can
be considered an extension of the volume table of contents (VTOC). The VVDS is
volume-specific, whereas the complexity of the BCS depends on your definitions. The
relationship between the BCS and the VVDS is many-to-many. That is, a BCS can point to
multiple VVDSs and a VVDS can point to multiple BCSs.
(Figure: BCS entries point to the volumes on which data sets reside, such as VOL002 and
VOL003; the VTOC and VVDS on each volume describe the data sets on that volume.)
For non-VSAM data sets that are not SMS-managed, all catalog information is contained
within the BCS. For other types of data sets, there is other information available in the VVDS.
The BCS contains the information about where a data set resides. That can be a DASD
volume, a tape, or another storage medium.
Related information in the BCS is grouped into logical, variable-length, spanned records
related by key. The BCS uses keys that are the data set names (plus one character for
extensions). One control interval can contain multiple BCS records. To reduce the number of
I/Os necessary for catalog processing, logically-related data is consolidated in the BCS.
VVDS characteristics
The VVDS is a VSAM entry-sequenced data set (ESDS) that has a 4 KB control interval size.
The hexadecimal RBA of a record is used as its key or identifier. The VVDS is named
SYS1.VVDS.Vvolser, where volser is the volume serial number of the volume on which the
VVDS resides.
You can explicitly define the VVDS using IDCAMS, or it is implicitly created after you define
the first VSAM or SMS-managed data set on the volume.
VVDSSPACE keyword
Before z/OS V1R7, the default space parameter was TRACKS(10,10), which may be too small
for sites that use custom 3390 volumes (those larger than a 3390-9). With z/OS V1R7,
there is a new VVDSSPACE keyword on the F CATALOG command, as follows:
F CATALOG,VVDSSPACE(primary,secondary)
An explicitly defined VVDS is not related to any BCS until a data set or catalog object is
defined on the volume. As data sets are allocated on the VVDS volume, each BCS with
VSAM data sets or SMS-managed data sets residing on that volume is related to the VVDS.
VVDSSPACE indicates that the Catalog Address Space should use the values specified as
the primary and secondary allocation amount in tracks for an implicitly defined VVDS. The
default value is ten tracks for both the primary and secondary values. The specified values
are preserved across a Catalog Address Space restart, but are not preserved across an IPL.
(Figure: catalogs by function — a master catalog on volume SYSCAT containing entries for
SYS1.PARMLIB, SYS1.LINKLIB, and the user catalog connector, and a user catalog on volume
VOLABC containing entries for user data sets such as ABC.DSNAME and DEF.DSNAME, which
reside on volumes such as SYSRES and VOL001.)
Catalogs by function
By function, the catalogs (BCSs) can be classified as master catalog and user catalog. A
particular case of a user catalog is the volume catalog, which is a user catalog containing only
tape library and tape volume entries.
There is no structural difference between a master catalog and a user catalog. What makes a
master catalog different is how it is used, and what data sets are cataloged in it. For example,
the same catalog can be master in one z/OS and user in the other z/OS.
The master catalog for a system must contain entries for all user catalogs and their aliases
that the system uses. Also, all SYS1 data sets must be cataloged in the master catalog for
proper system initialization.
Attention: To minimize update activity to the master catalog, and to reduce the exposure
to breakage, we strongly recommend that only SYS1 data sets, user catalog connector
records, and the aliases pointing to those connectors should be in the master catalog.
For more information refer to z/OS MVS Initialization and Tuning Reference, SA22-7592.
For information about the IDCAMS LISTCAT command, see also “Listing a catalog” on
page 322.
If you do not want to run an IDCAMS job, you can run LISTCAT as a line command in ISPF
Option 3.4. List SYS1.PARMLIB and type LISTC ENT(/) next to it, as shown in Figure 6-5.
Note: The / specifies to use the data set name on the line where the command is entered.
User catalogs
The difference between the master catalog and the user catalogs, as we saw, is in the
function. User catalogs should be used to contain information about your installation's
cataloged data sets other than SYS1 data sets. There are no set rules as to how many you
should have or how large they should be; it depends entirely on your environment. Cataloging
data sets for two unrelated applications in the same catalog creates a single point of failure
for them that otherwise may not exist. An assessment of the impact of an outage of a given
catalog may help determine whether it is too big or would impact too many different applications.
[Figure 6-6: Using aliases - a job's DD statements reference PAY.D1, PAY.D2, DEPT1.VAC, and DEPT2.VAC; the master catalog (MCAT) contains aliases PAY (related to UCAT1), DEPT1, and DEPT2 (both related to UCAT2); PAY.D1 and PAY.D2 are cataloged in UCAT1, and DEPT1.VAC and DEPT2.VAC are cataloged in UCAT2]
Using aliases
Aliases are used to tell catalog management which user catalog your data set is cataloged in.
First, you place a pointer to a user catalog in the master catalog through the IDCAMS
DEFINE USERCATALOG command. Then, you define an appropriate alias name for a user catalog in the
master catalog. Next, match the high-level qualifier (HLQ) of your data set with the alias. This
identifies the appropriate user catalog to be used to satisfy the request.
In Figure 6-6, all data sets with an HLQ of PAY have their information in the user catalog
UCAT1 because in the master catalog there is an alias PAY pointing to UCAT1.
The data sets with an HLQ of DEPT1 and DEPT2 respectively have their information in the
user catalog UCAT2 because in the master catalog there are aliases DEPT1 and DEPT2
pointing to UCAT2.
Note: Aliases can also be used with non-VSAM data sets in order to create alternate
names for the same data set. Those aliases are not related to a user catalog.
To define an alias, use the IDCAMS command DEFINE ALIAS. An example is in “Defining a
catalog and its aliases” on page 316.
The multilevel alias facility allows aliases of more than one qualifier (up to four) to be used
to select a catalog. However, the multilevel alias facility should only be used when a better
solution cannot be found. The need for the multilevel alias facility may indicate problems
with your data set naming conventions.
For more information about the multilevel alias facility refer to z/OS DFSMS: Managing
Catalogs, SC26-7409.
[Figure: standard search order for a LOCATE request - if a STEPCAT is present, the data set is searched for there first; if it is not found (or there is no STEPCAT), the search continues with the JOBCAT, if present]
You can use RACF to prevent the use of the CATALOG parameter and restrict the ability to
define data sets in the master catalog.
Note: For SMS-managed data sets, JOBCAT and STEPCAT DD statements are not
allowed and cause a job failure. They are not recommended even for non-SMS data
sets, because they may cause conflicting catalog information. Therefore, do not use them,
and remember that they are phased out as of z/OS V1R7.
To use an alias to identify the catalog to be searched, the data set must have more than one
data set qualifier.
For information about the catalog standard search order also refer to z/OS DFSMS:
Managing Catalogs, SC26-7409.
Defining a catalog
You can use IDCAMS to define and maintain catalogs. See also “Access method services
(IDCAMS)” on page 135. Defining a master catalog or user catalog is basically the same.
Use the access method services command DEFINE USERCATALOG ICFCATALOG to define the
basic catalog structure (BCS) of an ICF catalog. Using this command you do not specify
whether you want to create a user or a master catalog. How to identify the master catalog to
the system is described in “Catalogs by function” on page 310.
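As a minimal sketch, assuming the catalog name OTTO.CATALOG.TEST used in the alias examples later in this chapter and a hypothetical volume VOL001, the SYSIN for such a definition might look like this:
  DEFINE USERCATALOG -
    (NAME(OTTO.CATALOG.TEST) -
     ICFCATALOG -
     VOLUME(VOL001) -
     CYLINDERS(1 1))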
A connector entry to this user catalog is created in the master catalog, as the listing in
Figure 6-10 shows.
The attributes of the user catalog are not defined in the master catalog. They are described in
the user catalog itself and its VVDS entry. This is called the self-describing record. The
self-describing record is given a key of binary zeros to ensure it is the first record in the catalog.
There are no associations (aliases) yet for this user catalog. To create any, you need to
define aliases.
To define a volume catalog (for tapes), use the parameter VOLCATALOG instead of ICFCATALOG.
See z/OS DFSMS Access Method Services for Catalogs, SC26-7394, for more detail.
If you do not want to change or add any attributes, you need only supply the entry name of
the object being defined and the MODEL parameter. When you define a BCS, you must also
specify the volume and space information for the BCS.
For further information about using a model refer to z/OS DFSMS: Managing Catalogs,
SC26-7409.
Defining aliases
In order to use a catalog, the system must be able to determine which data sets should be
defined in that catalog. The simplest way to accomplish this is to define aliases in the master
catalog for the user catalog. Before defining an alias, carefully consider the effect the new
alias has on old data sets. A poorly chosen alias could make some data sets inaccessible.
You can define aliases for the user catalog in the same job in which you define the catalog by
including DEFINE ALIAS commands after the DEFINE USERCATALOG command. You can use
conditional operators to ensure the aliases are only defined if the catalog is successfully
defined. After the catalog is defined, you can add new aliases or delete old aliases.
You cannot define an alias if a data set cataloged in the master catalog has the same
high-level qualifier as the alias. The DEFINE ALIAS command fails with a "Duplicate data
set name" error. For example, if a catalog is named TESTE.TESTSYS.ICFCAT, you cannot
define the alias TESTE for any catalog.
Use the sample SYSIN for an IDCAMS job in Figure 6-11 to define aliases TEST1 and
TEST2.
DEFINE ALIAS -
(NAME(TEST1) -
RELATE(OTTO.CATALOG.TEST))
DEFINE ALIAS -
(NAME(TEST2) -
RELATE(OTTO.CATALOG.TEST))
These definitions result in the following entries in the master catalog (Figure 6-12).
Both aliases have an association to the newly defined user catalog. If you now create a new
data set with an HLQ of TEST1 or TEST2, its entry will be directed to the new user catalog.
Also, the listing of the user catalog connector now shows both aliases (Figure 6-13).
Tip: Convert all intra-sysplex reserves for catalogs into global ENQs through the GRS reserve conversion RNL.
Independent of the number of catalogs, use the virtual lookaside facility (VLF) for buffering
the user catalog CIs. The master catalog CIs are naturally buffered in the catalog address
space (CAS). Multiple catalogs can reduce the impact of the loss of a catalog by:
Reducing the time necessary to recreate any given catalog
Allowing multiple catalog recovery jobs to be in process at the same time
Recovery from a pack failure is dependent on the total amount of catalog information on a
volume—regardless of whether this information is stored in one or many catalogs.
When you are using multiple user catalogs, consider grouping data sets under different
high-level qualifiers. You can then spread them over multiple catalogs by
defining aliases for the different catalogs.
[Figure: a catalog on a shared device accessed by two MVS systems, each maintaining its own cache of catalog records]
Note: The device must be defined as shared to all systems that access it.
If some systems have the device defined as shared and some do not, catalog corruption will
occur. Check with your system programmer to determine shared volumes. Note that it is not
necessary that the catalog actually be shared between systems; the catalog address space
assumes it is shared if it meets the criteria stated. All VVDSs are defined as shared. Tape
volume catalogs can be shared in the same way as other catalogs.
By default, catalogs are defined with SHAREOPTIONS(3 4). You can specify that a catalog is
not to be shared by defining the catalog with SHAREOPTIONS(3 3). Only define a catalog as
unshared if you are certain it will not be shared. Place unshared catalogs on volumes that
have been initialized as unshared. Catalogs that are defined as unshared and that reside on
shared volumes will become damaged if referred to by another system.
Attention: To avoid catalog corruption, define a catalog volume on a shared UCB and set
catalog SHAREOPTIONS to (3 4) on all systems sharing a catalog.
Using SHAREOPTIONS 3 means that VSAM does not issue the ENQ SYSVSAM SYSTEMS
for the catalog; SHAREOPTIONS 4 means that the VSAM buffers need to be refreshed.
You can check whether a catalog is shared by running the operator command:
MODIFY CATALOG,ALLOCATED
There is a flag in the catalog that indicates whether the catalog is shared.
If a catalog is not really shared with another system, move the catalog to an unshared device
or alter its SHAREOPTIONS to (3 3). To prevent potential catalog damage, never place a
catalog with SHAREOPTIONS (3 3) on a shared device.
There is one VVR in a shared catalog that is used as a log by catalog management on every
system accessing the catalog. This log is used to guarantee the coherency of the catalog
buffers in each z/OS system.
The checking also affects performance because in order to maintain the integrity, for every
catalog access, a special VVR in the shared catalog must be read before using the cached
version of the BCS record. This access implies a DASD reserve and I/O operations.
To avoid I/O operations to read the VVR you can use enhanced catalog sharing (ECS). For
information about ECS, see “Enhanced catalog sharing” on page 351.
Checking also ensures that the control blocks for the catalog in the CAS are updated. This
occurs in the event the catalog has been extended, or otherwise altered from another system.
This checking maintains data integrity.
You can use the LISTCAT output to monitor VSAM data sets including catalogs. The statistics
and attributes listed can be used to help determine if you should reorganize, recreate, or
otherwise alter a VSAM data set to improve performance or avoid problems.
The LISTCAT command can be used in many variations to extract information about a
particular entry in the catalog. It extracts the data from the BCS and VVDS.
LISTCAT examples
Here are some LISTCAT examples you should know to monitor your catalogs:
List all ALIAS entries in the master catalog:
LISTCAT ALIAS CAT(master.catalog.name)
This command provides a list of all aliases that are currently defined in your master
catalog. If you need information only about one specific alias, use the keyword
ENTRY(aliasname) and specify ALL to get detailed information. For a sample output of this
command see Figure 6-12 on page 318.
Since z/OS V1R7, an attempt to define a page data set in a catalog not pointed to by the
running master causes an IDCAMS message, instead of it being executed and causing later
problems.
Sometimes in your installation you need to delete an alias, delete only a catalog entry, or you
may have orphaned catalog information on your DASD and need to delete a VSAM or a
non-VSAM entry.
Delete aliases
To simply delete an alias, use the IDCAMS DELETE ALIAS command, specifying the alias you
are deleting. To delete all the aliases for a catalog, use EXPORT DISCONNECT to disconnect the
catalog. The aliases are deleted when the catalog is disconnected. When you again connect
the catalog (using IMPORT CONNECT) the aliases remain deleted.
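A hedged sketch of both variants, reusing the alias TEST2 and the catalog OTTO.CATALOG.TEST from the earlier examples:
  /* Delete a single alias                                  */
  DELETE (TEST2) ALIAS
  /* Disconnect the catalog, which also deletes its aliases */
  EXPORT OTTO.CATALOG.TEST DISCONNECT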
Figure 6-20 Delete the VVDS entry for a non-VSAM data set
Caution: When deleting a VSAM KSDS with DELETE VVR, you must issue a DELETE VVR for
each of its components, the data component and the index component.
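A hedged sketch, assuming a hypothetical KSDS PROD.KSDS1 whose VVDS records on volume VOL001 are orphaned and a user catalog UCAT1; the FILE DD identifies the volume that holds the VVRs, and the exact parameters depend on your situation:
//DELVVR   EXEC PGM=IDCAMS
//DD1      DD  UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE (PROD.KSDS1.DATA) FILE(DD1) VVR CATALOG(UCAT1)
  DELETE (PROD.KSDS1.INDEX) FILE(DD1) VVR CATALOG(UCAT1)
/*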
The DELETE command with keyword RECOVERY removes the GDG base catalog entry from the
catalog.
Delete an ICF catalog
When deleting an ICF catalog, you must take care to specify whether you want to delete only the
catalog, or if you want to delete all associated data. The following examples show how to
delete a catalog.
Delete with recovery
In Figure 6-22, a user catalog is deleted in preparation for replacing it with an imported
backup copy. The VVDS and VTOC entries for objects defined in the catalog are not
deleted and the data sets are not scratched, as shown in the JCL.
RECOVERY specifies that only the catalog data set is deleted, without deleting the objects
defined in the catalog.
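A minimal sketch of such a delete, reusing the hypothetical catalog name from the earlier examples:
  DELETE OTTO.CATALOG.TEST -
         USERCATALOG -
         RECOVERY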
Delete an empty user catalog
In Figure 6-23 on page 327, a user catalog is deleted. A user catalog can be deleted when
it is empty; that is, when there are no objects cataloged in it other than the catalog's
volume. If the catalog is not empty, it cannot be deleted unless the FORCE parameter is
specified.
Attention: The FORCE parameter deletes all data sets in the catalog. The DELETE command
deletes both the catalog and the catalog's user catalog connector entry in the master
catalog.
For more information about the DELETE command, refer to z/OS DFSMS Access Method
Services for Catalogs, SC26-7394.
Where:
SCRATCH The non-VSAM data set being deleted from the catalog is to be removed from
the VTOC of the volume on which it resides. When SCRATCH is specified for
a cluster, alternate index, page space, or data space, the VTOC entries for
the volumes involved are updated to reflect the deletion of the object.
NOSCRATCH The non-VSAM data set being deleted from the catalog is to remain in the
VTOC of the volume on which it resides or it has already been scratched from
the VTOC. When NOSCRATCH is specified for a cluster, page space,
alternate index, or data space, the VTOC entries for the volumes involved are
not updated.
To execute the DELETE command against a migrated data set, you must have RACF group
ARCCATGP defined. In general to allow certain authorized users to perform these operations
on migrated data sets without recalling them, perform the following steps:
1. Define a RACF catalog maintenance group named ARCCATGP.
ADDGROUP (ARCCATGP)
2. Connect the desired users to that group.
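A sketch of the RACF command for this step, assuming a hypothetical user ID USERA:
  CONNECT (USERA) GROUP(ARCCATGP)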
Only when such a user is logged on under group ARCCATGP does DFSMShsm bypass the
automatic recall for UNCATALOG, RECATALOG, and DELETE/NOSCRATCH requests for
migrated data sets. For example, the following LOGON command demonstrates starting a
TSO session under ARCCATGP:
LOGON userid/password GROUP(ARCCATGP)
For further information about ARCCATGP group, refer to z/OS DFSMShsm Implementation
and Customization Guide, SC35-0418.
[Figure: backup methods - a BCS can be backed up with the IDCAMS EXPORT command, the DFSMSdss logical DUMP command, or the DFSMShsm BACKDS command; a VVDS is backed up by dumping the full volume or by backing up all data sets described in the VVDS]
Backup procedures
The two parts of an ICF catalog, the BCS and the VVDS, require different backup techniques.
The BCS can be backed up like any other data set, whereas the VVDS should only be backed
up as part of a volume dump. The entries in the VVDS and VTOC are backed up when the
data sets they describe are:
Exported with IDCAMS
Logically dumped with DFSMSdss
Backed up with DFSMShsm
Important: Because catalogs are essential system data sets, it is important that you
maintain backup copies. The more recent and accurate a backup copy, the less impact a
catalog outage will have on your installation.
Backing up a BCS
To back up a BCS you can use one of the following methods:
The access method services EXPORT command
The DFSMSdss logical DUMP command
The DFSMShsm BACKDS command
The copy created by these utilities is a portable sequential data set that can be stored on a
tape or direct access device, which can be of a different device type than the one containing
the source catalog.
When these commands are used to back up a BCS, the aliases of the catalog are saved in
the backup copy. The source catalog is not deleted, and remains as a fully functional catalog.
The relationships between the BCS and VVDSs are unchanged.
You cannot permanently export a catalog by using the PERMANENT parameter of EXPORT.
The TEMPORARY option is used even if you specify PERMANENT or allow it to default.
Figure 6-25 shows you an example for an IDCAMS EXPORT.
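A comparable hedged sketch, assuming the hypothetical catalog name OTTO.CATALOG.TEST and a backup data set named CATBACK.EXPORT1:
//EXPORT   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=CATBACK.EXPORT1,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))
//SYSIN    DD *
  EXPORT OTTO.CATALOG.TEST -
         OUTFILE(BACKUP) -
         TEMPORARY
/*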
Note: You cannot use IDCAMS REPRO or other copying commands to create and recover
BCS backups.
You should also make periodic volume dumps of the master catalog's volume. This dump can
later be used by the stand-alone version of DFSMSdss to restore the master catalog if you
cannot access the volume from another system.
Backing up a VVDS
The VVDS should not be backed up as a data set to provide for recovery. To back up the
VVDS, back up the volume containing the VVDS, or back up all data sets described in the
VVDS (all VSAM and SMS-managed data sets). If the VVDS ever needs to be recovered,
recover the entire volume, or all the data sets described in the VVDS.
You can use either DFSMSdss or DFSMShsm to back up and recover a volume or individual
data sets on the volume.
[Figure: recovering a user catalog - the catalog is locked and an IMPORT of the backup copy is then run against it]
Recovery procedures
Before you run the recovery procedures mentioned in this section, you should also read 6.22,
“Fixing temporary catalog problems” on page 349.
Normally, a BCS is recovered separately from a VVDS. A VVDS usually does not need to be
recovered, even if an associated BCS is recovered. However, if you need to recover a VVDS,
and a BCS resides on the VVDS’s volume, you must recover the BCS as well. If possible, you
should export the BCS before recovering the volume, and then recover the BCS from the
exported copy. This ensures a current BCS.
Before recovering a BCS or VVDS, try to recover single damaged records. If damaged
records can be rebuilt, you can avoid a full recovery.
Single BCS records can be recovered using the IDCAMS DELETE and DEFINE commands as
described in 6.11, “Defining and deleting data sets” on page 324. Single VVDS and VTOC
records can be recovered using the IDCAMS DELETE command and by recovering the data
sets on the volume.
The way you recover a BCS depends on how it was saved (see 6.12, “Backup procedures” on
page 329). When you recover a BCS, you do not need to delete and redefine the target
catalog unless you want to change the catalog's size or other characteristics, or unless the
BCS is damaged in such a way as to prevent the usual recovery.
Lock the BCS before you start recovery so that no one else has access to it while you recover
the BCS. If you do not restrict access to the catalog, users might be able to update the
catalog during recovery or maintenance and create a data integrity exposure. The catalog
also will be unavailable to any system that shares the catalog. You cannot lock a master
catalog.
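As a hedged sketch of a recovery from an EXPORT copy, assuming the catalog and backup names used above and a BACKUP DD that points to the portable copy:
  /* Prevent updates while the catalog is being recovered       */
  ALTER OTTO.CATALOG.TEST LOCK
  /* Restore the catalog and its aliases from the portable copy */
  IMPORT INFILE(BACKUP) -
         OUTDATASET(OTTO.CATALOG.TEST) -
         ALIAS
  /* Make the catalog available again after missing entries     */
  /* have been recataloged                                      */
  ALTER OTTO.CATALOG.TEST UNLOCK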
After you recover the catalog, update the BCS with any changes which have occurred since
the last backup, for example, by running IDCAMS DEFINE RECATALOG for all missing entries.
You can use the access method services DIAGNOSE command to identify certain
unsynchronized entries.
For further information about recovery procedures, see z/OS DFSMS: Managing Catalogs,
SC26-7409. For information about the IDCAMS facility, see z/OS DFSMS Access Method
Services for Catalogs, SC26-7394.
[Figure: relationship between a BCS (UCAT1) and the VVDSs - the BCS index and data components contain records for DSNAME1 through DSNAME5 with their volumes, and the VVDSs on VOL001 and VOL002 contain records pointing back to the owning catalogs]
VSAM errors
There are two kinds of VSAM errors that can happen to your BCS or VVDS:
Physical errors
The records on the DASD volume can no longer be read or written correctly, for example
because of an I/O error.
Logical errors
The records on the DASD volume still have valid physical characteristics like record size
or CI size, but the VSAM information in those records is wrong, such as pointers from one
record to another or the end-of-file information.
When errors in the VSAM structure occur, they are in most cases logical errors for the BCS.
Because the VVDS is an entry-sequenced data set (ESDS), it has no index component.
Logical errors for an ESDS are unlikely.
You can use the IDCAMS EXAMINE command to analyze the structure of the BCS. As
explained previously, the BCS is a VSAM key-sequenced data set (KSDS). Before running the
EXAMINE, you should run an IDCAMS VERIFY to make sure that the VSAM information is
current, and ALTER LOCK the catalog to prevent update from others while you are inspecting it.
With the parameter INDEXTEST you analyze the integrity of the index. With parameter
DATATEST you analyze the data component. If only the index test shows errors, you might
have the chance to recover the BCS by just running an EXPORT/IMPORT to rebuild the index. If
there is an error in the data component, you probably have to recover the BCS as described
in “Recovery procedures” on page 331.
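A hedged sketch of such an inspection, using the hypothetical catalog name from the earlier examples:
  /* Make sure the VSAM end-of-file information is current  */
  VERIFY DATASET(OTTO.CATALOG.TEST)
  /* Prevent updates while the structure is being inspected */
  ALTER OTTO.CATALOG.TEST LOCK
  /* Check the index structure only                         */
  EXAMINE NAME(OTTO.CATALOG.TEST) INDEXTEST NODATATEST
  /* Check the data component                               */
  EXAMINE NAME(OTTO.CATALOG.TEST) NOINDEXTEST DATATEST
  ALTER OTTO.CATALOG.TEST UNLOCK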
Catalog errors
By catalog errors we mean errors in the catalog information of a BCS or VVDS, or
unsynchronized information between the BCS and VVDS. The VSAM structure of the BCS is
still valid, that is, an EXAMINE returns no errors.
Catalog errors can make a data set inaccessible. Sometimes it is sufficient to delete the
affected entries, sometimes the catalog needs to be recovered (see “Recovery procedures”
on page 331).
You can use the IDCAMS DIAGNOSE command to validate the contents of a BCS or VVDS.
You can use this command to check a single BCS or VVDS and to compare the information
between a BCS and multiple VVDSs.
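Hedged sketches of both uses (the catalog and VVDS names are hypothetical):
  /* Validate the contents of a BCS               */
  DIAGNOSE ICFCATALOG INDATASET(OTTO.CATALOG.TEST)
  /* Validate a VVDS and compare it against a BCS */
  DIAGNOSE VVDS INDATASET(SYS1.VVDS.VVOL001) COMPAREDS(OTTO.CATALOG.TEST)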
For various DIAGNOSE examples see z/OS DFSMS Access Method Services for Catalogs,
SC26-7394.
[Figure: protecting catalogs - RACF profiles in the FACILITY class whose names begin with STGADMIN, such as STGADMIN.IDC.DIAGNOSE.CATALOG, STGADMIN.IDC.DIAGNOSE.VVDS, and STGADMIN.IDC.EXAMINE.DATASET, control storage administration functions against catalogs and data sets]
Protecting catalogs
The protection of data includes:
Data security: the safety of data from theft or intentional destruction
Data integrity: the safety of data from accidental loss or destruction
Data can be protected either indirectly, by preventing access to programs that can be used to
modify data, or directly, by preventing access to the data itself. Catalogs and cataloged data
sets can be protected in both ways.
To protect your catalogs and cataloged data, use the Resource Access Control Facility
(RACF) or a similar product.
For information about using APF for program authorization, see z/OS MVS Programming:
Authorized Assembler Services Guide, SA22-7608.
All IDCAMS load modules are contained in SYS1.LINKLIB, and the root segment load
module (IDCAMS) is link-edited with the SETCODE AC(1) attribute. These two
characteristics ensure that access method services executes with APF authorization.
To open a catalog as a data set, you must have ALTER authority and APF authorization.
When defining an SMS-managed data set, the system only checks to make sure the user has
authority to the data set name and SMS classes and groups. The system selects the
appropriate catalog, without checking the user's authority to the catalog. You can define a
data set if you have ALTER or OPERATIONS authority to the applicable data set profile.
Deleting any type of RACF-protected entry from a RACF-protected catalog requires ALTER
authorization to the catalog or to the data set profile protecting the entry being deleted. If a
non-VSAM data set is SMS-managed, RACF does not check for DASDVOL authority. If a
non-VSAM, non-SMS-managed data set is being scratched, DASDVOL authority is also
checked.
For ALTER RENAME, the user is required to have the following two types of authority:
ALTER authority to either the data set or the catalog
ALTER authority to the new name (generic profile) or CREATE authority to the group
Be sure that RACF profiles are correct after you use REPRO MERGECAT or CNVTCAT on a
catalog that uses RACF profiles. If the target and source catalogs are on the same volume,
the RACF profiles remain unchanged.
Tape data sets defined in an integrated catalog facility catalog can be protected by:
Controlling access to the tape volumes
Controlling access to the individual data sets on the tape volumes
Profiles
To control the ability to perform functions associated with storage management, define
profiles in the FACILITY class whose profile names begin with STGADMIN (storage
administration). For a complete list of STGADMIN profiles, see z/OS DFSMSdfp Storage
Administration Reference, SC26-7402. Examples of some profiles are:
STGADMIN.IDC.DIAGNOSE.CATALOG
STGADMIN.IDC.DIAGNOSE.VVDS
STGADMIN.IDC.EXAMINE.DATASET
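A hedged sketch of protecting one of these functions with RACF, assuming a hypothetical storage administration group named STGADMIN:
  RDEFINE FACILITY STGADMIN.IDC.DIAGNOSE.CATALOG UACC(NONE)
  PERMIT STGADMIN.IDC.DIAGNOSE.CATALOG CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
  SETROPTS RACLIST(FACILITY) REFRESH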
Merging catalogs
You might find it beneficial to merge catalogs if you have many small or seldom-used
catalogs. An excessive number of catalogs can complicate recovery procedures and waste
resources such as CAS storage, tape mounts for backups, and system time performing
backups.
Merging catalogs is accomplished in much the same way as splitting catalogs (see “Splitting a
catalog” on page 339). The only difference between splitting catalogs and merging them is
that in merging, you want all the entries in a catalog to be moved to a different catalog, so that
you can delete the obsolete catalog.
Use the following steps to merge two integrated catalog facility catalogs:
1. Use ALTER LOCK to lock both catalogs.
2. Use LISTCAT to list the aliases for the catalog you intend to delete after the merger:
//JOB ...
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD1 DD DSN=listcat.output,DISP=(NEW,CATLG),
// SPACE=(TRK,(10,10)),
// DCB=(RECFM=VBA,LRECL=125,BLKSIZE=629)
//SYSIN DD *
LISTC ENT(catalog.name) ALL -
OUTFILE(DD1)
/*
Important: This step can take a long time to complete. If the MERGECAT job is cancelled
for some reason, all merged entries so far remain in the target catalog. They are not
backed out in case the job fails. See also “Recovering from a REPRO MERGECAT
Failure” in z/OS DFSMS: Managing Catalogs, SC26-7409.
Since z/OS V1R7, REPRO MERGECAT provides the capability to copy a range of records from
one user catalog to another. It allows recovery of a broken catalog by enabling you to copy
from one specific key to another specific key just before where the break occurred and
then recover data beginning after the break. Refer to the FROMKEY/TOKEY parameters in the
sketch that follows.
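A hedged sketch of the SYSIN for such a merge, assuming hypothetical catalog names UCAT.TODELETE and UCAT.TOKEEP; the optional FROMKEY/TOKEY parameters limit the copy to a key range:
  REPRO INDATASET(UCAT.TODELETE) -
        OUTDATASET(UCAT.TOKEEP) -
        MERGECAT -
        FROMKEY(AAA) TOKEY(MZZ)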
5. Use the listing created in step 2 to create a sequence of DELETE ALIAS and DEFINE ALIAS
commands to delete the aliases of the obsolete catalog, and to redefine the aliases as
aliases of the catalog you are keeping.
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the
changed catalogs and uses a different master catalog.
6. Use DELETE USERCATALOG to delete the obsolete catalog. Specify RECOVERY on the
DELETE command.
7. If your catalog is shared, run the EXPORT DISCONNECT command on each shared system to
remove unwanted user catalog connector entries.
8. Use ALTER UNLOCK to unlock the remaining catalog.
You can also merge entries from one tape volume catalog to another using REPRO MERGECAT.
REPRO retrieves tape library or tape volume entries and redefines them in a target tape volume
catalog. In this case, VOLUMEENTRIES needs to be used to correctly filter the appropriate
entries. The LEVEL parameter is not allowed when merging tape volume catalogs.
Splitting catalogs
You can split a catalog to create two catalogs or to move a group of catalog entries if you
determine that a catalog is either unacceptably large or that it contains too many entries for
critical data sets.
If the catalog is unacceptably large (a catalog failure would leave too many entries
inaccessible), then you can split the catalog into two catalogs. If the catalog is of an
acceptable size but contains entries for too many critical data sets, then you can simply move
entries from one catalog to another.
To split a catalog or move a group of entries, use the access method services REPRO MERGECAT
command. Use the following steps to split a catalog or to move a group of entries:
1. Use ALTER LOCK to lock the catalog. If you are moving entries to an existing catalog, lock it
as well.
2. If you are splitting a catalog, define a new catalog with DEFINE USERCATALOG LOCK (see also
“Defining a catalog and its aliases” on page 316).
3. Use LISTCAT to obtain a listing of the catalog aliases you are moving to the new catalog.
Use the OUTFILE parameter to define a data set to contain the output listing (see also
“Merging catalogs” on page 337).
4. Use EXAMINE and DIAGNOSE to ensure that the catalogs are error-free. Fix any errors
indicated (see also “Checking the integrity on an ICF structure” on page 333).
Important: This step can take a long time to complete. If the MERGECAT job is cancelled
for some reason, all merged entries so far will remain in the target catalog. They are not
backed out in case the job fails. See also “Recovering from a REPRO MERGECAT
Failure” in z/OS DFSMS: Managing Catalogs, SC26-7409.
6. Use the listing created in step 3 to create a sequence of DELETE ALIAS and DEFINE ALIAS
commands for each alias. These commands delete the alias from the original catalog, and
redefine them as aliases for the catalog which now contains entries belonging to that alias
name.
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the
changed catalogs and uses a different master catalog.
7. Unlock both catalogs using ALTER UNLOCK.
Catalog performance
Performance should not be your main consideration when you define catalogs. It is more
important to create a catalog configuration that allows easy recovery of damaged catalogs
with the least amount of system disruption. However, there are several options you can
choose to improve catalog performance without affecting the recoverability of a catalog.
Remember that in an online environment, such as CICS/DB2, the number of data set
allocations is minimal and consequently the catalog activity is low.
Buffering catalogs
The simplest method of improving catalog performance is to use a buffer to maintain catalog
records within the catalog address space (CAS) private area or in a VLF data space.
Two types of buffer are available exclusively for catalogs. The in-storage catalog (ISC) buffer
is contained within the catalog address space (CAS). The catalog data space buffer (CDSC)
is separate from CAS and uses the z/OS VLF component, which stores the buffered records
in a data space. Both types of buffer keep catalog records in storage, which avoids the I/Os
that would otherwise be necessary to read the records from DASD.
There are several things you need to take into consideration when deciding which kind of
buffer to use for which catalog. See z/OS DFSMS: Managing Catalogs, SC26-7409 for more
information about buffering.
Another kind of caching is using Enhanced Catalog Sharing to avoid I/Os to read the catalog
VVR. Refer to “Enhanced catalog sharing” on page 351 for information.
Master catalog
If the master catalog only contains entries for catalogs, catalog aliases, and system data sets,
the entire master catalog is read into main storage during system initialization. Because the
master catalog, if properly used, is rarely updated, the performance of the master catalog is
not appreciably affected by I/O requirements. For that reason, keep the master catalog small
and do not define user data sets into it.
For more information about these values see z/OS DFSMS Access Method Services for
Catalogs, SC26-7394.
Since z/OS V1R7, catalog auto-tuning (run every 10 minutes) automatically and temporarily
modifies the number of data buffers, index buffers, and VSAM strings for catalogs. When
any modification occurs, message IEC391I is issued with the new values. This function
is enabled by default, but can be disabled through F CATALOG,DISABLE(AUTOTUNING).
If the catalog is shared only within one GRSplex, you should convert the SYSIGGV2 resource
to a global enqueue to avoid reserves on the volume on which the catalog resides. If you do
not convert SYSIGGV2, you can experience ENQ contention on those volumes and can even
run into deadlock situations.
For more information refer to z/OS MVS Planning: Global Resource Serialization,
SA22-7600.
F CATALOG,REPORT,PERFORMANCE command
This MODIFY command is very important for performance analysis. Become familiar with
the meaning of each reported counter in order to understand what can be done to improve
catalog performance. These counters can be zeroed through the use of a reset command.
Also the command F CATALOG,REPORT,CACHE produces rich information about the use
of catalog buffering.
As soon as a user requests a catalog function (for example, to locate or define a data set), the
CAS gets control to handle the request. When it has finished, it returns the requested data to
the user. A catalog task which handles a single user request is called a service task. To each
user request a service task is assigned. The minimum number of available service tasks is
specified in the SYSCATxx member of SYS1.NUCLEUS (or the LOADxx member of
SYS1.PARMLIB). A table called the CRT keeps track of these service tasks.
The CAS contains all information necessary to handle a catalog request, like control block
information about all open catalogs, alias tables, and buffered BCS records.
During the initialization of an MVS system, all user catalog names identified in the master
catalog, their aliases, and their associated volume serial numbers are placed in tables in
CAS.
You can use the MODIFY CATALOG operator command to work with the catalog address space.
See also “Working with the catalog address space” on page 347.
Since z/OS V1R8, the maximum number of parallel catalog requests is 999, as defined in the
SYSCATxx member of SYS1.NUCLEUS (or the LOADxx member). Previously it was 180.
Never use RESTART to refresh catalog or VVDS control blocks or to change catalog
characteristics. Restarting CAS is a drastic procedure, and if CAS cannot restart, you will
have to IPL the system.
When you issue MODIFY CATALOG,RESTART, the CAS mother task is abended with abend code
81A, and any catalog requests in process at the time are redriven.
The restart of CAS in a new address space should be transparent to all users. However, even
when all requests are redriven successfully and receive a return code of zero, the system
might produce indicative dumps. There is no way to suppress these indicative dumps.
For a discussion about the entire functionality of the MODIFY CATALOG command, refer to z/OS
DFSMS: Managing Catalogs, SC26-7409.
You can use the following commands to close or unallocate a BCS or VVDS in the catalog
address space. The next access to the BCS or VVDS reopens it and rebuilds the control
blocks.
MODIFY CATALOG,CLOSE(catalogname) - Closes the specified catalog but leaves it
allocated.
MODIFY CATALOG,UNALLOCATE(catalogname) - Unallocates a catalog; if you do not specify a
catalog name, all catalogs are unallocated.
MODIFY CATALOG,VCLOSE(volser) - Closes the VVDS for the specified volser.
MODIFY CATALOG,VUNALLOCATE - Unallocates all VVDSs; you cannot specify a volser, so try
to use VCLOSE first.
Delays or hangs can occur if the catalog needs one of these resources and it is held already
by someone else, for example by a CAS of another system. You can use the following
commands to display global resource serialization (GRS) data:
D GRS,C - Display GRS contention data for all resources, who is holding a resource, and
who is waiting.
D GRS,RES=(resourcename) - Displays information for a specific resource.
D GRS,DEV=devicenumber - Displays information about a specific device, such as whether it
is reserved by the system.
You should route these commands to all systems in the sysplex to get an overview about
hang situations.
When you have identified a catalog address space holding a resource for a long time, or the
GRS output does not show anything but you still have catalog problems, you can use the
following command to get detailed information about the catalog service tasks:
MODIFY CATALOG,LIST - Lists the currently active service tasks, their task IDs, duration, and
the job name for which the task is handling the request.
You should watch for tasks with long duration time. You can get detailed information about a
specific task by running the following command for a specific task ID:
MODIFY CATALOG,LISTJ(taskid),DETAIL - Shows you detailed information about a service
task, for example if it’s waiting for the completion of an ENQ.
When you have identified a long running task which could be in a deadlock situation with
another task (on another system), you can end and redrive the task to resolve the lockout.
The following commands help you to end a catalog service task:
MODIFY CATALOG,END(taskid),REDRIVE - End a service task and redrive it.
MODIFY CATALOG,END(taskid),NOREDRIVE - Permanently end the task without redriving.
MODIFY CATALOG,ABEND(taskid) - Abnormally end a task which could not be stopped by
using the END parameter.
You can use the FORCE parameter for these commands if the address space that the service
task is operating on behalf of has ended abnormally. Use this parameter only in this case.
You could also try to end the job for which the catalog task is processing a request.
For more information about the MODIFY CATALOG command and fixing temporary catalog
problems, refer to z/OS DFSMS: Managing Catalogs, SC26-7409.
[Figure: enhanced catalog sharing - the catalog VVR is kept in a coupling facility cache structure; the command MODIFY CATALOG,ECSHR(AUTOADD) adds eligible catalogs to ECS mode]
Most of the overhead associated with shared catalogs is eliminated if you use enhanced
catalog sharing (ECS). ECS uses a cache coupling facility structure to keep the special VVR.
The coupling facility structure (as defined in the CFRM policy) also keeps a copy of updated
records.
No I/O is necessary to read the catalog VVR in order to verify the updates. In addition, any
modifications are also kept in the coupling facility structure, thus avoiding more I/O.
ECS saves about 50 percent in elapsed time and provides a large reduction in ENQs and
reserves.
Only those catalogs that were added are shared in ECS mode. The command MODIFY
CATALOG,ECSHR(STATUS) shows you the ECS status for each catalog, if it is eligible or not, and
if it is already activated.
Restrictions
The following restrictions apply to ECS mode usage:
You cannot use ECS mode from one system and VVDS mode from another system
simultaneously to share a catalog. You will get an error message if you try this.
Attention: If you attempt to use a catalog that is currently ECS-active from a system
outside the sysplex, the request might break the catalog.
No more than 1024 catalogs can currently be shared using ECS from a single system.
All systems sharing the catalog in ECS mode must have connectivity to the same
Coupling Facility, and must be in the same global resource serialization (GRS) complex.
When you use catalogs in ECS mode, convert the resource SYSIGGV2 to a SYSTEMS
enqueue. Otherwise, the catalogs in ECS mode will be damaged.
For more information about ECS, refer to z/OS DFSMS: Managing Catalogs, SC26-7409.
For information about defining coupling facility structures, see z/OS MVS Setting Up a
Sysplex, SA22-7625.
As an extension of VSAM RLS, DFSMStvs enables any job or application that is designed for
data sharing to read-share or write-share VSAM recoverable data sets. VSAM RLS provides
a server for sharing VSAM data sets in a sysplex. VSAM RLS uses coupling-facility-based
locking and data caching to provide sysplex-scope locking and data access integrity, while
DFSMStvs adds logging, commit, and backout processing.
To understand DFSMStvs, it is necessary to first review base VSAM information and VSAM
record-level sharing (RLS).
© Copyright IBM Corp. 2004, 2005, 2007. All rights reserved. 353
7.1 VSAM share options
SHAREOPTIONS (crossregion,crosssystem)
The cross-region share options specify the amount of sharing allowed among regions within
the same system or multiple systems. Cross-system share options specify how the data set is
shared among systems. The serialization should be done by using global resource
serialization (GRS) or a similar product.
SHAREOPTIONS (1,x)
The data set can be shared by any number of users for read access (open for input), or it can
be accessed by only one user for read/write access (open for output). If the data set is open
for output by one user, a read or read/write request by another user will fail. With this option,
VSAM ensures complete data integrity for the data set. When the data set is already open for
RLS processing, any request to open the data set for non-RLS access will fail.
SHAREOPTIONS (2,x)
The data set can be shared by one user for read/write access, and by any number of users for
read access. If the data set is open for output by one user, another open for output request
will fail, whereas a request for read access will succeed. With this option, VSAM ensures write
integrity.
SHAREOPTIONS (3,x)
The data set can be opened by any number of users for read and write requests. VSAM does
not ensure any data integrity. It is the responsibility of the users to maintain data integrity by
using enqueue and dequeue macros. This setting does not allow any type of non-RLS access
while the data set is open for RLS processing.
For more information about VSAM share options, refer to z/OS DFSMS: Using Data Sets,
SC26-7410.
For more information about VSAM buffering techniques refer to 4.43, “VSAM: Buffering
modes” on page 180.
MACRF=(NSR/LSR/GSR)
The Access Method Control block (ACB) describes an open VSAM data set. A subparameter
for the ACB macro is MACRF, in which you can specify the buffering technique to be used by
VSAM. For LSR and GSR, you need to run the BLDVRP macro before opening the data set to
create the resource pool.
For information about VSAM macros, refer to z/OS DFSMS: Macro Instructions for Data Sets,
SC26-7408.
[Figure: CICS configuration before VSAM RLS - CICS application-owning regions (AORs) on System 1 through System n share VSAM data sets through file-owning regions (FORs)]
Problems
There are a couple of problems with this kind of CICS configuration:
The CICS FOR is a single point of failure.
Performance across multiple systems is not acceptable.
The configuration does not scale.
Over time, the FORs became a bottleneck as CICS environments grew more and more
complex. CICS required a solution that provides direct shared access to VSAM data sets
from multiple CICS regions.
[Figure: CICS configuration with VSAM RLS - CICS AORs on System 1 through System n access shared VSAM data sets directly through the coupling facility]
VSAM record-level sharing (RLS) is a method of access to your existing VSAM files that
provides full read and write integrity at the record level to any number of users in your parallel
sysplex.
With VSAM RLS, multiple CICS systems can directly access a shared VSAM data set,
eliminating the need to ship functions between the application-owning regions and file-owning
regions. CICS provides the logging, commit, and backout functions for VSAM recoverable
data sets. VSAM RLS provides record-level serialization and cross-system caching. CICSVR
provides a forward recovery utility.
Level of sharing
The level of sharing that is allowed between applications is determined by whether or not a
data set is recoverable. For example:
Both CICS and non-CICS jobs can have concurrent read or write access to
nonrecoverable data sets. There is no coordination between CICS and non-CICS, so data
integrity can be compromised.
Coupling facility
The coupling facility (CF) is a shareable storage medium. It is licensed internal code (LIC)
running in a special type of PR/SM™ logical partition (LPAR) in certain zSeries and S/390
processors. It can be shared by the systems in one sysplex only. A CF makes data sharing
possible by allowing data to be accessed throughout a sysplex with assurance that the data
will not be corrupted and that the data will be consistent among all sharing users.
VSAM RLS uses a coupling facility to perform data-set-level locking, record locking, and data
caching. VSAM RLS uses the conditional write and cross-invalidate functions of the coupling
facility cache structure, thereby avoiding the need for control interval (CI) level locking.
VSAM RLS uses the coupling facility caches as store-through caches. When a control interval
of data is written, it is written to both the coupling facility cache and the direct access storage
device (DASD). This ensures that problems occurring with a coupling facility cache do not
result in the loss of VSAM data.
VSAM RLS also supports access to a data set through an alternate index, but it does not
support opening an alternate index directly in RLS mode. Also, VSAM RLS does not support
access through an alternate index to data stored under z/OS UNIX System Services.
Extended format, extended addressability, and spanned data sets are supported with VSAM
RLS. Compression is also supported.
Keyrange data sets and the IMBED attribute for a KSDS are obsolete. You cannot define new
data sets as keyrange or with an imbedded index anymore. However, there still might be old
data sets with these attributes in your installation.
Exception: SHAREOPTIONS(2,x)
For non-RLS access, SHAREOPTIONS(2,x) are handled as always. One user can have the
data set open for read/write access and multiple users can have it open for read access only.
VSAM does not provide data integrity for the readers.
If the data set is open for RLS access, non-RLS opens for read are possible. These are the
only share options, where a non-RLS request to open the data set will not fail if the data set is
already open for RLS processing. VSAM does not provide data integrity for the non-RLS
readers.
Non-CICS access
RLS access from batch jobs to data sets that are open by CICS depends on whether the data
set is recoverable or not. For recoverable data sets, non-CICS access from other applications
(that do not act as recoverable resource manager) is not allowed.
See “VSAM RLS/CICS data set recovery” on page 368 for details.
[Figure 7-8: buffering under VSAM RLS - CICS (read/write) and batch (read-only) users on System 1 and System 2; CIs 1 through 4 of a VSAM data set are buffered in the SMSVSAM data spaces and in the coupling facility cache structure]
MACRF=RLS
The first request for a record after data set open for RLS processing will cause an I/O
operation to read in the CI that contains this record. A copy of the CI is stored into the cache
structure of the coupling facility and in the buffer pool in the data space.
Buffer coherency
Buffer coherency is maintained through the use of coupling facility (CF) cache structures and
the XCF cross-invalidation function. For the example in Figure 7-8, that means:
1. System 1 opens the VSAM data set for read/write processing.
2. System 1 reads in CI1 and CI3 from DASD; both CIs are stored in the cache structure in
the coupling facility.
3. System 2 opens the data set for read processing.
For further information about cross-invalidation refer to z/OS MVS Programming: Sysplex
Services Guide, SA22-7617.
The VSAM RLS coupling facility structures are discussed in more detail in “Coupling facility
structures for RLS sharing” on page 373.
[Figure 7-9: VSAM RLS record locking - within one CI, CICS1.Tran1 holds an exclusive lock on record B and CICS2.Tran2 holds an exclusive lock on record E; CICS3.Tran3 waits for a shared lock on record B, while a GET NRI request for record B proceeds without a lock]
VSAM RLS locks
The type of read integrity is specified either in the ACB macro or in the JCL DD statement:
ACB RLSREAD=NRI/CR/CRE
//DD1 DD DSN=datasetname,RLS=NRI/CR/CRE
Example
In our example in Figure 7-9 we have the following situation:
1. CICS transaction Tran1 gets an exclusive lock on Record B for update processing.
2. Transaction Tran2 gets an exclusive lock for update processing on Record E, which is in
the same CI.
3. Transaction Tran3 needs a shared lock also on Record B for consistent read; it has to wait
until the exclusive lock by Tran1 is released.
4. Transaction Tran4 does a dirty read (NRI); it doesn’t have to wait because in that case, no
lock is necessary.
With NRI, Tran4 can read the record even though it is held exclusively by Tran1. There is no
read integrity for Tran4.
Coupling Facility
RLS locking is performed in the coupling facility through the use of a CF lock structure
(IGWLOCK00) and the XES locking services.
Contention
When contention occurs on a VSAM record, the request that encountered the contention
waits for the contention to be removed. The lock manager provides deadlock detection. When
a lock request is in deadlock, the request is rejected, resulting in the VSAM record
management request completing with a deadlock error response.
A data set is considered recoverable if the LOG attribute has one of the following values:
UNDO
The data set is backward recoverable. Changes made by a transaction that does not
succeed (no commit was done) are backed out. CICS provides the transactional recovery.
See also “Transactional recovery” on page 370.
ALL
The data set is both backward and forward recoverable. In addition to the logging and
recovery functions provided for backout (transactional recovery), CICS records the image
of changes to the data set, after they were made. The forward recovery log records are
used by forward recovery programs and products such as CICS VSAM Recovery
(CICSVR) to reconstruct the data set in the event of hardware or software damage to the
data set. This is referred to as data set recovery. For LOG(ALL) data sets, both types of
recovery are provided, transactional recovery and data set recovery.
Non-CICS read/write access for recoverable data sets that are open by CICS is not allowed.
The recoverable attribute means that when the file is accessed in RLS mode, transactional
recovery is provided. With RLS, the recovery is only provided when the access is through
CICS file control, so RLS does not permit a batch (non-CICS) job to open a recoverable file
for OUTPUT.
Exclusive locks that VSAM RLS holds on the modified records cause other transactions that
have read-with-integrity requests and write requests for these records to wait. After the
modifying transaction is committed or backed out, VSAM RLS releases the locks and the
other transactions can access the records.
If the transaction fails, its changes are backed out. This capability is called transactional
recovery.
The CICS backout function removes changes made to the recoverable data sets by a
transaction. When a transaction abnormally ends, CICS performs a backout implicitly.
Example
In our example in Figure 7-11, transaction Trans1 is complete (committed) after Record 1 and
Record 2 are updated. Transactional recovery ensures that either both changes are made or
neither change is made. When the application requests commit, both changes are made
atomically. In the case of a failure after updating Record 1, the change to this record is
backed out. This applies only to recoverable data sets, not to non-recoverable ones.
Batch window
The batch window is a period of time in which online access to recoverable data sets must be
disabled. During this time, no transaction processing can be done. This is normally done
because it is necessary to run batch jobs or other utilities that do not properly support
recoverable data, even if those utilities also use RLS access. Therefore, to allow these jobs or
utilities to safely update the data, it is first necessary to make a copy of the data. In the event
that the batch job or utility fails or encounters an error, this copy can be safely restored and
online access can be re-enabled. If the batch job completes successfully, the updated copy of
the data set can be safely used because only the batch job had access to the data while it
was being updated. Therefore, the data cannot have been corrupted by interference from
online transaction processing.
See “Interacting with VSAM RLS” on page 388 for information about how to quiesce and
unquiesce a data set.
Lock structure
In a parallel sysplex you need only one lock structure for VSAM RLS because only one VSAM
sharing group is permitted. The required name is IGWLOCK00.
Ensure that the coupling facility lock structure has universal connectivity so that it is
accessible from all systems in the parallel sysplex that support VSAM RLS.
Tip: For high-availability environments, use a nonvolatile coupling facility for the lock
structure. If you maintain the lock structure in a volatile coupling facility, a power outage
could cause a failure and loss of information in the coupling facility lock structure.
The coupling facility cache structures are also used as a system-wide buffer pool, with
cross-invalidation being done (see “Buffering under VSAM RLS” on page 364).
Each coupling facility cache structure is contained in a single coupling facility. You may have
multiple coupling facilities and multiple cache structures.
There is also a tool available called CFSIZER. It is available on the IBM Web site at:
http://www-1.ibm.com/servers/eserver/zseries/cfsizer/vsamrls.html
ACTIVE STRUCTURE
----------------
ALLOCATION TIME: 02/24/2005 14:22:56
CFNAME : CF1
COUPLING FACILITY: 002084.IBM.02.000000026A3A
PARTITION: 1F CPCID: 00
ACTUAL SIZE : 14336 K
STORAGE INCREMENT SIZE: 256 K
ENTRIES: IN-USE: 0 TOTAL: 33331, 0% FULL
LOCKS: TOTAL: 2097152
PHYSICAL VERSION: BC9F02FD EDC963AC
LOGICAL VERSION: BC9F02FD EDC963AC
SYSTEM-MANAGED PROCESS LEVEL: 8
XCF GRPNAME : IXCLO001
DISPOSITION : KEEP
ACCESS TIME : 0
NUMBER OF RECORD DATA LISTS PER CONNECTION: 16
MAX CONNECTIONS: 4
# CONNECTIONS : 4
For more information about VSAM RLS parameters refer to z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
[Figure: VSAM RLS environment - CICS (read/write) and batch (read-only) users on System 1 and System 2 are served by the SMSVSAM address spaces and data spaces, which connect to the coupling facility lock structure IGWLOCK00 and cache structure CACHE01]
Both the primary and secondary SHCDS contain the same data. With this duplexing of the
data, VSAM RLS ensures that processing can continue if VSAM RLS loses the connection
to one SHCDS or one of the control data sets is damaged. In that case, you can switch the
spare SHCDS to active.
To calculate the size of the sharing control data sets, follow the guidelines in z/OS DFSMSdfp
Storage Administration Reference, SC26-7402.
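A hedged sketch of defining one SHCDS; the third and fourth qualifiers, the volume, and the space values are hypothetical, the name must begin with SYS1.DFPSHCDS, and the data set is a VSAM linear data set on a volume shared by all systems:
  DEFINE CLUSTER -
    (NAME(SYS1.DFPSHCDS.WTSCPLX2.VSBOX48) -
     LINEAR -
     SHAREOPTIONS(3 3) -
     VOLUMES(SHR001) -
     CYLINDERS(10 10))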
SHCDS operations
Use the following command to activate your newly defined SHCDS for use by VSAM RLS:
For the primary and secondary SHCDS, use
VARY SMS,SHCDS(SHCDS_name),NEW
For the spare SHCDS use
VARY SMS,SHCDS(SHCDS_name),NEWSPARE
D SMS,SHCDS
IEE932I 539
IGW612I 17:10:12 DISPLAY SMS,SHCDS
Name Size %UTIL Status Type
WTSCPLX2.VSBOX48 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX52 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX49 10800Kb 4% GOOD SPARE
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
Figure 7-20 Example of SHCDS display
Note: In the VARY SMS,SHCDS commands, the SHCDS name is not fully qualified.
SMSVSAM takes as a default the first two qualifiers, which must always be
SYS1.DFPSHCDS. You must specify only the last two qualifiers as the SHCDS names.
[Figure: ISMF storage class definitions for VSAM RLS - a storage class for RLS data sets (SC=CICS1) specifies a CF cache set name associated with a cache structure, while a storage class for non-RLS data sets (SC=NORLS) leaves the cache set name blank]
The following steps describe how to define a cache set and how to associate the cache
structures to the cache set:
1. From the ISMF primary option menu for storage administrators, select option 8, Control
Data Set.
2. Select option 7, Cache Update, and make sure that you specified the right SCDS name
(the SMS source control data set, not to be confused with the SHCDS).
3. Define your CF cache sets (see Figure 7-22).
Guaranteed Space . . . . . . . . . N (Y or N)
Guaranteed Synchronous Write . . . N (Y or N)
Multi-Tiered SG . . . . . . . . . . (Y, N, or blank)
Parallel Access Volume Capability N (R, P, S, or N)
CF Cache Set Name . . . . . . . . . PUBLIC1 (up to 8 chars or blank)
CF Direct Weight . . . . . . . . . 6 (1 to 11 or blank)
CF Sequential Weight . . . . . . . 4 (1 to 11 or blank)
Note: Be sure to change your Storage Class ACS routines so that RLS data sets are
assigned the appropriate storage class.
More detailed information about setting up SMS for VSAM RLS is in z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
LOGSTREAMID(logstreamname)
Specifies the name of the CICS forward recovery
logstream for data sets with LOG(ALL)
Another way to assign the LOG attribute and a LOGSTREAMID is to use a data class that has
those values already defined.
The LOG parameter is described in detail in “VSAM RLS/CICS data set recovery” on
page 368.
Use the LOGSTREAMID parameter to assign a CICS forward recovery log stream to a data
set which is forward recoverable.
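As a hedged sketch, assuming a hypothetical data set PROD.PAYROLL.KSDS and forward recovery log stream CICSUSER.PAYROLL.FWDLOG:
  ALTER PROD.PAYROLL.KSDS -
        LOG(ALL) -
        LOGSTREAMID(CICSUSER.PAYROLL.FWDLOG)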
For more information about the IDCAMS DEFINE and ALTER commands, see z/OS DFSMS
Access Method Services for Catalogs, SC26-7394.
For information about the IXCMIAPU utility, see z/OS MVS Setting Up a Sysplex, SA22-7625.
[Figure: the SMSVSAM address space - on each system (System 1 through System n), CICS subsystems, batch jobs, and DFSMShsm use the local SMSVSAM server, which connects to the coupling facility lock structure IGWLOCK00 and cache structures such as CACHE01, CACHE02, and MMFSTUFF]
The SMSVSAM address space needs to be started on each system where you want to exploit
VSAM RLS. It is responsible for centralizing all processing necessary for cross-system
sharing, which includes one connect per system to XCF lock, cache, and VSAM control block
structures.
Terminology
We use the following terms to describe an RLS environment:
RLS server
The SMSVSAM address space is also referred to as the RLS server.
SETSMS command
Use the SETSMS command to override the PARMLIB specifications in IGDSMSxx. The
syntax is:
SETSMS CF_TIME(nnn|3600)
DEADLOCK_DETECTION(iiii,kkkk)
RLSINIT
RLS_MAXCFFEATURELEVEL({A|Z})
RLS_MAX_POOL_SIZE(nnnn|100)
SMF_TIME(YES|NO)
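For example, to increase the maximum size of the SMSVSAM local buffer pool (the value is in megabytes; 200 is simply an illustrative choice), you could enter:
SETSMS RLS_MAX_POOL_SIZE(200)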
For information about these PARMLIB values refer to 7.15, “Update PARMLIB with VSAM
RLS parameters” on page 376.
Display commands
There are several display commands available that provide RLS-related information.
Display the status of the SMSVSAM address space:
DISPLAY SMS,SMSVSAM{,ALL}
Specify ALL to see the status of all the SMSVSAM servers in the sysplex.
Display information about the coupling facility cache structure:
DISPLAY SMS,CFCACHE(CF_cache_structure_name|*)
Display information about the coupling facility lock structure IGWLOCK00:
DISPLAY SMS,CFLS
This information includes the lock rate, lock contention rate, false contention rate, and
average number of requests waiting for locks.
Display XCF information for a CF structure:
DISPLAY XCF,STR,STRNAME=structurename
Provides information like status, type, and policy size for a CF structure.
For further DISPLAY commands refer to z/OS MVS System Commands, SA22-7627.
The quiesce status of a data set is set in the catalog and is shown in an IDCAMS LISTCAT
output for the data set. See “Interpreting RLSDATA in an IDCAMS LISTCAT output” on
page 393 for information about interpreting LISTCAT outputs.
This new size can be larger or smaller than the size of the current CF cache structure, but it
cannot be larger than the maximum size specified in the CFRM policy. The SETXCF
START,ALTER command will not work unless the structure’s ALLOW ALTER indicator is set to
YES.
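For example, to alter the size of a CF cache structure (the structure name and size value shown here are illustrative), you could enter:
SETXCF START,ALTER,STRNAME=CACHE01,SIZE=12800
As noted above, the new size must stay within the maximum size specified for the structure in the CFRM policy.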
Attention: This section provides only an overview of useful commands for working with
VSAM RLS. Before you use any of these commands other than the DISPLAY command,
read the official z/OS manuals carefully.
When you dump data sets that are designated by CICS as eligible for backup-while-open
processing, data integrity is maintained through serialization interactions between:
CICS (database control program)
VSAM RLS
VSAM record management
DFSMSdfp
DFSMSdss
Backup-while-open
In order to allow DFSMSdss to take a backup while your data set is open by CICS, you need
to define the data set with the BWO attribute TYPECICS or assign a data class with this
attribute.
TYPECICS
Use TYPECICS to specify BWO in a CICS environment. For RLS processing, this
activates BWO processing for CICS. For non-RLS processing, CICS determines whether
to use this specification or the specification in the CICS file control table (FCT).
For information about the BWO processing refer to z/OS DFSMSdss Storage Administration
Reference, SC35-0424.
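As a sketch only (the data set name is made up), the BWO attribute could be assigned with IDCAMS, either at DEFINE time or afterward with ALTER:
//ALTBWO   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER PAY.PAYROLL.MASTER -
        BWO(TYPECICS)
/*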
You can use the sample JCL in Figure 7-31 to run an IDCAMS LISTCAT job.
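A minimal job of this kind (the data set name is illustrative) might look like the following:
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(PAY.PAYROLL.MASTER) ALL
/*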
RLSDATA
RLSDATA contains the following information:
LOG
This field shows you the type of logging used for this data set. It can be NONE, UNDO or
ALL.
Note: If the RLS-IN-USE indicator is on, it doesn’t mean that the data set is currently in
use by VSAM RLS. It just means that the last successful open was for RLS processing.
Non-RLS open will always attempt to call VSAM RLS if the RLS-IN-USE bit is on in the
catalog. This bit is a safety net to prevent non-RLS users from accessing a data set
which may have retained or lost locks associated with it. The RLS-IN-USE bit is set on
by RLS open and is left on after close. This bit is only turned off by a successful
non-RLS open or by the IDCAMS SHCDS CFRESET command.
LOGSTREAMID
This value tells you the forward recovery log stream name for this data set if the LOG
attribute has the value of ALL.
RECOVERY TIMESTAMP
The recovery time stamp gives the time the most recent backup was taken when the data
set was accessed by CICS using VSAM RLS.
All LISTCAT keywords are described in Appendix B of z/OS DFSMS Access Method Services
for Catalogs, SC26-7394.
Objective of DFSMStvs
The objective of DFSMStvs is to provide transactional recovery directly within VSAM. It is an
extension to VSAM RLS. It allows any job or application that is designed for data sharing to
read/write share VSAM recoverable files.
DFSMStvs is a follow-on capability based on VSAM RLS. VSAM RLS supports CICS as a
transaction manager and provides sysplex data sharing of VSAM recoverable files.
DFSMStvs adds logging and commit/backout support to VSAM RLS. DFSMStvs requires
and supports the RRMS (recoverable resource management services) component as the
commit or sync point manager.
DFSMStvs provides a level of data sharing with built-in transactional recovery for VSAM
recoverable files that is comparable to the data sharing and transactional recovery support
that DB2 and IMS DB provide for databases.
Before DFSMStvs, those two types of recovery were only supported by CICS.
CICS performs the transactional recovery for data sets defined with a LOG parameter UNDO
or ALL.
For forward recoverable data sets (LOG(ALL)), CICS also records updates in a log stream for
forward recovery. CICS itself does not perform forward recovery; it performs only the logging.
For forward recovery you need a utility such as CICS VSAM Recovery (CICSVR).
Without DFSMStvs, batch jobs cannot perform transactional recovery and logging. That is the
reason batch jobs were granted only read access to a data set that was opened by CICS in
RLS mode. A batch window was necessary to run batch updates for CICS VSAM data sets.
With DFSMStvs, batch jobs can perform transactional recovery and logging concurrently with
CICS processing. Batch jobs can now update data sets while they are in use by CICS. No
batch window is necessary any more.
Peer recovery
Peer recovery allows DFSMStvs to recover on behalf of a failed DFSMStvs instance, cleaning
up any work that was left in an incomplete state and clearing retained locks that resulted from
the failure.
For more information about peer recovery refer to z/OS DFSMStvs Planning and Operating
Guide, SC26-7348.
Figure: z/OS RRMS (registration services, context services, and resource recovery services (RRS)) coordinating prepare/commit and rollback between DFSMStvs and other recoverable resource managers.
When an application issues a commit request directly to z/OS or indirectly through a sync
point manager that interfaces with the z/OS syncpoint manager, DFSMStvs is invoked to
participate in the 2-phase commit process.
Other resource managers (like DB2) whose recoverable resources were modified by the
transaction are also invoked by the z/OS syncpoint manager, thus providing a commit scope
across the multiple resource managers.
Two-phase commit
The two-phase commit protocol is a set of actions used to make sure that an application
program either makes all changes to the resources represented by a single unit of recovery
(UR), or makes no changes at all. This protocol verifies that either all changes or no changes
are applied even if one of the elements (such as the application, the system, or the resource
manager) fails. The protocol allows for restart and recovery processing to take place after
system or subsystem failure.
For a discussion of the term unit of recovery see “Unit of work and unit of recovery” on
page 402.
Figure: Transferring $100 between two accounts with balances of $700 and $800 - a completed transaction updates both balances, while an incomplete transaction must leave both unchanged.
Atomic updates
A transaction is known as atomic when an application changes data in multiple resource
managers as a single transaction, and all of those changes are accomplished through a
single commit request by a sync point manager. If the transaction is successful, all the
changes are committed. If any piece of the transaction is not successful, then all changes are
backed out. An atomic instant occurs when the sync point manager in a two-phase commit
process logs a commit record for the transaction.
Refer also to “Transactional recovery” on page 370 for information about recovery of an
uncompleted transaction.
update 1
update 2
commit            } A = unit of recovery    explicit synchronization point
update 3
update 4
update 5          } B
update 6          } C
End of program                              implicit synchronization point
Figure 7-36 Unit of recovery example
RRS uses the term unit of recovery (UR) to mean much the same thing. So, a unit of recovery
is the set of updates between synchronization points. There are implicit synchronization
points at the start and at the end of a transaction. There can also be explicit synchronization
points requested by an application within a transaction or batch job. It is preferable to use
explicit synchronization for greater control of the number of updates in a unit of recovery.
Changes to data are durable after a synchronization point. That means that the changes
survive any subsequent failure.
In Figure 7-36 there are three units of recovery, noted as A, B and C. The synchronization
points between the units of recovery are either:
Implicit - At the start and end of the program
Explicit - When requested by commit
Figure 7-37 DFSMStvs logging: each system (System 1 through System n) has its own CICS undo log stream, while a merged CICS/DFSMStvs forward recovery log stream and the lock structures reside in the coupling facility.
DFSMStvs logging
DFSMStvs logging uses the z/OS system logger. The design of DFSMStvs logging is similar
to the design of CICS logging. Forward recovery logstreams for VSAM recoverable files will
be shared across CICS and DFSMStvs. CICS will log changes made by CICS transactions
while DFSMStvs will log changes made by its callers.
Types of logs
There are different types of logs involved in DFSMStvs (and CICS) logging. They are:
Undo logs (mandatory, one per image) - tvsname.IGWLOG.SYSLOG
The backout or undo log contains images of changed records for recoverable data sets as
they existed prior to being changed. It is used for transactional recovery to back out
uncommitted changes if a transaction failed.
Shunt logs (mandatory, one per image) - tvsname.IGWSHUNT.SHUNTLOG
The shunt log is used when backout requests fail and for long running units of recovery.
The system logger writes log data to log streams. The log streams are put in list structures in
the coupling facility (except for DASDONLY log streams).
As Figure 7-37 on page 403 shows, forward recovery logs can be merged for use by both
CICS and DFSMStvs, and a forward recovery log stream can be shared by multiple VSAM
data sets. An undo log, however, cannot be shared between CICS and DFSMStvs; you need
one per image.
For information about how to define log streams and list structures refer to 7.32, “Prepare for
logging” on page 409.
You can modify an application to use DFSMStvs by specifying RLS in the JCL or the ACB and
having the application access a recoverable data set using either open for input with CRE or
open for output from a batch job.
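For example, a batch job could request DFSMStvs access with consistent read explicit on the DD statement (the DD and data set names are illustrative):
//PAYMAST  DD DSN=PAY.PAYROLL.MASTER,DISP=SHR,RLS=CRE
CRE (consistent read explicit) gives the batch job repeatable-read integrity that is held until the end of its unit of recovery.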
Application considerations
In order for an application to participate in transactional recovery, it must first understand the
concept of a transaction. It is not a good idea simply to modify an existing batch job to use
DFSMStvs with no further changes. This would cause the entire job to be seen as a single
transaction. As a result, locks would be held and log records would need to exist for the entire
life of the job. This could cause a tremendous amount of contention for the locked resources.
It could also cause performance degradation as the undo log becomes exceedingly large.
RLS and DFSMStvs provide isolation until commit/backout. The application programmer
should consider the following rules:
Share locks on records accessed with repeatable read.
Hold write locks on changed records until the end of a transaction.
Use commit to apply all changes and release all locks.
Handle all work that is part of one UR under the same context.
For information about units of recovery, see 7.27, “Unit of work and unit of recovery” on
page 402. Design your application so that all work belonging to one unit of recovery is
handled under the same context.
Instead, the batch application must have a built-in method of tracking its processing position
within a series of transactions. One potential method of doing this is to use a VSAM
recoverable file to track the job’s commit position. When the application fails, any
uncommitted changes are backed out.
The already committed changes cannot be backed out, because they are already visible to
other jobs or transactions. In fact, the records that were changed by previously committed UR
may have since been changed again by other jobs or transactions. Therefore, when the job is
rerun, it is important that it determines its restart point and not attempt to redo any changes it
had committed before the failure.
For this reason, it is important that jobs and applications using DFSMStvs be written to
execute as a series of transactions and use a commit point tracking mechanism for restart.
A sample JCL you can use to define a new CFRM policy is shown in Figure 7-42.
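As a sketch only (the policy, structure, and CF names are made up, and the CF statements plus all other structures of the existing policy must also be repeated when a policy is redefined), such a job could look like the following:
//DEFCFRM  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(CFRMPOL1) REPLACE(YES)
    STRUCTURE NAME(LOG_IGWLOG_001)
              SIZE(16384)
              INITSIZE(8192)
              PREFLIST(CF01,CF02)
/*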
Multiple log streams can write data to a single coupling facility structure. This does not mean
that the log data is merged; the log data stays segregated according to log stream.
Figure 7-43 shows how to define the structures in the LOGR policy.
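As a sketch (the structure name and values are illustrative), a list structure for DFSMStvs logging could be defined in the LOGR policy as follows:
//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE STRUCTURE NAME(LOG_IGWLOG_001)
         LOGSNUM(10)
         MAXBUFSIZE(64000)
         AVGBUFSIZE(4096)
/*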
For the different types of log streams that are used by DFSMStvs refer to “DFSMStvs logging”
on page 403. A log stream is a VSAM linear data set which simply contains a collection of
data. To define log streams, you can use the example JCL in Figure 7-44.
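As an illustration (the log stream name follows the tvsname.IGWLOG.SYSLOG convention described in “DFSMStvs logging” on page 403; the structure name and values are invented), an undo log stream could be defined as follows:
//DEFLS    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE LOGSTREAM NAME(IGWTV001.IGWLOG.SYSLOG)
         STRUCTNAME(LOG_IGWLOG_001)
         LS_SIZE(1180)
         STG_DUPLEX(YES) DUPLEXMODE(COND)
         HIGHOFFLOAD(80) LOWOFFLOAD(20)
/*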
Attention: Log streams are single-extent VSAM linear data sets and need
SHAREOPTIONS(3,3). The default is SHAREOPTIONS(1,3), so you must alter the share
options explicitly by running IDCAMS ALTER.
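For example (the data set name shown is hypothetical; the actual names of the log stream data sets depend on your system logger high-level qualifier), you could run:
//ALTLS    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER IXGLOGR.IGWTV001.IGWLOG.SYSLOG.A0000000 -
        SHAREOPTIONS(3 3)
/*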
For information about these PARMLIB parameters, refer to z/OS MVS Initialization and
Tuning Reference, SA22-7592.
Figure: DFSMStvs instances IGWTV001 and IGWTV002, running with SMSVSAM on System 1 and System 2, sharing a recoverable data set through the coupling facility.
As soon as an application that does not act as a recoverable resource manager has RLS
access to a recoverable data set, DFSMStvs is invoked (see also “Accessing a data set with
DFSMStvs” on page 405). DFSMStvs calls VSAM RLS (SMSVSAM) for record locking and
buffering. With DFSMStvs built on top of VSAM RLS, full sharing of recoverable files becomes
possible. Batch jobs can now update the recoverable files without first quiescing CICS' access
to them.
SETSMS command
Use the SETSMS command to override the PARMLIB specifications in IGDSMSxx. The syntax
is:
SETSMS AKP(nnn|1000)
QTIMEOUT(nnn|300)
MAXLOCKS(max|0,incr|0)
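For example, to raise the quiesce timeout value (in seconds; 600 is simply an illustrative choice), you could enter:
SETSMS QTIMEOUT(600)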
These are the only DFSMStvs PARMLIB specifications you can override using the SETSMS
command. For information about these parameters, see “Update PARMLIB with DFSMStvs
parameters” on page 412.
Display command
There are a few display commands you can use to get information about DFSMStvs.
Display common DFSMStvs information:
DISPLAY SMS,TRANVSAM{,ALL}
This command lists information about the DFSMStvs instance on the system where it was
issued. To get information from all systems, use ALL. This information includes the name and
state of the DFSMStvs instance, the values for AKP, start type, and qtimeout, and also the
names, types, and states of the log streams in use.
Display information about a particular job that uses DFSMStvs:
DISPLAY SMS,JOB(jobname)
The information about the particular job includes the current job step, the current ID, and
status of the unit of recovery used by this job.
Display information about a particular unit of recovery currently active within the sysplex:
DISPLAY SMS,URID(urid|ALL)
This command provides information about a particular UR in the sysplex or about all URs
of the system on which this command was issued. If ALL is specified, you do not get
information about shunted URs or URs that are restarting.
Attention: This chapter provides only an overview of the operator commands you should
know to work with DFSMStvs. Before you use any of these commands other than the
DISPLAY command, read the official z/OS manuals carefully.
Figure 7-48 Summary of base VSAM functions and limitations, VSAM RLS, and DFSMStvs functions
Summary
In this chapter we showed the limitations of base VSAM that made it necessary to develop
VSAM RLS. Further, we described the limitations of VSAM RLS that led to its enhancement
with the functions provided by DFSMStvs.
Base VSAM
– VSAM does not provide read or read/write integrity for share options other than 1.
– User needs to use enqueue/dequeue macros for serialization.
– The granularity of sharing on a VSAM cluster is at the control interval level.
– Buffers reside in the address space.
– Base VSAM does not support CICS as a recoverable resource manager; a CICS file
owning region is necessary to ensure recovery.
VSAM RLS
– Enhancement of base VSAM.
– User does not need to serialize; this is done by RLS locking.
– Granularity of sharing is record level, not CI level.
– Buffers reside in the data space and coupling facility.
– Supports CICS as a recoverable resource manager (CICS logging for recoverable data
sets); no CICS file owning region is necessary.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
z/OS DFSMStvs Administration Guide, GC26-7483
Device Support Facilities User’s Guide and Reference Release 17, GC35-0033
z/OS MVS Programming: Assembler Services Guide, SA22-7605
z/OS MVS System Commands, SA22-7627
z/OS MVS System Messages, Volume 1 (ABA-AOM), SA22-7631
DFSMS Optimizer User’s Guide and Reference, SC26-7047
z/OS DFSMStvs Planning and Operating Guide, SC26-7348
z/OS DFSMS Access Method Services for Catalogs, SC26-7394
z/OS DFSMSdfp Storage Administration Reference, SC26-7402
z/OS DFSMSrmm Guide and Reference, SC26-7404
z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS DFSMS Implementing System-Managed Storage, SC26-7407
z/OS DFSMS: Using Data Sets, SC26-7410
z/OS DFSMS: Using the Interactive Storage Management Facility, SC26-7411
z/OS DFSMS: Using Magnetic Tapes, SC26-7412
z/OS DFSMSdfp Utilities, SC26-7414
z/OS Network File System Guide and Reference, SC26-7417
DFSORT Getting Started with DFSORT R14, SC26-4109
DFSORT Installation and Customization Release 14, SC33-4034
z/OS DFSMShsm Storage Administration Guide, SC35-0421
z/OS DFSMShsm Storage Administration Reference, SC35-0422
z/OS DFSMSdss Storage Administration Guide, SC35-0423
z/OS DFSMSdss Storage Administration Reference, SC35-0424
z/OS DFSMS Object Access Method Application Programmer’s Reference, SC35-0425
z/OS DFSMS Object Access Method Planning, Installation, and Storage Administration
Guide for Object Support, SC35-0426
Tivoli Decision Support for OS/390 System Performance Feature Reference Volume I,
SH19-6819
Online resources
These Web sites and URLs are also relevant as further information sources:
For articles, online books, news, tips, techniques, examples, and more, visit the z/OS
DFSORT home page:
http://www-1.ibm.com/servers/storage/support/software/sort/mvs
DFSMS, Data set basics, SMS
Storage management software and hardware
Catalogs, VSAM, DFSMStvs
The ABCs of z/OS System Programming is an eleven-volume collection that provides an
introduction to the z/OS operating system and the hardware architecture. Whether you are a
beginner or an experienced system programmer, the ABCs collection provides the
information that you need to start your research into z/OS and related subjects. If you would
like to become more familiar with z/OS in your current environment, or if you are evaluating
platforms to consolidate your e-business applications, the ABCs collection will serve as a
powerful technical tool.
The contents of the volumes are:
Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS
delivery and installation
Volume 2: z/OS implementation and daily maintenance, defining subsystems, JES2 and
JES3, LPA, LNKLST, authorized libraries, Language Environment, and SMP/E
Volume 3: Introduction to DFSMS, data set basics, storage management hardware and
software, VSAM, System-managed storage, catalogs, and DFSMStvs
Volume 4: Communication Server, TCP/IP and VTAM
Volume 5: Base and Parallel Sysplex, System Logger, Resource Recovery Services (RRS),
global resource serialization (GRS), z/OS system operations, automatic restart management
(ARM), Geographically Dispersed Parallel Sysplex (GDPS)
Volume 6: Introduction to security, RACF, Digital certificates and PKI, Kerberos,
cryptography and z990 integrated cryptography, zSeries firewall technologies, LDAP,
Enterprise identity mapping (EIM), and firewall technologies
Volume 7: Printing in a z/OS environment, Infoprint Server and Infoprint Central
Volume 8: An introduction to z/OS problem diagnosis
Volume 9: z/OS UNIX System Services
Volume 10: Introduction to z/Architecture, zSeries processor design, zSeries connectivity,
LPAR concepts, HCD, and HMC
Volume 11: Capacity planning, performance management, WLM, RMF, and SMF
IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.
For more information:
ibm.com/redbooks