CICS® Transaction Server for OS/390®
CICS System Definition Guide
SC33-1682-02
Note!
Before using this information and the product it supports, be sure to read the general information under “Notices” on page xi.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
What this book is about . . . . . . . . . . . . . . . . . . . . . xiii
Who should read this book . . . . . . . . . . . . . . . . . . . . xiii
What you need to know to understand this book . . . . . . . . . . . . xiii
How to use this book . . . . . . . . . . . . . . . . . . . . . . xiii
Notes on terminology . . . . . . . . . . . . . . . . . . . . . . xiii
Book structure . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . xv
CICS Transaction Server for OS/390 . . . . . . . . . . . . . . . . xv
CICS books for CICS Transaction Server for OS/390 . . . . . . . . . xv
CICSPlex SM books for CICS Transaction Server for OS/390 . . . . . . xvi
Other CICS books . . . . . . . . . . . . . . . . . . . . . . xvi
Books from related libraries . . . . . . . . . . . . . . . . . . . . xvi
Systems Network Architecture (SNA) . . . . . . . . . . . . . . . xvi
Systems Application Architecture (SAA) . . . . . . . . . . . . . . xvi
Advanced communications function for VTAM (ACF/VTAM) . . . . . . . xvi
Telecommunications Access Method (TCAM) . . . . . . . . . . . . xvii
Virtual Storage Access Method (VSAM) . . . . . . . . . . . . . . xvii
DATABASE 2 (DB2). . . . . . . . . . . . . . . . . . . . . . xvii
Programming language support . . . . . . . . . . . . . . . . . xvii
Information Management System (IMS) . . . . . . . . . . . . . . xvii
IBM CICS VSAM Recovery MVS/ESA . . . . . . . . . . . . . . . xix
MVS . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Determining if a publication is current . . . . . . . . . . . . . . . . xix
XRF considerations . . . . . . . . . . . . . . . . . . . . . . 119
Chapter 13. Defining the CICS system definition data set . . . . . . . 135
Summary of steps to create a CSD . . . . . . . . . . . . . . . . . 135
Calculating disk space . . . . . . . . . . . . . . . . . . . . . . 136
Defining and initializing the CICS system definition . . . . . . . . . . . 137
Creating a larger CSD . . . . . . . . . . . . . . . . . . . . . 139
File processing attributes for the CSD . . . . . . . . . . . . . . . . 139
Sharing and availability of the CSD in non-RLS mode . . . . . . . . . . 140
Shared user access from the same CICS region . . . . . . . . . . . 140
Multiple users of the CSD within a CICS region (non-RLS) . . . . . . . 143
Sharing a CSD by CICS regions within a single MVS image (non-RLS) . . . 143
Sharing a CSD in a multi-MVS environment (non-RLS) . . . . . . . . . 144
Multiple users of one CSD across CICS or batch regions (non-RLS) . . . . 144
Sharing the CSD between different releases of CICS . . . . . . . . . 145
Other factors restricting CSD access . . . . . . . . . . . . . . . 146
Sharing the CSD in RLS mode . . . . . . . . . . . . . . . . . . 147
Differences in CSD management between RLS and non-RLS access . . . 148
Specifying file control attributes for the CSD . . . . . . . . . . . . . 149
Effect of RLS on the CSD batch utility DFHCSDUP . . . . . . . . . . 149
Planning for backup and recovery . . . . . . . . . . . . . . . . . 150
Transaction backout during emergency restart . . . . . . . . . . . . 152
Dynamic backout for transactions. . . . . . . . . . . . . . . . . 152
Other recovery considerations . . . . . . . . . . . . . . . . . . 152
RDO command logs . . . . . . . . . . . . . . . . . . . . . . 153
Making the CSD available to CICS . . . . . . . . . . . . . . . . . 155
Installing the RDO transactions . . . . . . . . . . . . . . . . . . 156
Moving your CICS tables to the CSD . . . . . . . . . . . . . . . . 156
Installing definitions for the Japanese language feature. . . . . . . . . . 156
XRF considerations . . . . . . . . . . . . . . . . . . . . . . . 157
Chapter 15. Defining and using auxiliary trace data sets . . . . . . . . 171
Defining auxiliary trace data sets . . . . . . . . . . . . . . . . . . 171
Starting and controlling auxiliary trace . . . . . . . . . . . . . . . . 172
Job control statements to allocate auxiliary trace data sets . . . . . . . . 173
Space calculations . . . . . . . . . . . . . . . . . . . . . . 173
Job control statements for CICS execution . . . . . . . . . . . . . . 173
XRF considerations . . . . . . . . . . . . . . . . . . . . . . . 174
Trace utility program (DFHTU530) . . . . . . . . . . . . . . . . . 174
Chapter 17. Defining the CICS availability manager data sets . . . . . . 181
The XRF control data set. . . . . . . . . . . . . . . . . . . . . 181
JCL to define the XRF control data set . . . . . . . . . . . . . . . 182
Space calculations . . . . . . . . . . . . . . . . . . . . . . 183
Job control statements for CICS execution . . . . . . . . . . . . . 183
The XRF message data set . . . . . . . . . . . . . . . . . . . . 183
JCL to define the XRF message data set . . . . . . . . . . . . . . 183
Space calculations . . . . . . . . . . . . . . . . . . . . . . 184
Job control statement for CICS execution . . . . . . . . . . . . . . 186
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
I/O error handling . . . . . . . . . . . . . . . . . . . . . . . 187
Comparison with user-maintained data tables . . . . . . . . . . . . 203
Coupling facility data table models . . . . . . . . . . . . . . . . 203
Coupling facility data table structures and servers. . . . . . . . . . . 203
Defining a coupling facility data table pool . . . . . . . . . . . . . 205
Chapter 19. Defining the CDBM GROUP command data set. . . . . . . 207
Job control statements for CICS execution . . . . . . . . . . . . . . 208
Record layout in the CDBM GROUP command file . . . . . . . . . . . 208
Chapter 23. Defining the CICS JVM execution environment variables . . . 333
JVM environment variables . . . . . . . . . . . . . . . . . . . . 333
Appendix. System initialization parameters grouped by functional area . . 417
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply in the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore this statement may not apply to
you.
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
and other programs (including this one) and (ii) the mutual use of the information
which has been exchanged, should contact IBM United Kingdom Laboratories,
MP151, Hursley Park, Winchester, Hampshire, England, SO21 2JN. Such
information may be available, subject to appropriate terms and conditions, including
in some cases, payment of a fee.
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, or other countries, or both:
ACF/VTAM AD/Cycle
BookManager CICS
CICS/ESA CICS/MVS
CICS/VSE COBOL/370
DATABASE 2 DB2
DFSMS ESA/390
Hiperbatch IBM
IBMLink IMS/ESA
Language Environment MVS/DFP
MVS/ESA MVS/SP
MVS/XA NetView
OS/390 Processor Resource/Systems Manager
PR/SM RACF
SAA Systems Application Architecture
VSE/ESA VTAM
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and other countries.
Other company, product, and service names may be trademarks or service marks
of others.
Notes on terminology
“CICS” is used throughout this book to mean the CICS element of the IBM CICS
Transaction Server for OS/390.
“RACF” is used to mean the IBM Resource Access Control Facility (RACF) or any
other external security manager that provides equivalent function.
| In the programming examples in this book, the dollar symbol ($) is used as a
| national currency symbol and is assumed to be assigned the EBCDIC code point
| X’5B’. In some countries a different currency symbol, for example the pound symbol
| (£), or the yen symbol (¥), is assigned the same EBCDIC code point. In these
| countries, the appropriate currency symbol should be used instead of the dollar
| symbol.
“MVS” is used throughout this book to mean the MVS operating system.
Book structure
Part 1. Installing resource definitions ... pages 1—90
Describes how to install the various resources needed to run a CICS
region, such as control tables and user application programs.
If you have any questions about the CICS Transaction Server for OS/390 library,
see CICS Transaction Server for OS/390: Planning for Installation which discusses
both hardcopy and softcopy books and the ways that the books can be ordered.
DATABASE 2 (DB2)
v Application Programming and SQL Guide, SC26-4377
v Administration Guide, SC26-4374
v Command and Utility Reference, SC26-4378
Table 1. IMS/ESA libraries (continued)
Title                                                                 Version 4   Version 5   Version 6
Administration Guide: Transaction Manager                             ----        SC26-8014   SC26-8731
Application Programming: Database Manager                             ----        SC26-8015   SC26-8727
Application Programming: Database Manager Summary                     ----        SC26-8037   ----
Application Programming: DC Calls                                     SC26-4283   ----        ----
Application Programming: Design Guide                                 SC26-4279   SC26-8016   SC26-8728
Application Programming: DL/I Calls                                   SC26-4274   ----        ----
Application Programming: EXEC DLI Commands                            SC26-4280   SC26-8018   SC26-8726
Application Programming: DL/I Calls Summary                           SX26-3765   ----        ----
Application Programming: EXEC DLI Commands Summary                    SX26-3775   SC26-8036   ----
Application Programming: Transaction Manager                          ----        SC26-8017   SC26-8729
Application Programming: Transaction Manager Summary                  ----        SC26-8038   ----
Customization Guide                                                   ----        SC26-8020   SC26-8732
Common Queue Server Reference                                         ----        ----        LY37-3730
Customization Guide: Database                                         SC26-4624   ----        ----
Customization Guide: Data Communications                              SC26-4625   ----        ----
Customization Guide: Systems                                          SC26-4285   ----        ----
Data Communication Administration Guide                               SC26-4286   ----        ----
Database Administration Guide                                         SC26-4281   ----        ----
Database Recovery Control Guide and Reference                         ----        ----        SC26-8733
Diagnosis Guide and Reference                                         LY27-9539   LY27-9620   LY37-3731
Failure Analysis Structure Tables (FAST) for Dump Analysis            LY27-9512   LY27-9621   LY37-3732
General Information                                                   GC26-4275   GC26-3467   ----
Installation Guide                                                    SC26-4276   ----        ----
Installation Volume 1: Installation and Verification                  ----        SC26-8023   GC26-8736
Installation Volume 2: System Definition and Tailoring                ----        SC26-8024   GC26-8737
Licensed Programming Specifications                                   ----        GC26-8040   GC26-8738
LU6.1 Adapter for LU6.2 Applications: Program Description/Operations  SC26-4392   ----        ----
Master Index and Glossary                                             SC26-4291   SC26-8027   ----
Messages and Codes                                                    SC26-4290   SC26-8028   GC26-8739
Open Transaction Manager Access Guide/Reference                       ----        SC26-8026   SC26-8743
Operations Guide                                                      SC26-4287   SC26-8029   SC26-8741
Operator’s Reference                                                  SC26-4288   SC26-8030   SC26-8742
Release Planning Guide                                                GC26-4386   GC26-8031   GC26-8744
MVS
v OS/390 MVS JCL Reference, GC28-1757
v OS/390 MVS Initialization and Tuning Guide, SC28-1751
v OS/390 MVS Installation Exits, SC28-1753
v OS/390 MVS Conversion Notebook, GC28-1747
v OS/390 MVS System Commands, GC28-1781
v OS/390 MVS System Management Facilities (SMF), GC28-1783
v OS/390 MVS Diagnosis: Procedures, SY28-1082
v OS/390 MVS Diagnosis: Tools and Service Aids, SY28-1085
Subsequent updates will probably be available in softcopy before they are available
in hardcopy. This means that at any time from the availability of a release, softcopy
versions should be regarded as the most up-to-date.
For CICS Transaction Server books, these softcopy updates appear regularly on the
Transaction Processing and Data Collection Kit CD-ROM, SK2T-0730-xx. Each
reissue of the collection kit is indicated by an updated order number suffix (the -xx
part). For example, collection kit SK2T-0730-06 is more up-to-date than
SK2T-0730-05. The collection kit is also clearly dated on the cover.
Updates to the softcopy are clearly marked by revision codes (usually a “#”
character) to the left of the changes.
Changes for the CICS Transaction Server for OS/390 Release 3 edition
The main changes made to this book for CICS Transaction Server for OS/390
Release 3 include:
v New chapters
– Defining DB2 support
– Defining sequence numbering resources
– Defining and starting AXM system services
– Starting a coupling facility data table server
– Starting a named counter server
v New SIT parameters
– AICONS
– DOCCODEPAGE
– DSRTPGM
– ENCRYPTION
– FORCEQR
– KEYFILE
– MAXOPENTCBS
– NCPLDFT
– RRMS
– RUWAPOOL
– SSLDELAY
– TCPIP
Changes for the CICS Transaction Server for OS/390 Release 2 edition
The main changes made to this book for the CICS Transaction Server for OS/390
Release 2 include:
v The MVS AUTODELETE and RETPD parameters, used to preserve data on the
system log, and to manage the size of general logs, have been added to
“Chapter 12. Defining CICS log streams” on page 121.
v The SYSLOG system initialization parameter, used to preserve data on the
system log in CICS Transaction Server for OS/390 Release 2, is obsolete in
CICS Transaction Server for OS/390 Release 3.
v The DB2 resource control table suffix is now specified on the DFHD2INI option of
the INITPARM system initialization parameter.
v The chapter discussing the definition of DB2 support has been removed. All
information about CICS DB2 is available in the CICS DB2 Guide.
v The following new SIT parameters:
– DB2CONN
– DBCTLCON
Changes for the CICS Transaction Server for OS/390 Release 1 edition
The main changes made to this book for CICS Transaction Server for OS/390
Release 1 include:
v The following new SIT parameters:
– SYDUMAX
– TRDUMAX
– CSDINTEG
– CSDRLS
– OFFSITE
– RLS
– SDTRAN
– SYSLOG
– TDINTRA
v Changes to the following SIT parameters:
– AILDELAY
– AIRDELAY
– DCT
– START=INITIAL
v Resource definition online for transient data destinations.
v Resource definition online for journalmodels.
v An additional section in “Chapter 10. Defining the temporary storage data set” on
page 107, discusses how to define temporary storage pools for temporary
storage data sharing.
v An additional chapter to describe starting up a temporary storage server, together
with information about all the initialization parameters that the TS server can use.
See “Chapter 26. Starting up temporary storage servers” on page 367.
Your CICS system has to know which resources to use, what their properties are,
and how they are to interact with each other.
You supply this information to CICS by using one or more of the following methods
of resource definition:
1. Resource definition online (RDO): This method uses the CICS-supplied online
transactions CEDA, CEDB, and CEDC. Definitions are stored on the CICS
system definition (CSD) file, and are installed into an active CICS system from
the CSD file.
2. DFHCSDUP offline utility: This method also stores definitions in the CSD file.
DFHCSDUP allows you to make changes to definitions in the CSD file by
means of a batch job submitted offline.
3. Automatic installation (autoinstall): This method minimizes the need for a
large number of definitions, by dynamically creating new definitions based on a
“model” definition provided by you.
4. System programming, using the EXEC CICS CREATE commands: You can
use the EXEC CICS CREATE commands to create resources independently of
the CSD file. For further information, see the CICS System Programming
Reference manual.
5. Macro definition: You can use assembler macro source to define resources.
Definitions are stored in assembled tables in a program library, from which they
are installed during CICS initialization.
Which methods you use depends on the resources you want to define. Table 2 on
page 4 suggests some of the things you should consider when deciding which
definition method to use when there is a choice.
For information about CICS resource definition, see the CICS Resource Definition
Guide. For information about the DFHCSDUP utility, see the CICS Operations and
Utilities Guide.
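For example, a minimal DFHCSDUP job to define a program and add its group to a
list might look like the following sketch. The data set names, program name, group
name, and list name shown are placeholders, not supplied defaults:
//CSDUPJOB JOB 1,user_name,MSGCLASS=A
//CSDUP    EXEC PGM=DFHCSDUP
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHCSD   DD DSN=hlq.CICS.DFHCSD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE PROGRAM(MYPROG) GROUP(MYGROUP) LANGUAGE(COBOL)
  ADD GROUP(MYGROUP) LIST(MYLIST)
/*
The DEFINE command stores the definition in the CSD; the ADD command places
the group in a list that can be named on the GRPLIST system initialization
parameter.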
Resource definitions in the CSD are stored in groups. On a COLD or INITIAL start,
you specify the resource definitions required for a particular run of CICS by naming
a list of groups. You can specify up to four lists of groups to be installed during
CICS initialization, using the GRPLIST=listname system initialization parameter. You
can also use the CEDA INSTALL command to install a resource definition, or a
group of definitions defined in the CSD, dynamically on a running CICS region.
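For example, assuming a user group named MYGROUP and a user list named
MYLIST (placeholder names), you could install the group on a running region, or
name the list alongside the CICS-supplied DFHLIST for the next cold or initial start:
CEDA INSTALL GROUP(MYGROUP)
GRPLIST=(DFHLIST,MYLIST)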
You should limit read/write access to resource definitions in the CSD to a small
number of people. To do this, you can:
v Protect groups of resources by using the CEDA command LOCK
v Protect the list of resource groups specified in the system initialization parameter
GRPLIST by using the CEDA command LOCK
v Use the CEDB transaction to create resource definitions, but not to INSTALL
them
v Use the CEDC transaction for read-only access to resource definitions
CICS control tables contain resource definition records for resources that cannot be
defined in the CSD. The tables and their resource definitions are created by using
the CICS table assembly macro instructions. You must use macro instructions to
define non-VTAM networks and terminals, non-VSAM files, databases, and
resources for monitoring and system recovery. For more information about defining
resources in CICS control tables, see “Chapter 2. Defining resources in CICS
control tables” on page 7.
1. Non-VTAM terminals. TCT entries can be for TCAM DCB terminals, BSAM sequential devices, logical device codes (LDCs), and
remote BTAM terminals required for ISC/MRO purposes.
For each of the CICS tables listed in Table 3 on page 8, complete the
following steps:
1. Code the resource definitions you require.
2. Assemble and link-edit these definitions, using the CICS-supplied procedure
DFHAUPLE, to create a load module in the required CICS load library. The load
library is either CICSTS13.CICS.SDFHLOAD or CICSTS13.CICS.SDFHAUTH,
which you specify on the NAME parameter of the DFHAUPLE procedure.
The CICS-supplied macros used to create the CICS tables determine whether
tables are loaded above the 16MB line. All tables, other than the TCT, are
loaded above the 16MB line.
3. Specify the suffix of the load module on a system initialization parameter. For
most of the CICS tables, if you do not require the table, you can code
tablename=NO. The exceptions to this rule are as follows:
v The CLT. Specifying CLT=NO causes CICS to try to load DFHCLTNO. The
CLT is used only by the alternate CICS region when you are running CICS
with XRF, and is always required in that case.
v The SIT. Specifying SIT=NO causes CICS to try to load DFHSITNO. The
SIT is always needed, and you can specify the suffix by coding the SIT
system initialization parameter.
v The TCT. Specifying TCT=NO causes CICS to load a dummy TCT named
DFHTCTDY, as explained on page 318.
v The TLT. Terminal list tables are specified by program entries in the CSD, and
do not have a system initialization parameter.
v The MCT. Specifying MCT=NO causes the CICS monitoring domain to build
dynamically a default monitoring control table. This ensures that default
monitoring control table entries are always available for use when monitoring
is on and a monitoring class (or classes) are active.
4. If you are running CICS with XRF, the active and the alternate CICS regions
share the same versions of tables. However, to provide protection against
DASD failure, you might want to run your active and alternate CICS regions
from separate sets of load libraries—in which case, you should make the
separate copies after generating your control tables.
You can generate several versions of each CICS control table by specifying
SUFFIX=xx in the macro that generates the table. This suffix is then appended to
the default 6-character name of the load module.
To get you started, CICS provides the sample tables listed in Table 4 in the
CICSTS13.CICS.SDFHSAMP library:
Table 4. Sample CICS system tables in the CICSTS13.CICS.SDFHSAMP library
Table Suffix Notes
Unless you have TCAM terminals or are using sequential devices, you do not need
a TCT and should specify TCT=NO. (For information about the effect of TCT=NO,
see page 318.) You define VTAM terminals in the CSD only, either explicitly or by
means of autoinstall model definitions but, for non-VTAM terminals, you must use
DFHTCT macros.
Other tables that have special requirements are program list tables (PLTs), terminal
| list tables (TLTs), and transaction list tables (XLTs). For each TLT, autoinstall for
| programs must be active or you must specify a program resource definition in the
| CSD, using the table name (including the suffix) as the program name. PLTs or
| XLTs are autoinstalled if there is no program resource definition in the CSD. For
| example, to generate a TLT with a suffix of AA (DFHTLTAA), the CEDA command
| would be as follows:
| CEDA DEFINE PROGRAM(DFHTLTAA) GROUP(grpname) LANGUAGE(ASSEMBLER)
For information about program and terminal list tables, see the CICS Resource
Definition Guide.
The command list table (CLT) is used only by the alternate CICS in a CICS system
running with XRF=YES, and differs in many other respects from the other CICS
tables. For more guidance information, including information on resource definition
specific to the CICS extended recovery facility, see the CICS/ESA 3.3 XRF Guide.
SYS1.MACLIB All
CICSTS13.CICS.SDFHMAC All
CICSTS13.CICS.SDFHLOAD All except the CLT, RST, and SIT
CICSTS13.CICS.SDFHAUTH CLT, RST, and SIT
SMP/E global zone All
SMP/E target zone All
Figure 1. Assembling and link-editing the control tables using the DFHAUPLE procedure
Note: Specify NAME=SDFHAUTH on this job for the system initialization table only;
link-edit all other tables (except the CLT and RST) into SDFHLOAD.
Both of these tables must be link-edited, with the reentrant attribute, into an APF
authorized library. You specify that the table is reentrant on the RENTATT
parameter of the DFHAUPLE procedure. A sample job to call the DFHAUPLE
procedure for the CLT and RST is as follows:
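(The following is a sketch of such a job rather than the supplied sample: the job
statement and the table source are placeholders, and it assumes that the table
source is supplied on the SYSUT1 DD statement of the DFHAUPLE assembly step,
with RENTATT=RENT requesting the reentrant attribute.)
//DEFCLT   JOB 1,user_name,MSGCLASS=A
//AUPLE    EXEC DFHAUPLE,NAME=SDFHAUTH,RENTATT=RENT
//ASSEM.SYSUT1 DD *
         CLT (or RST) macro source statements
/*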
CICS also supports the definition of BMS map sets and partition sets interactively
by using licensed programs such as the IBM Screen Definition Facility II (SDF II),
program number 5665-366.
For information about writing programs to use BMS services, see the CICS
Application Programming Guide.
CICS loads BMS map sets and partition sets above the 16MB line if you specify the
residency mode for the map set or partition set as RMODE(ANY) in the link-edit
step. If you are using either map sets or partition sets from earlier releases of CICS,
you can load them above the 16MB line by link-editing them again with
RMODE(ANY). For examples of link-edit steps specifying RMODE(ANY), see the
sample job streams in this chapter.
Note: The DFHASMVS procedure refers to the MVS library SYS1.MODGEN. If you
have not yet restructured MVS (moving members from SYS1.AMODGEN to
SYS1.MODGEN), change the SYS1.MODGEN reference to
SYS1.AMODGEN in the DFHASMVS procedure, until you have restructured
MVS. When you have restructured MVS, you must return the
SYS1.AMODGEN reference to SYS1.MODGEN.
The map set definition macros are assembled twice; once to produce the physical
map set used by BMS in its formatting activities, and once to produce the symbolic
description map set that is copied into the application program.
Map sets can be assembled as either unaligned or aligned (an aligned map is one
in which the length field is aligned on a halfword boundary). Use unaligned maps
except in cases where an application package needs to use aligned maps.
The SYSPARM value alone determines whether the map set is aligned or
unaligned, and is specified on the EXEC PROC=DFHASMVS statement. The TYPE
operand of the DFHMSD macro can only define whether a physical or symbolic
description map set is required. The SYSPARM operand can also be used to
specify whether a physical map set or a symbolic description map set (DSECT) is
to be assembled, in which case it overrides the TYPE operand. If neither operand is
specified, an unaligned DSECT is generated.
For the possible combinations of operands to generate the various types of map
set, see Table 6.
Table 6. SYSPARM and DFHMSD operand combinations for map assembly
Type of map set                  SYSPARM operand of EXEC       TYPE operand of DFHMSD
                                 DFHASMVS statement            macro
Aligned symbolic description     A                             Not specified
map set (DSECT)                  A                             DSECT
                                 ADSECT                        Any (takes SYSPARM)
Aligned physical map set         A                             MAP
                                 AMAP                          Any (takes SYSPARM)
Unaligned symbolic description   Not specified                 Not specified
map set (DSECT)                  Not specified                 DSECT
                                 DSECT                         Any (takes SYSPARM)
Unaligned physical map set       Not specified                 MAP
                                 MAP                           Any (takes SYSPARM)
The physical map set indicates whether it was assembled for aligned or unaligned
maps. This information is tested at execution time, and the appropriate map
alignment used. Thus aligned and unaligned map sets can be mixed.
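For example, to request an aligned physical map set you could code the SYSPARM
value on the EXEC statement that invokes the DFHASMVS procedure, in the same
way as the ADSECT example shown later in this chapter (the step name ASSEM is
taken from that example):
//ASSEM EXEC PROC=DFHASMVS,PARM.ASSEM='SYSPARM(AMAP)'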
(Figure: installing physical map sets. The map set macro statements are assembled, using the CICS.SDFHMAC macro library, and the assembler object output is link-edited into CICS.SDFHLOAD.)
Figure 5 on page 16 gives an example job stream for the assembly and link-editing
of physical map sets.
Notes:
2 Physical map sets are loaded into CICS-key storage, unless they are link-edited
with the RMODE(ANY) and RENT options. If they are link-edited with these options,
they are loaded into key-0 protected storage, provided that RENTPGM=PROTECT
is specified on the RENTPGM initialization parameter.
However, it is recommended that map sets should not be link-edited with the RENT
or the REFR options because, in some cases, CICS modifies the map set.
For more information about the storage protection facilities available in CICS, see
“Storage protection” on page 353.
3 The MODE statement specifies whether the map set is to be loaded above
(RMODE(ANY)) or below (RMODE(24)) the 16MB line. RMODE(ANY) indicates that
CICS can load the map set anywhere in virtual storage, but tries to load it above
the 16MB line, if possible.
4 Use the NAME statement to specify the name of the physical map set that BMS
loads into storage. If the map set is device-dependent, derive the map set name by
appending the device suffix to the original 1- to 7-character map set name used in
the application program. The suffixes to be appended for the various terminals
supported by CICS BMS depend on the parameter specified in the TERM or
SUFFIX operand of the DFHMSD macros used to define the map set. For
programming information giving a complete list of map set suffixes, see the CICS
Application Programming Reference manual.
To use a physical map set, you must define and install a resource definition for it.
You can do this either by using the program autoinstall function or by using the
CEDA DEFINE MAPSET and INSTALL commands, as described in “Defining
programs, map sets, and partition sets to CICS” on page 56.
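For example, if you are not using program autoinstall, CEDA commands like the
following (with placeholder map set and group names) define the map set and
install its group:
CEDA DEFINE MAPSET(mapname) GROUP(grpname)
CEDA INSTALL GROUP(grpname)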
Figure 6. Installing symbolic description map sets using the DFHASMVS procedure
To use a symbolic description map set in a program, you must assemble the source
statements for the map set and obtain a punched copy of the storage definition
through SYSPUNCH. The first time this is done, you can direct the SYSPUNCH
output to SYSOUT=A to get a listing of the symbolic description map set. If many
map sets are to be used at your installation, or there are multiple users of common
map sets, establish a private user copy library for each language that you use.
When a symbolic description is prepared under the same name for more than one
programming language, a separate copy of the symbolic description map set must
be placed in each user copy library. You must ensure that the user copy libraries
are correctly concatenated with SYSLIB.
You need only one symbolic description map set corresponding to all the different
suffixed versions of the physical map set. For example, to run the same application
on terminals with different screen sizes, you would:
1. Define two map sets each with the same fields, but positioned to suit the screen
sizes. Each map set has the same name but a different suffix, which would
match the suffix specified for the terminal.
2. Assemble and link-edit the different physical map sets separately, but create
only one symbolic description map set, because the symbolic description map
set would be the same for all physical map sets.
If you want to assemble symbolic description map sets in which length fields are
halfword-aligned, change the EXEC statement of the sample job in Figure 7 to the
following:
//ASSEM EXEC PROC=DFHASMVS,PARM.ASSEM='SYSPARM(ADSECT)'
To store a symbolic description map set in a private copy library, use job control
statements similar to the following:
//SYSPUNCH DD DSN=USER.MAPLIB.ASM(map set name),DISP=OLD
//SYSPUNCH DD DSN=USER.MAPLIB.COB(map set name),DISP=OLD
//SYSPUNCH DD DSN=USER.MAPLIB.PLI(map set name),DISP=OLD
Figure 8. Installing a physical map set and a symbolic description map set together
Note: The RMODE statement specifies whether the map set is to be loaded above
(RMODE=ANY) or below (RMODE=24) the 16MB line. RMODE=ANY
indicates that CICS can load the map set anywhere in virtual storage, but
tries to load it above the 16MB line, if possible.
The DFHMAPS procedure produces map sets that are not halfword-aligned. If you
want the length fields in input maps to be halfword-aligned, you have to code A=A
on the EXEC statement. In the sample job in Figure 9, change the EXEC statement
to:
//ASSEM EXEC PROC=DFHMAPS,MAPNAME=mapsetname,A=A
The DFHMAPS procedure directs the symbolic description map set output
(SYSPUNCH) to the CICSTS13.CICS.SDFHMAC library. Override this by specifying
DSCTLIB=name on the EXEC statement, where “name” is the name of the chosen
user copy library.
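For example, to direct the symbolic description map set output to one of the user
copy libraries shown earlier in this chapter, you could code:
//ASSEM EXEC PROC=DFHMAPS,MAPNAME=mapsetname,DSCTLIB=USER.MAPLIB.ASM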
The job stream in Figure 10 is an example of the assembly and link-edit of partition
sets.
Notes:
1 A partition set is loaded into CICS-key storage, unless it is link-edited with the
RMODE(ANY) and RENT options. If it is link-edited with these options, it is loaded
into key-0 protected storage, provided that RENTPGM=PROTECT is specified on
the RENTPGM initialization parameter.
For more information about the storage protection facilities available in CICS, see
“Storage protection” on page 353.
2 The MODE statement specifies whether the partition set is to be loaded above
(RMODE(ANY)) or below (RMODE(24)) the 16MB line. RMODE(ANY) indicates that
CICS can load the partition set anywhere in virtual storage, but tries to load it above
the 16MB line, if possible.
3 Use the NAME statement to specify the name of the partition set which BMS
loads into storage. If the partition set is device-dependent, derive the partition set
name by appending the device suffix to the original 1- to 7-character partition set
name used in the application program. The suffixes that BMS appends for the
various terminals depend on the parameter specified in the SUFFIX operand of the
DFHPSD macro that defined the partition set.
To use a partition set, you must define and install a resource definition for it. You
can do this either by using the program autoinstall function or by using the CEDA
DEFINE PARTITIONSET and INSTALL commands, as described in the CICS
Resource Definition Guide.
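For example, if you are not using program autoinstall, CEDA commands like the
following (with placeholder partition set and group names) define the partition set
and install its group:
CEDA DEFINE PARTITIONSET(partname) GROUP(grpname)
CEDA INSTALL GROUP(grpname)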
In this chapter, application program generally means any user program that uses
the CICS command-level application programming interface (API). Such programs
can also use:
v SQL statements
v DLI requests
v Common programming interface (CPI) statements
v SAA Resource Recovery statements
v External CICS interface commands
For information about writing CICS application programs, see the CICS Application
Programming Guide.
Note: If you are developing application programs to use the CICS dynamic
transaction routing facility, you are recommended to use the Transaction
Affinities Utility to detect whether the programs are likely to cause
intertransaction affinity. See the CICS Transaction Affinities Utility Guide for
more information about using the utility. See the CICS Application
Programming Guide for a description of intertransaction affinity.
Each procedure has a name of the form DFHwxTyL, where the variables w, x, and
y depend on the type of program (EXCI batch or CICS online), the type of compiler,
and the programming language. Using the preceding naming convention, the
procedure names are given in Table 7.
Table 7. Procedures for installing application programs
Language    LE-conforming compilers       non-LE-conforming compilers
            Online      EXCI              Online      EXCI
Assembler   -           -                 DFHEITAL    DFHEXTAL
C           DFHYITDL    DFHYXTDL          DFHEITDL    DFHEXTDL
C++         DFHYITEL    DFHYXTEL          -           -
COBOL       DFHYITVL    DFHYXTVL          DFHEITVL    DFHEXTVL
PL/I        DFHYITPL    DFHYXTPL          DFHEITPL    DFHEXTPL
Language    Translator module
Assembler   DFHEAP1$
C           DFHEDP1$
COBOL       DFHECP1$
PL/I        DFHEPP1$
For a description of the translation process, and information about the translator
options that you can specify, see Specifying translator options in the CICS
Application Programming Guide.
If you use CALL, specify PREPROC as the entry point name to invoke the
translator.
Translator option list: The translator option list must begin on a halfword
boundary. The first two bytes contain a binary count of the number of bytes in the
list (excluding the count field). The remainder of the list can contain any of the
translator option keywords, separated by commas, blanks, or both.
Data definition (DD name) list: The DD name list must begin on a halfword
boundary. The first two bytes contain a binary count of the number of bytes in the
list (excluding the count field). Each entry in the list must occupy an 8-byte field.
The sequence of entries is:
If you omit an applicable entry, the translator uses the standard DD name. If you
use a DD name less than 8 bytes long, fill the field with blanks on the right. You can
omit an entry by placing X'FF' in the first byte. You can omit entries at the end of
the list entirely.
The interface modules and their use are described in the following sections.
| To write CICS application programs that request CICS services through the
| command-level application programming interface (API), you can use assembler
| language, C and C++, COBOL, or PL/I. You can also write application programs
| using C++ and Java, using the CICS OO foundation classes for C++, and the
| JCICS classes for Java.
| CICS provides the support needed to run application programs written in assembler
| language, and OS/390 Language Environment (LE) provides the required support
| for all the other languages. The support provided by OS/390 LE covers:
| v Programs compiled by the Language Environment-conforming compilers:
| – IBM COBOL for MVS & VM (5688–197)
| – IBM PL/I for MVS & VM (5688–235)
| – IBM C/C++ for MVS (5655–121)
| – SAA AD/Cycle COBOL/370 (5688–197)
| – SAA AD/Cycle PL/I (5688–235)
| – SAA AD/Cycle C/370 (5688–216)
| If, for some reason, you choose not to use OS/390 LE, the alternative is to install
| runtime support in CICS for each of the old compilers used to compile your
| application programs (VS COBOL II, OS/VS COBOL, PL/I, and C). However, this is
| not recommended.
| Installing runtime support for the old compilers is discussed under “Native language
| support for non-LE compilers” on page 30.
| OS/390 LE support
| This section describes CICS support for OS/390 LE and what to do to install that
| support.
| LE initialization takes place during CICS startup, when CICS issues message
| DFHAP1203I applid Language Environment/370 is being initialized. The
| CEECCICS module is loaded, followed by a partition initialization call to it, before
| the start of second phase PLT processing. If LE cannot successfully complete the
| initialization of all languages supported by CICS, or can only initialize some of them,
| it issues messages to the MVS console. If LE initialization fails completely, it may
| be because the CEECCICS module could not be loaded, or something went wrong
| during the loading of a particular language routine.
| For example:
| //* CICS APF-authorized libraries
| //STEPLIB DD DSN=hlq.CICS.SDFHAUTH,DISP=SHR
| // DD DSN=hlq.LE.SCEERUN,DISP=SHR
| //* CICS load libraries
| //DFHRPL DD DSN=hlq.CICS.SDFHLOAD,DISP=SHR
| // DD DSN=hlq.LE.SCEECICS,DISP=SHR
| // DD DSN=hlq.LE.SCEERUN,DISP=SHR
| Use only these LE runtime libraries for all your high-level language application
| programs, including those compiled with old, non-LE-conforming compilers, such
| as VS COBOL II and OS/VS COBOL.
For your application programs, CICS can create and install program resource
definitions automatically or you can create them specifically in the CSD, and install
them by using the GRPLIST system initialization parameter or CEDA INSTALL
command. For more information about installing program resource definitions, see
“Defining programs, map sets, and partition sets to CICS” on page 56.
If you use Version 3 Release 2, or later, of the C/C++ compiler to compile a C++
program, specify the CXX parameter when options are passed to the compiler;
otherwise, the C compiler is invoked. Do not specify CXX if a C program is to be
compiled. See the IBM C/C++ for MVS/ESA Compiler and Run-Time Migration
Guide Version 3 Release 2, SC33-2002, for further information.
For information about LE support for programming languages, see the Program
| Directory for IBM Language Environment for MVS and VM.
Note: To use VS COBOL II, you need the CICS-VS COBOL II interface module,
IGZECIC, in an APF-authorized library in the CICS STEPLIB
concatenation. Do not put it in the LPA, because the LPA is not searched
for this module.
2. Include the libraries containing the VS COBOL II library routines in the DFHRPL
concatenation of your CICS startup JCL. VS COBOL II requires two packages of
subroutines, known as COBPACKs. These subroutines are in two categories: (1)
general and (2) environment-specific, containing system-specific logic. The
COBPACKs you need are:
IGZCPCC
This module contains the CICS environment-specific modules, and is
supplied in the SYS1.COB2CICS library.
IGZCPAC
This module contains the general VS COBOL II subroutines, and is
supplied in the SYS1.COB2LIB library.
If you choose to define the VS COBOL II COBPACKs specifically, you can use the
following commands:
DEFINE PROGRAM(IGZCPCC) GROUP(cob2grp) LANGUAGE(ASSEMBLER) CEDF(NO)
DEFINE PROGRAM(IGZCPAC) GROUP(cob2grp) LANGUAGE(ASSEMBLER) CEDF(NO)
ADD GROUP(cob2grp) LIST(listname)
For information about installing VS COBOL II support for CICS and about running
VS COBOL II applications with CICS, see the VS COBOL II Installation and
Customization manual.
| CICS loader also loads ILBOCOM from the DFHRPL concatenation. Ensure that
| this module, from the OS/VS COBOL runtime library, is available in the CICS
| DFHRPL concatenation.
Note: Locale is the term defined by the American National Standard for
Information Systems (ANSI) to denote a C/370 programming
language environment for a given national language. The
C/370-supplied locales provide a C/370 programming language
environment for German (EDC$GERM), American English
(EDC$USA), French (EDC$FRAN), Italian (EDC$ITAL) and Spanish
(EDC$SPAI).
For information about C/370, see the IBM C/370 Programming Guide.
Note: CICS run-time support for PL/I Version 2.3 also supports programs compiled
against earlier releases of PL/I.
The group of CSD definitions supplied by CICS in earlier releases, DFHPLI, is not
supplied in CICS TS, nor is it supplied in one of the compatibility groups. For an
explanation about compatibility groups in general, see Sharing the CSD between
different releases of CICS and “CICS-supplied compatibility groups” on page 146.
(The CICS Resource Definition Guide lists the contents of each compatibility group.)
When you have completed the stage 2 jobs, which link-edit the PL/I shared library
modules into the SYS1.PLIBASE library (or another suitable library), ensure that
modules IBMBPSLA and IBMBPSMA are also installed in one of the libraries in the
CICS DFHRPL library concatenation (for example, the CICSTS13.CICS.SDFHLOAD
library) or in the LPA.
When you start up CICS, it attempts to load the modules IBMBPSLA and
IBMBPSMA into the CICS nucleus. If this load fails (for example, because the
modules are not found), PL/I shared library support is not available.
Also, run the job shown in Figure 11, to link-edit module PLISHRE into the
CICSTS13.CICS.SDFHLOAD library.
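A minimal sketch of a link-edit job of this kind, assuming that PLISHRE is picked up
from the SYS1.PLIBASE library and that the job name and work data set shown are
placeholders, might be:
//LINKSHR  JOB 1,user_name,MSGCLASS=A
//LKED     EXEC PGM=IEWL,PARM='LIST,XREF'
//SYSLIB   DD DSN=SYS1.PLIBASE,DISP=SHR
//SYSLMOD  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD *
  INCLUDE SYSLIB(PLISHRE)
  NAME PLISHRE(R)
/*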
This section also outlines the steps to install application programs to run under
CICS. (See “Overview of installing application programs” on page 40.)
The DFHMAPS procedure writes the symbolic map set output to the library
specified on the DSCTLIB parameter, which defaults to the
CICSTS13.CICS.SDFHMAC library. If you want to include symbolic map sets in a
user copy library:
v Specify the library name by the DSCTLIB=name operand on the EXEC statement
for the DFHMAPS procedure used to install physical and symbolic map sets
together.
v Include a DD statement for the user copy library in the SYSLIB concatenation of
the job stream used to assemble and compile the application program.
If you choose to let the DFHMAPS procedure write the symbolic map sets to the
CICSTS13.CICS.SDFHMAC library (the default), include a DD statement for the
CICSTS13.CICS.SDFHMAC library in the SYSLIB concatenation of the job
stream used to compile the application program. This is not necessary for the
DFHEITAL procedure used to assemble assembler-language programs, because
these jobs already include a DD statement for the CICSTS13.CICS.SDFHMAC
library in the SYSLIB concatenation.
v For PL/I, specify a library that has a block size of 400 bytes. This is necessary to
overcome the blocksize restriction on the PL/I compiler.
For more information about installing map sets, see “Chapter 3. Installing map sets
and partition sets” on page 13. For information about writing programs to use BMS
services, see the CICS Application Programming Guide.
If you do not specify any AMODE or RMODE attributes for your program, MVS
assigns the system defaults AMODE(24) and RMODE(24). To override these
defaults, you can specify AMODE and RMODE in one or more of the following
places. Assignments earlier in the list override assignments later in the list.
1. On the linkage editor MODE control statement:
MODE AMODE(31),RMODE(ANY)
2. Either of the following:
a. In the PARM string on the EXEC statement of the linkage editor job step:
//LKED EXEC PGM=IEWL,PARM='AMODE(31),RMODE(ANY),..'
b. On the LINK TSO command, which causes processing equivalent to that of
the EXEC statement in the linkage editor step.
3. On AMODE or RMODE statements within the source code of an assembler
program. (You can also set these modes in COBOL by means of the compiler
options; for information about COBOL compiler options, see the relevant
application programming guide for your COBOL compiler.)
4. The link-edit modules DFHECI and DFHEPI assign AMODE(31) and
RMODE(ANY) to COBOL and PL/I programs.
For information about these modes and the rules that govern their use, see the
DFSMS/MVS Program Management.
The following example shows linkage editor control statements for a program coded
to 31-bit standards:
//LKED.SYSIN DD *
MODE AMODE(31),RMODE(ANY)
NAME anyname(R) ("anyname" is your load module name)
/*
//
If there is not enough storage for a task to load a program, the task is suspended
until enough storage becomes available. If any of the DSAs get close to being short
on storage, CICS frees the storage occupied by programs that are not in use. (For
more information about the dynamic storage areas in CICS, see “Storage
protection” on page 353.)
Instead of making RMODE(24) programs resident, you can make them non-resident
and use the library lookaside (LLA) function. The space occupied by such a
program is freed when its usage count reaches zero, making more virtual storage
available. LLA keeps its library directory in storage and stages (places) copies of
LLA-managed library modules in a data space managed by the virtual lookaside
facility (VLF). CICS locates a program module from LLA’s directory in storage,
rather than searching program directories on DASD. When CICS requests a staged
module, LLA gets it from storage without any I/O.
If you want CICS to use modules that you have written to these standards, and
installed in the LPA, specify USELPACOPY(YES) on the program resource
definitions in the CSD.
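For example, a program definition that tells CICS to use the LPA-resident copy
might look like this (the program and group names are placeholders):
CEDA DEFINE PROGRAM(progname) GROUP(grpname) USELPACOPY(YES)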
For information about installing CICS modules in the LPA, see the CICS Transaction
Server for OS/390 Installation Guide.
Note: OS/VS COBOL or pre-CICS/VS 1.6 (DFHE type program stub) programs are
not re-entrant and therefore cannot be loaded into read-only storage.
Programs that are not eligible to reside above 16MB, and are read-only, can reside
in the CICS read-only DSA (RDSA) below 16MB. Therefore, to be eligible for the
RDSA, programs must be:
v Properly written to read-only standards
v Link-edited with the RENT attribute
ERDSA requirements for the specific languages are described in the following
sections.
Assembler
If you want CICS to load your assembler programs in the ERDSA, assemble and
link-edit them with the RENT option on both the assembly and the link-edit steps.
Note: If you specify these options, ensure that the program is truly read-only (that
is, does not modify itself in any way—for example, by writing to static
storage), otherwise storage exceptions occur. The program must also be
written to 31-bit addressing standards. See the CICS Problem Determination
Guide for some possible causes of storage protection exceptions in
programs resident in the ERDSA.
| Note: The CICS EXEC interface module for assembler programs (DFHEAI)
| specifies AMODE(ANY) and RMODE(ANY). However, because the
| assembler defaults your application to AMODE(24) and RMODE(24), the
| resulting load module also becomes AMODE(24) and RMODE(24).
| C and C++
If you want CICS to load your C and C++ programs into the ERDSA, compile and
link-edit them with:
1. The RENT compiler option.
2. The RENT link-edit option.
The CICS-supplied procedures DFHYITDL (for C) and DFHYITEL (for C++) have a
LNKPARM parameter that specifies a number of link-edit options. To link-edit an
ERDSA-eligible program, override this parameter from the calling job, and add
RENT to the other options you require. You do not need to add the RMODE(ANY)
option, because the CICS EXEC interface module for C/370 (DFHELII) is link-edited
with AMODE(31) and RMODE(ANY).
The following sample job statements show the LNKPARM parameter with the RENT
option added:
| //C370PROG JOB 1,user_name,MSGCLASS=A,CLASS=A,NOTIFY=userid
| //YITDL EXEC DFHYITDL,
| .
| (other parameters as necessary)
| .
| // LNKPARM='LIST,MAP,LET,XREF,RENT'
COBOL
| LE-conforming COBOL and VS COBOL II programs are automatically eligible for
| the ERDSA, because:
| v If you use the translator option, CBLCARD (the default), the required compiler
| option, RENT, is included automatically on the CBL statement generated by the
| CICS translator. If you use the translator option, NOCBLCARD, specify the RENT
| option either on the PARM statement of the compile job step, or by using the
| COBOL macro IGYCOPT to set installation-defined options.
| v The COBOL compiler automatically generates code that conforms to read-only
| and 31-bit addressing standards.
| v The CICS EXEC interface module for COBOL (DFHECI) is link-edited with
| AMODE(31) and RMODE(ANY). Therefore, your program is link-edited as
| AMODE(31) and RMODE(ANY) automatically when you include the CICS EXEC
| interface stub.
| You also need to specify the reentrant attribute to the linkage-editor. The
CICS-supplied procedure, DFHYITVL (and also DFHEITVL), has a LNKPARM
parameter that specifies a number of link-edit options. To link-edit an
ERDSA-eligible program, override this parameter from the calling job, and add
RENT to any other options you require. For example:
| //COB2PROG JOB 1,user_name,MSGCLASS=A,CLASS=A,NOTIFY=userid
| //YITVL EXEC DFHYITVL,
| .
| (other parameters as necessary)
| .
| // LNKPARM='LIST,XREF,RENT'
PL/I
CICS PL/I programs are generally eligible for the ERDSA, provided they do not
modify static storage. The following requirements are enforced, either by CICS or
PL/I:
v The required REENTRANT option is included automatically, by the CICS
translator, on the PL/I PROCEDURE statement.
v The PL/I compiler automatically generates code that conforms to 31-bit
addressing standards.
v The CICS EXEC interface module for PL/I (DFHEPI, which is part of the PL/I
DFHPL1OI module) is link-edited with AMODE(31) and RMODE(ANY). Therefore,
your program is link-edited as AMODE(31) and RMODE(ANY) automatically
when you include the CICS EXEC interface stub.
Note: Do not specify the RENT attribute on the link-edit step unless you have
ensured the program is truly read-only (and does not, for example, write to
static storage), otherwise storage exceptions will occur. See the CICS
Problem Determination Guide for some possible causes of storage protection
exceptions in programs resident in the ERDSA.
However, this can increase the search time when loading modules from the
secondary extents. You should avoid using secondary extents if possible.
If you have macro-level programs from an earlier release of CICS, recode them as
command-level programs. Furthermore, references to the CSA or to the TCA are
not allowed. You can specify YES for the system initialization parameter DISMACP
to cause CICS to disable any transaction whose program invokes a CICS macro or
references the CSA or the TCA.
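For example, assuming you supply system initialization parameter overrides through
the SYSIN data set of the CICS startup job, the override could be coded as:
//SYSIN    DD *
DISMACP=YES,
.END
/*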
In this COBOL example, the symbolic parameter STUB defaults to DFHEILIC. The
DFHEILIC member contains the statement INCLUDE SYSLIB(DFHECI).
In the DFHEILIP module, after the INCLUDE statement, there is the REPLACE
PLISTART linkage editor command. This command causes the CSECT PLISTART,
which is inserted by the compiler, to be removed because equivalent function is in
the stub DFHPL1OI. The REPLACE PLISTART command is needed for programs
to run under Language Environment, but is optional for other PL/I programs that run
under CICS.
For more information about linkage editor requirements, see “Using your own job
streams” on page 53.
Figure 12. Installing assembler language programs using the DFHEITAL procedure
Figure 13. Sample job control statements to call the DFHwxTAL procedures
Notes:
1 If you are installing a program into either of the read-only DSAs, see “Preparing
application programs to run in the RDSAs” on page 37 for more details.
If you are installing a program that is to be used from the LPA, add:
v RENT to the PARM options in the EXEC statement for the ASM step of the
DFHEITAL procedure
v RENT and REFR options to the LNKPARM parameter on the call to the
DFHEITAL procedure
(See “Preparing applications to run in the link pack area” on page 36.)
2 For information about the translator options you can include on the XOPTS
statement, see Specifying translator options in the CICS Application Programming
Guide.
(Figure: installing COBOL and PL/I programs. The command-level language translator from CICS.SDFHLOAD processes the source; the high-level language compiler uses CICS.SDFHCOB or SDFHPL1, with the DFHBMSCA and DFHAID copybooks; and the linkage editor uses the DFHEILIC or DFHEILIP include members, the DFHECI or DFHEPI interface stubs, DFHPL1OI, and SYS1.PLIBASE or the COBOL library to produce a load module in CICS.SDFHLOAD.)
| For information about adding CICS support for OS/390 LE, see “Installing CICS
| support for Language Environment” on page 27. For information about adding CICS
| support for VS COBOL II, see “Installing CICS support for VS COBOL II” on
| page 30.
|
Figure 15. Sample job control statements to call the DFHwxTVL procedures
1 Translator options:
Specify the following translator options according to the version of the COBOL
compiler invoked in the compile step.
OOCOBOL
for the IBM COBOL for MVS and VM compiler. This option is only needed if
the object-oriented syntax (such as class-id and method-id) is used in the
application program. The OOCOBOL option implies the COBOL3, ANSI85
and COBOL2 translator options.
COBOL3
for one of the LE-conforming COBOL compilers. COBOL3 implies the
ANSI85 and COBOL2 translator options.
ANSI85
for the VS COBOL II compiler. This option specifies that the translator is to
translate VS COBOL II programs that implement the ANSI85 standards.
ANSI85 implies the COBOL2 option.
COBOL2
for the VS COBOL II compiler.
Compiler options:
The PARM statement of the COB step in DFHwxTVL specifies values for the
compiler options. For example,
//COB EXEC PGM=IGYCRCTL,REGION=&REG,
// PARM='NODYNAM,LIB,OBJECT,RENT,RES,APOST,MAP,XREF'
It does not specify values for the SIZE and BUF options. The defaults are
SIZE=MAX, and BUF=4K. SIZE defines the amount of virtual storage available to
the compiler, and BUF defines the amount of dynamic storage to be allocated to buffers.
Ensure that the APOST|QUOTE option in effect for the COBOL compiler matches
that for the translator.
There is no BATCH compiler option for VS COBOL II. For information about VS
COBOL II compiler options, see the VS COBOL II Application Programming Guide.
You can change compiler options by using any of the following methods:
v By overriding the PARM statement defined on the COB step of the DFHwxTVL
procedure.
If you specify a PARM statement in the job that calls the procedure, it overrides
all the options specified in the procedure JCL. Ensure that all the options you
want are specified in the override, or in a CBL statement.
v By specifying a CBL statement at the start of the source statements in the job
stream used to call the DFHwxTVL procedure.
v By using the COBOL installation defaults macro, IGYCOPT.
This is needed if you do not use a CBL statement; that is, if you have specified
the translator option NOCBLCARD and the compiler option ALOWCBL=NO.
For information about the translator option CBLCARD|NOCBLCARD, see the CICS
Application Programming Guide. If you choose to use the NOCBLCARD option,
also specify the COBOL compiler option ALOWCBL=NO to prevent error
message IGYOS4006-E from being issued. The ALOWCBL=NO option can be
overridden at compile time by the JCL PARM option or a TSO command. For more
information about the ALOWCBL compiler option, see the relevant Installation and
Customization manual for your version of COBOL.
2 If you have no input for the translator, you can specify DD DUMMY instead of DD *.
However, if you specify DD DUMMY, also code a suitable DCB operand. (The
translator does not supply all the data control block information for the SYSIN data
set.)
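For example, assuming the translator step in the procedure is named TRN, and
assuming fixed-length 80-byte records for the translator input, the override could be
coded as:
//TRN.SYSIN DD DUMMY,DCB=(RECFM=F,LRECL=80,BLKSIZE=80)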
3 The translator options on the XOPTS statement override similar options in the
DFHwxTVL procedure.
| Any translator options you specify should include the type of COBOL translator
| option, COBOL2 or COBOL3.
For information about the translator options you can include on the XOPTS
statement, see the Specifying translator options in the CICS Application
Programming Guide.
4 You can ignore weak external references unresolved by the linkage editor.
The link-edit job step requires access to the libraries containing the
environment-specific modules for CICS, the general VS COBOL II library
subroutines, and the Language Environment link-edit modules, as appropriate. The
required libraries are included in the SYSLIB concatenation of procedures for VS
COBOL II. Override or change the names of these libraries if the modules and
library subroutines are installed in libraries with different names.
If you are installing a program that is to be used from the LPA, add the RENT and
REFR options to the LNKPARM parameter on the call to the DFHEITVL procedure.
(See “Preparing applications to run in the link pack area” on page 36.)
For example, assuming the PL/I application is made up of members PLIAPP and
EXTSUB, which reside in the library defined by ddname L1, and that the SYSLIB
data set contains DFHPL1OI, then the following linkage editor statements should be
used:
INCLUDE SYSLIB (DFHPL1OI)
Ensure that DFHPL1OI is at the head of the load module.
REPLACE PLISTART
Delete unwanted CSECTs from following INCLUDE.
INCLUDE L1 (PLIAPP)
Application procedure (1).
REPLACE PLISTART
Delete unwanted CSECTs from following INCLUDE.
INCLUDE L1 (EXTSUB)
Application procedure (2).
INCLUDE DFHSHRE (PLISHRE)
Optional. Use if shared library is to be used.
NAME APROG (R)
Optional. Defines name of load module.
For more information about preparing PL/I programs, see the OS PL/I Version 2
Programming Guide, SC26-4307-02.
Figure 16. Sample job control statements to call the DFHwxTPL procedures
1 In the DFHEITPL procedure, the link-edit step includes module DFHPL1OI. This
module is generated during the installation of PL/I and is normally placed either in
the CICSTS13.CICS.SDFHLOAD library, the SYS1.PLIBASE library, or a user
library. Find out which library contains this module, and either copy the module to
the CICSTS13.CICS.SDFHLOAD library or concatenate the appropriate library in
LKED.SYSLIB.
If you include the PL/I REPORT and COUNT execution time options, output goes to
the CPLI transient data destination. There is an example of this transient data
queue coded in the sample destination control table in the
CICSTS13.CICS.SDFHSAMP library.
2 If you have no input for the translator, you can specify DD DUMMY instead of DD *.
However, if you specify DD DUMMY, also code a suitable DCB operand. (The
translator does not supply all the data control block information for the SYSIN data
set.)
For information about the translator options you can include on the XOPTS
statement, see “Specifying translator options” in the CICS Application
Programming Guide.
Ignore the message from the PL/I compiler: “IEL0548I PARAMETER TO MAIN
PROCEDURE NOT VARYING CHARACTER STRING”.
Warning messages may appear from the PL/I compiler stating that arguments and
parameters do not match for calls to procedure DFHxxxx. These messages indicate
that arguments specified in operands to CICS commands may not have the correct
data type. Carefully check all fields mentioned in these messages, especially
receiver fields.
You can ignore weak external references unresolved by the linkage editor.
If you are installing a program into either of the read-only DSAs, see “Preparing
application programs to run in the RDSAs” on page 37 for more details.
If you are installing a program that is to be used from the LPA, add the RENT and
REFR options to the LNKPARM parameter on the call to the DFHEITPL procedure.
(See “Preparing applications to run in the link pack area” on page 36 for more
information.)
To use the PL/I shared library facility, generate the module PLISHRE (see
“Generating PL/I shared library support for CICS” on page 33) before you compile
and link-edit your application programs. When you have re-link-edited the PLISHRE
module into the CICSTS13.CICS.SDFHLOAD library, put the INCLUDE
SYSLIB(PLISHRE) control statement immediately after the INCLUDE
SYSLIB(DFHPL1OI) statement in the DFHEILIP member in the
CICSTS13.CICS.SDFHPL1 library. Also, code a LKED.SYSLIB DD statement to
concatenate the library that contains the PLISHRE module in front of the
SYS1.PLIBASE library. For example:
//LKED.SYSLIB DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
// DD DSN=SYS1.PLIBASE,DISP=SHR
Before you can install any C programs, you must have installed the C library and
compiler, and have generated CICS support for C. (See “Installing CICS support for
C/370” on page 32.)
(Figure not reproduced: preparing a C/370 application program for CICS. The
original figure shows the source passing through the CICS command-level language
translator, the high-level language compiler, the pre-linkage editor, and the linkage
editor, using the CICS SDFHLOAD library, the EDC header, message, compiler, link,
and base libraries, and the DFHBMSCA, DFHAID, DFHEILID, and DFHELII members.)
1 Compiler options:
You can code compiler options by using the parameter override (PARM.C) in the
EXEC statement that invokes the procedure, or by a C statement at the start of the
source statements. If you use a C statement, you need a parameter override of
BATCH on the EXEC PROC=DFHEITyL statement.
You can compile your C/370 applications under Version 1 Release 2 of the C/370
compiler and run them under Version 1 Release 2 of the C/370 library.
2 If you have no input for the translator, you can specify DD DUMMY instead of DD *.
However, if you specify DD DUMMY, also code a suitable DCB operand. (The
translator does not supply all the data control block information for the SYSIN data
set.)
3 Translator options: For information about the translator options you can
include on the XOPTS statement, see “Specifying translator options” in the CICS
Application Programming Guide.
4 If you are installing a program into either of the read-only DSAs, see “Preparing
application programs to run in the RDSAs” on page 37 for more details.
If you are installing a program that is to be used from the LPA, add the RENT and
REFR options to the LNKPARM parameter on the call to the DFHEITyL procedure.
(See “Preparing applications to run in the link pack area” on page 36 for more
information.)
The rest of this section summarizes the important points about the translator and
each of the main categories of program. For simplicity, the following discussion
states that you load programs into CICSTS13.CICS.SDFHLOAD or IMS.PGMLIB. In
fact, you can use any libraries, but only when they are either included in the
DFHRPL library concatenation in the CICS job stream, or included in the STEPLIB
library concatenation in the batch job stream (for a stand-alone IMS batch program).
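For example, a minimal sketch of a DFHRPL concatenation in the CICS startup
JCL that adds a user application library to the CICS-supplied library; the library
name MY.USER.APPLIB is illustrative:
//DFHRPL DD DSN=MY.USER.APPLIB,DISP=SHR
// DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR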
Note: The IMS libraries referred to in the job streams are identified by IMS.libnam
(for example IMS.PGMLIB). If you use your own naming convention for IMS
libraries, please rename the IMS libraries accordingly.
Translator requirements
The CICS translator requires a minimum of 256KB of virtual storage. You may need
to use the translator options CICS and DLI.
If you have no input for the translator, you can specify DD DUMMY instead of DD *.
However, if you specify DD DUMMY, also code a suitable DCB operand. (The
translator does not supply all the data control block information for the SYSIN data
set.)
In the CICS-supplied procedures, the input to the linkage editor step (defined by
the SYSLIN DD statement) concatenates a library member with the object deck.
This member contains an INCLUDE statement for the required interface module.
For example, the DFHEITVL procedure concatenates the library member
DFHEILIC, which contains the following INCLUDE statement:
INCLUDE SYSLIB(DFHECI)
3. Place the load module output from the linkage editor (defined by the SYSLMOD
DD statement) in CICSTS13.CICS.SDFHLOAD, or a user-defined application
program library.
Figure 19 shows sample JCL and an inline procedure, based on the CICS-supplied
procedure DFHEITVL, that can be used to install VS COBOL II application
programs. The procedure does not include the COPYLINK step and concatenation
of the library member DFHEILIC that contains the INCLUDE statement for the
required interface module (as included in the DFHEITVL procedure). Instead, the
JCL provides the following INCLUDE statement:
INCLUDE SYSLIB(DFHECI)
If this statement was not provided, the linkage editor would return an error message
for unresolved external references, and the program output would be marked as not
executable.
For information about defining programs to CICS, see the CICS Resource Definition
Guide.
CICS can provide DL/I database support by using the IBM product Information
Management System/Enterprise Systems Architecture (IMS/ESA) Database
Manager Version 3 (5665-408) Release 1 or later.
Note: CICS/ESA 4.1 was the last release to support local DL/I.
The IMS libraries referred to in the job streams are identified by IMS.libnam (for
example IMS.PGMLIB). If you use your own naming convention for IMS libraries,
please rename the IMS libraries accordingly.
Note: Not all releases of IMS can be used with the storage protection facilities
available in CICS. There are restrictions on the use of storage protection
when running DBCTL. Table 10 summarizes the IMS releases that can be
used with storage protection.
Table 10. Summary of IMS releases that can be used with storage protection
Release of IMS CICS with DBCTL
IMS/ESA 3.1 Storage protection not available
IMS/ESA 4.1 Storage protection
Later IMS/ESA releases Storage protection
For more information about storage protection, see “Storage protection” on page
353.
PDIRs
A directory of program specification blocks (PDIR) is a list of program specification
blocks (PSBs) that define, for DL/I, the use of databases by application programs.
Your CICS region needs a PDIR to access a database owned by a remote CICS
region (remote DL/I support). Your CICS region does not need a PDIR to access a
DL/I database owned by DBCTL. For information about accessing DL/I databases
owned by DBCTL, see the CICS IMS Database Control Guide.
Figure 20. Using CICS remote DL/I support to access DBCTL databases
Notes:
1. CICSB uses remote DL/I to access, via CICSA, databases owned by DBCTL 1
in MVS image 1. This is only needed if CICSB is not connected to DBCTL 1.
2. CICSB uses remote DL/I to access, via CICSC, databases owned by DBCTL 2
in MVS image 2.
3. CICSA (connected to DBCTL 1) is in the same MVS image as DBCTL 1. CICSC
(connected to DBCTL 2) is in the same MVS image as DBCTL 2.
For information about accessing DL/I databases owned by DBCTL, see the CICS
IMS Database Control Guide.
For details of these (and other) system initialization parameters, see “Chapter 21.
| CICS system initialization parameters” on page 215.
If you are running CICS with XRF, see “XRF considerations” on page 74.
VTAM terminals
If your CICS system is to communicate with terminals or other systems using VTAM
services, you must:
1. Define CICS to ACF/VTAM with an APPL statement in SYS1.VTAMLST. For
more information about defining an APPL statement for CICS, see the CICS
Transaction Server for OS/390 Installation Guide.
2. Define to VTAM the terminal resources that CICS is to use. For more
information about defining terminal resources to VTAM, see “Defining CICS
terminal resources to VTAM”.
3. Define to CICS the terminal resources that it is to use. For more information
about defining terminal resources to CICS, see “Defining terminal resources to
CICS” on page 62.
You define terminals, controllers, and lines in VTAM tables 2 as nodes in the
network. Each terminal, or each logical unit (LU) in the case of SNA terminals, must
be defined in the VTAM tables with a VTAM node name that is unique throughout
the VTAM domain.
If you are using VTAM 3.3 or later, you can define the AUTINSTMODEL name,
printer, and alternate printer to VTAM by using VTAM MDLTAB and ASLTAB
macros. These definitions are passed to CICS to select autoinstall models and
printers.
For information about defining resources to VTAM, see the ACF/VTAM Installation
and Resource Definition manual.
If a terminal does not have an explicit definition in the CSD, CICS can create and
install a definition dynamically for the terminal when it logs on, using the CICS
autoinstall facility. CICS can autoinstall terminals by reference to TYPETERM and
model TERMINAL definitions created with the AUTINSTMODEL and
AUTINSTNAME attributes. For information about TYPETERM and TERMINAL
definitions, see the CICS Resource Definition Guide.
If you use autoinstall, you must ensure that the CICS resource definitions correctly
match the VTAM resource definitions. For programming information about VTAM
logmode definitions and their matching CICS autoinstall model definitions, see the
CICS Customization Guide.
If you specify the system initialization parameter TCTUALOC=ANY, CICS stores the
terminal control table user area (TCTUA) for VTAM terminals above the 16MB line if
possible. (See page 302 for more information about the TCTUALOC parameter.)
2. VTAM has tables describing the network of terminals with which it communicates. VTAM uses these tables to manage the flow of
data between CICS and the terminals.
Note: If you do not want a time limit (that is, you assume that all terminals never
hang), specify the TCSWAIT=NO system initialization parameter.
TCAM terminals
CICS supports the DCB interface of ACF/TCAM (also known as the GET/PUT
interface) in an SNA or non-SNA environment. This section describes the DFHTCT
macros you must code to define the CICS terminals connected to this interface.
With the CICS support of the TCAM DCB interface, each TCAM communication line
has associated with it two “sequential” queues: the input process queue and the
output process queue. CICS routes messages for terminals connected using the
TCAM DCB interface to the queue named in the DEST option of the SEND and
CONVERSE commands.
CICS assumes that there is a user-written message control program (MCP) that
processes messages on the TCAM queues. The TCAM MCP is responsible for
polling and addressing terminals, translating code, and line control. Therefore, some
DFHTCT operands that are associated with such activities are irrelevant in the
TCAM DCB interface environment.
For programming information about the CICS/TCAM interface, see the CICS
Customization Guide.
You code one DFHTCT TYPE=SDSCI macro for each input queue, and one for
each output queue. The macros generate DCBs, corresponding to TPROCESS
blocks. CICS treats a queue like a communication line. Each queue is described by
a DFHTCT TYPE=LINE macro; this generates one TCT line entry (TCTLE) for each
queue.
Each input record from TCAM must contain the source terminal identification. CICS
uses this identification as a search argument to find the corresponding TCTTE (by
comparing against the NETNAME value for each TCTTE).
Note: The usual way to ensure that the input records contain the source terminal
identification is to specify OPTCD=W in the DFHTCT TYPE=SDSCI macro. If
you omit this specification, you are responsible for ensuring that the record
contains a suitable source terminal identification.
By using the POOL feature (POOL=YES on the DFHTCT TYPE=LINE macro), you
can establish a pool of common TCTTEs on the output TCTLE that do not contain
terminal identifiers. As required, terminal identifiers are assigned to the TCTTEs or
removed from association with the TCTTEs. For programming information about the
TCTTEs, see the CICS Customization Guide.
The two data sets defined by the DFHTCT TYPE=SDSCI macros simulate a CICS
terminal known by the name specified in the TRMIDNT operand of the DFHTCT
TYPE=TERMINAL macro. The DSCNAMEs of the input and output data sets must
be specified in the ISADSCN and OSADSCN operands of the DFHTCT TYPE=LINE
macro respectively.
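As an illustration only, the following sketch shows how these macros might fit
together, using only the operands discussed in this section. The names TCAMIN,
TCAMOUT, and T001 are illustrative, and a working definition needs further
operands (such as access method and device attributes) that are described in the
CICS Resource Definition Guide:
DFHTCT TYPE=SDSCI,DSCNAME=TCAMIN,OPTCD=W,...    input process queue
DFHTCT TYPE=SDSCI,DSCNAME=TCAMOUT,...           output process queue
DFHTCT TYPE=LINE,ISADSCN=TCAMIN,OSADSCN=TCAMOUT,POOL=YES,...
DFHTCT TYPE=TERMINAL,TRMIDNT=T001,...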
You can also use two DASD data sets to simulate a terminal. You must code a DD
statement for each data set defined by an SDSCI macro, and the DD name on the
DD statement must be the name coded on the DDNAME (or DSCNAME) parameter
of the SDSCI macro. For example, you might code:
//DISKIN1 DD DSN=SIMULATD.TERMINAL.IN,
// UNIT=3380,DISP=OLD,VOL=SER=volid
//DISKOT1 DD DSN=SIMULATD.TERMINAL.OUT,
// UNIT=3380,DISP=OLD,VOL=SER=volid
Input from this simulated terminal is read from the DISKIN1 data set. Output to the
terminal is written to the DISKOT1 data set.
Each statement in the input file (from CARDIN or DISKIN1 in the examples used
above), must end with a character representing X'E0'. The standard EBCDIC
symbol for this end-of-data hexadecimal value is a backslash (\) character, and this
is the character defined to CICS in the pregenerated system. You can redefine this
for your installation on the EODI system initialization parameter; see “Chapter 21.
| CICS system initialization parameters” on page 215 for details.
Figure 21. Example of TCT definitions needed to use BSAM device for START commands
| Include the following DD statement in your CICS startup JCL to support the
| sequential device defined in Figure 21:
| //PRNT001 DD DUMMY
| Terminating
End-of-file does not terminate sequential input. You should use CESF GOODNIGHT
as the last transaction, to close the device and terminate reading from the device.
Otherwise, CICS invokes the terminal error program (DFHTEP), and issues the
messages in Table 11 at end-of-file on the sequential device:
Table 11. Warning messages if a sequential terminal is not closed
Message                                                              Destination
DFHTC2507 date time applid Input event rejected return code zz      CSMT
  {on line w/term|at term} termid {, trans} tranid {, rel line=} rr,time
DFHTC2500 date time applid {Line|CU|Terminal} out of service        CSMT
  {Term|W/Term} termid
Use of CESF GOODNIGHT puts the sequential device into RECEIVE status and
terminates reading from the device. However, if you close an input device in this
way, the receive-only status is recorded in the warm keypoint at CICS shutdown.
This means that the terminal is still in RECEIVE status in a subsequent warm start,
and CICS does not then read the input file.
You can also use CESF LOGOFF to close the device and terminate reading from
the device, but CICS still invokes DFHTEP to issue the messages in Table 11 at
end-of-file. However, the device is left in TTI status, and is available for use when
restarting CICS in a warm start.
If you want CICS to read from a sequential input data set, either during or following
a warm start, you can choose one of the following methods:
For programming information about the use of EXEC CICS INQUIRE and EXEC
CICS SET commands, see the CICS System Programming Reference manual.
For programming information about writing post initialization-phase programs, see
the CICS Customization Guide.
If you use BSAM devices for testing purposes, the final transaction to close down
CICS could be CEMT PERFORM SHUT.
Console devices
You can operate CICS from a console device 3 .
You can use a terminal as both a system console and a CICS terminal. To enable
this, you must define the terminal as a console in the CSD. (You cannot define
consoles in the TCT.)
Suitably authorized TSO users can enter MODIFY commands from terminals
connected to TSO. To enable this, define the TSO user as a console device in the
CSD.
You can use each console device for normal operating system functions and to
invoke CICS transactions. In particular, you can use the console device for CICS
master terminal functions to control CICS terminals or to control several CICS
regions in conjunction with multiregion operation. Consequently, you can be a
master terminal operator for several CICS regions.
You can also use console devices to communicate with alternate CICS regions if
you are using XRF. Such communication is limited to the CICS-supplied transaction,
CEBT.
3. A console device can be a locally-attached system console, a TSO user defined as a console, or an automated process such as
NetView.
| System consoles
| System consoles are defined to MVS in the SYS1.PARMLIB library, in a
| CONSOLnn member, which defines attributes such as NAME, UNIT, and SYSTEM.
| The name is the most significant attribute, because it is the name that CICS uses to
| identify the console. The name is passed to CICS on an MVS MODIFY command.
| Note that although consoles also have a numeric identifier, this is allocated by MVS
| dynamically during IPL, and its use is not recommended for defining consoles to
| CICS.
| For information about defining console devices to MVS, see the OS/390 MVS
| Initialization and Tuning Reference.
| For information about defining MVS consoles to CICS, see “Defining MVS consoles
| to CICS”.
| Note: The TSO user issuing the CONSOLE command can use the NAME option to
| specify a console name different from the TSO user ID.
| To communicate with a CICS region from TSO or SDSF, you need to install a CICS
| console definition that specifies the TSO user ID (or the name specified on the
| console command) as the console name.
| For information about the TSO CONSOLE command, see the OS/390 TSO/E
| System Programming Command Reference, SC28-1972
| For information about defining TSO users to CICS, see “Defining TSO users as
| console devices” on page 70.
Figure 22. Defining consoles and a TSO user in the CSD using DFHCSDUP
For an example of the DEFINE command required to define a TSO user, see
Figure 22.
For information about defining consoles (or terminals) with preset security, see the
CICS RACF Security Guide.
| Having defined the console devices in the CSD, ensure that their resource
| definitions are installed in the running CICS region. You can install the definitions in
| one of two ways, as follows:
| 1. Include the group list that contains the resource definitions on the GRPLIST
| system initialization parameter in the CICS startup job.
| 2. During CICS execution, install the console device group by using the RDO
| command CEDA INSTALL GROUP(groupname), where groupname is the name
| of the resource group containing the console device definitions.
| DFHLIST, the CICS-defined group list created when you initialize the CSD with the
| DFHCSDUP INITIALIZE command, does not include any resource definitions for
| console devices. However, the CSD is initialized with 2 groups that contain console
| definitions:
| DFH$CNSL
| This group contains definitions for three consoles. The group is intended for
| use with the installation verification procedures and the CICS-supplied
| If you decide to create new terminal definitions for your console devices, you can
| specify the CICS-supplied TYPETERM definition, DFHCONS, on the
| TYPETERM(name) parameter. This TYPETERM definition for console devices is
| generated in the group DFHTYPE when you initialize the CSD.
| For information about TERMINAL definitions, see the CICS Resource Definition
| Guide.
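For example, a minimal DFHCSDUP DEFINE sketch for a TSO user acting as a
console; this is an illustration, not the CICS-supplied Figure 22, and the terminal
name CON1, group name MYCONS, and console name TSOUSER1 are your own
choices:
DEFINE TERMINAL(CON1) GROUP(MYCONS)
       TYPETERM(DFHCONS) CONSNAME(TSOUSER1)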
CICS support of persistent sessions includes the support of all LU-LU sessions
except LU0 pipeline and LU6.1 sessions. CICS determines for how long the
sessions should be retained from the PSDINT system initialization parameter. This
is a user-defined time interval. If a failed CICS is restarted within this time, it can
use the retained sessions immediately—there is no need for network flows to rebind
them.
You can change the interval using the CEMT SET VTAM command, or the EXEC
CICS SET VTAM command, but the changed interval is not stored in the CICS
global catalog, and therefore is not restored in an emergency restart.
During emergency restart, CICS restores those sessions pending recovery from the
CICS global catalog and the CICS system log to an “in session” state. This
happens when CICS opens its VTAM ACB.
The end user of a terminal sees different symptoms of a CICS failure following a
restart, depending on whether VTAM persistent sessions, or XRF, are in use:
v If CICS is running without VTAM persistent sessions or XRF, and fails, the user
sees the VTAM logon panel followed by the “good morning” message (if
AUTOCONNECT(YES) is specified for the TYPETERM resource definition).
v If CICS does have persistent session support and fails, the user perception is
that CICS is “hanging”: the screen on display at the time of the failure remains
Unbinding sessions
Sessions held by VTAM in a recovery pending state are not always reestablished
by CICS. CICS (or VTAM) unbinds recovery pending sessions in the following
situations:
v If CICS does not restart within the specified persistent session delay interval.
v If you perform a COLD start after a CICS failure.
v If CICS restarts with XRF=YES (when the failed CICS was running with
XRF=NO).
v If CICS cannot find a terminal control table terminal entry (TCTTE) for a session
(for example, because the terminal was autoinstalled with AIRDELAY=0
specified).
v If a terminal or session is defined with the recovery option (RECOVOPT) set to
UNCONDREL or NONE.
v If a connection is defined with the persistent session recovery option
(PSRECOVERY) set to NONE.
In all these situations, the sessions are unbound, and the result is as if CICS has
restarted following a failure without VTAM persistent session support.
There are some other situations where APPC sessions are unbound. For example,
if a bind was in progress at the time of the failure, sessions are unbound.
Without persistent session support, all sessions existing on a CICS system are lost
when that CICS system fails. In any subsequent restart of CICS, the rebinding of
sessions that existed before the failure depends on the terminal’s AUTOCONNECT
option. If AUTOCONNECT is specified for a terminal, the user of that terminal waits
until the GMTRAN transaction has run before being able to continue working. If
AUTOCONNECT is not specified for a terminal, the user of that terminal has no
way of knowing (unless told by support staff) when CICS is operational again.
For CICS persistent session support, you need the VTAM persistent LU-LU session
enhancements in VTAM 3.4.1 or later. CICS Transaction Server for OS/390 Release
3 functions with releases of VTAM earlier than 3.4.1, but in the earlier releases
sessions are not retained in a bound state in the event of a CICS failure.
XRF considerations
If you intend operating CICS with the extended recovery facility (XRF), there are
some more things you must consider when setting up your terminal network. For
example, in an XRF environment, SNA VTAM terminals can be XRF-capable. This
means that, if you have specified appropriate options in the TYPETERM definitions,
XRF backup sessions can be established in parallel with the active sessions. (For
guidance about defining extended recovery attributes for terminals, see the CICS
Resource Definition Guide.)
The ability of a terminal to receive this XRF support is not determined by CICS, but
by the terminal connection to CICS through ACF/NCP and ACF/VTAM. CICS gives
each terminal the best support possible, based on the parameters passed to it from
VTAM when the terminal logs on to CICS.
There are extra terminal definition keywords that enable you to control the manner
in which XRF-capable terminals are supported by CICS.
|
| Overview
| CICS provides a facility for generating unique sequence numbers for use by
| applications in a Parallel Sysplex environment (for example, to allocate a unique
| number for orders or invoices). This facility is provided by a named counter server,
| which maintains each sequence of numbers as a named counter. Each time a
| sequence number is assigned, the corresponding named counter is incremented
| automatically so that the next request gets the next number in sequence.
| The named counter server is modeled on the other coupling facility servers used by
| CICS, and has many features in common with the coupling facility data table server.
| A named counter server provides a full set of functions to define and use named
| counters. Each named counter consists of:
| v A 16-byte name
| v A current value
| v A minimum value
| v A maximum value.
| The values are internally stored as 8-byte (double word) binary numbers, but the
| user interface allows them to be treated as any length from 1 to 8 bytes, typically 4
| bytes.
| Named counters are stored in a pool of named counters, where each pool is a
| small coupling facility list structure, with keys but no data. The pool name forms part
| of the list structure name. Each named counter is stored as a list structure entry
| keyed on the specified name, and each request for the next value requires only a
| single coupling facility access.
| For information on how to create a list structure for use as a named counter pool,
| see “Defining a list structure” on page 90.
| For information about creating a loadable options table, see “Defining a named
| counter options table” on page 87.
|
| The named counter application programming interface
| You access the named counter through a callable interface, which can be used in
| CICS applications running in CICS key or user key, or used in batch jobs. The
| interface does not depend on CICS services; therefore, it can also be used in
| applications running under any release of CICS.
| The named counter interface does not use the CICS command-level API; therefore,
| the system initialization parameter CMDPROT=YES has no effect. If the interface is
| called from a CICS application program that is executing in user key, it switches to
| CICS key while processing the request, but CICS does not attempt to verify that the
| program has write access to the output parameter fields.
| The first request by a CICS region that addresses a particular pool automatically
| establishes a connection to the server for that pool. This connection is associated
| with the current MVS TCB (which for CICS is the quasi-reentrant (QR) TCB) and
| normally lasts until the TCB terminates at end of job. An application region can
| have only one connection at a time to each named counter pool, and the
| connection can be used only from the TCB under which the connection was
| established. A connection can only be created under another TCB by first
| terminating the existing connection using the NC_FINISH function, and then
| creating a new connection from another TCB.
| Note: The named counter server interface uses MVS name/token services
| internally. A consequence of this is that jobs using the named counter
| interface cannot use MVS checkpoint/restart services (as described in APAR
| OW06685).
| The syntax of the assembler version of the call to the named counter interface
| is as follows:
| CALL DFHNCTR,(function,return_code,pool_selector,counter_name, X
| value_length,current_value,minimum_value,maximum_value, X
| counter_options,update_value,compare_min,compare_max),VL
| The CALL macro must specify the VL option to set the end of list indication, as
| shown in the following example:
| CALL DFHNCTR,(NC_ASSIGN,RC,POOL,NAME,CTRLEN,CTR),VL
| C/C++
| The named counter interface definitions for C/C++ are provided in header file
| DFHNCC. The symbolic constant names are in upper case. The function name
| is dfhnctr, in lower case.
| COBOL
| The named counter interface definitions for COBOL are provided in copybook
| DFHNCCOB.
| COBOL does not allow underscores within names; therefore, the symbolic
| names provided in copybook DFHNCCOB use a hyphen instead of an
| underscore (for example, NC-ASSIGN and NC-COUNTER-AT-LIMIT).
| Note that the RETURN-CODE special register is set by each call, which affects
| the program overall return code if it is not explicitly set again before the
| program terminates.
| PL/I
| The named counter interface definitions for PL/I are provided in include file
| DFHNCPLI.
| Notes:
| 1. All functions that refer to a named counter require at least the first four
| parameters, but the remaining parameters are optional, and trailing unused
| parameters can be omitted.
| If you do not want to use an imbedded optional parameter, either specify the
| default value or ensure that the parameter list contains a null address for the
| omitted parameter. For an example of a call that omits an optional parameter,
| see “Example of DFHNCTR calls with null parameters” on page 84.
| 2. The NC_FINISH function requires the first three parameters only.
| function
| specifies the function to be performed, as a 32-bit integer, using one of the
| following symbolic constants.
| NC_CREATE Create a new named counter, using the initial value, range
| limits, and default options specified on the current_value,
| minimum_value, maximum_value, update_value and
| counter_options parameters.
| If you omit an optional value parameter, the new named counter
| is created using the default for the omitted value. For example,
| if you omit all the optional parameters, the counter is created
| with an initial value of 0, a minimum value of 0, and a maximum
| value of high values (the double word field is filled with X'FF').
| NC_ASSIGN Assign the current value of the named counter, then increment
| it ready for the next request. When the number assigned equals
| the maximum number specified for the counter, it is
| incremented finally to a value 1 greater than the maximum. This
| ensures that any subsequent NC_ASSIGN requests for the
| named counter fail (with NC_COUNTER_AT_LIMIT) until the
| counter is reset using the NC_REWIND function, or
| automatically rewound by the NC_WRAP counter option (see
| the counter_options parameter).
| This operation can include a conditional test on the current
| value of the named counter, using the compare_min and
| compare_max parameters.
| The server returns the minimum and maximum values if you
| specify these fields on the call parameter list and the request is
| successful.
| You can use the counter_options parameter on the
| NC_ASSIGN request to override the counter options set by the
| NC_CREATE request.
| You can use the update_value parameter to specify the
| increment to be used on this request only for the named
| counter. For example, if the current value of the counter is
| 109 and you specify an update value of 25, the named counter
| server returns 109 and sets the current value to 134 ready for
| the next NC_ASSIGN request, effectively assigning numbers in
| the range 109 through 133. The increment can be any value
| between zero and the limit determined by the minimum and
| maximum values set for the named counter; that is, the
| increment limit is ((maximum_value plus 1) minus
| minimum_value). An increment of zero causes NC_ASSIGN to
| operate the same as NC_INQUIRE, except for any comparison
| options.
| Each return code has a corresponding symbolic constant. See “Return codes”
| on page 85 for details of these.
| pool_selector
| specifies an 8-character pool selection parameter that you use to identify the
| pool in which the named counter resides.
| Depending on the named counter options table in use, you can use the pool
| selector parameter either as an actual pool name, or as a logical pool name
| that is mapped to a real pool name through the options table. The default
| options table assumes:
| v That any pool selection parameter beginning with DFHNC (matching the
| table entry with POOLSEL=DFHNC*) is an actual pool name
| v That any other pool selection parameter (including all blanks) maps to the
| default pool name.
| Note: The default pool name for the call interface is DFHNC001. The default
| pool name for the EXEC CICS API is defined by the NCPLDFT system
| initialization parameter.
| See “Defining a named counter options table” on page 87 for information about
| the pool selection parameter in the DFHNCOPT options table.
| counter_name
| specifies a 16-byte field containing the name of the named counter, padded if
| necessary with trailing spaces.
| When input values are shorter than 8 bytes, they are extended with high-order
| zero bytes to the full 8 bytes used internally. When output values are returned
| in a short value field, the specified number of low-order bytes are returned,
| ignoring any higher-order bytes.
| current_value
| specifies a variable to be used for:
| v Setting the initial sequence number for the named counter
| For the NC_CREATE function this parameter is an input (sender) field and can be
| defined as a constant. The default value is low values (binary zeroes). The
| value can either be within the range specified by the following minimum and
| maximum values, or it can be one greater than the maximum value, in which
| case the counter has to be reset using the NC_REWIND function before it is
| used.
| For all other counter functions, this parameter is an output (receiver) field and
| must be defined as a variable.
| minimum_value
| specifies a variable to be used for:
| v Setting the minimum value for the named counter
| v Receiving from the named counter the specified minimum value.
| For the NC_CREATE function this parameter is an input (sender) field and can be
| defined as a constant.
| For all other functions, this parameter is an output (receiver) field and must be
| defined as a variable.
| maximum_value
| specifies a variable to be used for:
| v Setting the maximum value for the named counter
| v Receiving from the named counter the specified maximum value.
| For the NC_CREATE function this parameter is an input (sender) field and can be
| defined as a constant. If you specify a non-zero value_length parameter but
| omit maximum_value, it defaults to high values for the specified length,
| otherwise it is eight bytes of high values. However, if the minimum value is all
| low values and the maximum value is eight bytes of high values, the maximum
| value is reduced to allow some reserved values to be available to the server for
| internal use.
| For all other functions, this parameter is an output (receiver) field and must be
| defined as a variable.
| counter_options
| specifies an optional fullword field to indicate named counter options that control
| wrapping and increment reducing. The valid options are represented by the
| symbolic values NC_WRAP|NC_NOWRAP and NC_REDUCE|NC_NOREDUCE.
| The default options are NC_NOWRAP and NC_NOREDUCE.
| NC_NOWRAP The server does not automatically rewind the named counter
| back to the minimum value in response to an NC_ASSIGN
| request that fails with the NC_COUNTER_AT_LIMIT condition.
| With NC_NOWRAP in force, and the named counter in the
| NC_COUNTER_AT_LIMIT condition, the NC_ASSIGN function
| is inoperative until the counter is reset by an NC_REWIND
| request (or the counter option reset to NC_WRAP).
| NC_WRAP The server automatically performs an NC_REWIND in response
| to an NC_ASSIGN request for a counter that is in the
| NC_COUNTER_AT_LIMIT condition, resetting the counter so that
| assignment continues from the start of its range.
| The options specified on NC_CREATE are stored with the named counter and
| are used as the defaults for other named counter functions. You can override
| the options on NC_ASSIGN, NC_REWIND or NC_UPDATE requests. If you
| don’t want to specify counter_options on a DFHNCTR call, specify the symbolic
| constant NC_NONE (equal to zero) as the input parameter (or specify a null
| address).
| For NC_UPDATE, this is the new current value for the named counter.
| compare_min
| specifies a value to be compared with the named counter’s current value. If you
| specify a value, this parameter makes the NC_ASSIGN, NC_REWIND or
| NC_UPDATE operation conditional on the current value of the named counter
| being greater than or equal to the specified value. If the comparison is not
| satisfied, the operation is rejected with a counter-out-of-range return code (RC
| 103).
| If you specify high values (X'FF') for this parameter, the server does not
| perform the comparison. You must specify X'FF' in all the bytes specified by
| the value_length parameter.
| If the compare_max value is less than the compare_min value, the valid range
| is assumed to wrap round, in which case the current value is considered to be
| in range if it satisfies either comparison, otherwise both comparisons must be
| satisfied.
| DFHNCTR call with null addresses for omitted parameters: In this example, the
| parameters used on the call are defined in the WORKING-STORAGE SECTION, as
| follows:
| Call parameter     COBOL variable         Field definition
| function           01 FUNCTION            PIC S9(8) COMP VALUE +1.
| return_code        01 NC-RETURN-CODE      PIC S9(8) COMP VALUE +0.
| pool_selector      01 NC-POOL-SELECTOR    PIC X(8).
| counter_name       01 NC-COUNTER-NAME     PIC X(16).
| value_length       01 NC-VALUE-LENGTH     PIC S9(8) COMP VALUE +4.
| current_value      01 NC-CURRENT-VALUE    PIC S9(8) COMP VALUE +0.
| minimum_value      01 NC-MIN-VALUE        PIC S9(8) COMP VALUE +0.
| maximum_value      01 NC-MAX-VALUE        PIC S9(8) COMP VALUE -1.
| counter_options    01 NC-OPTIONS          PIC S9(8) COMP VALUE +0.
| update_value       01 NC-UPDATE-VALUE     PIC S9(8) COMP VALUE +1.
| compare_min        01 NC-COMP-MIN         PIC S9(8) COMP VALUE +0.
| compare_max        01 NC-COMP-MAX         PIC S9(8) COMP VALUE +0.
| The variable used for the null address is defined in the LINKAGE SECTION, as
| follows:
| LINKAGE SECTION.
| 01 NULL-PTR USAGE IS POINTER.
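A sketch of how such a call might then be coded, assuming that passing the
unreferenced LINKAGE SECTION item NULL-PTR supplies a null address for the
parameters you want to omit (here, minimum_value and maximum_value); this is an
illustration of the technique, not the CICS-supplied example:
    CALL 'DFHNCTR' USING FUNCTION NC-RETURN-CODE NC-POOL-SELECTOR
                         NC-COUNTER-NAME NC-VALUE-LENGTH NC-CURRENT-VALUE
                         NULL-PTR NULL-PTR NC-OPTIONS.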
| Return codes
| The return codes are divided into ranges (100, 200, 300, and 400) according to
| their severity. Each range of non-zero return codes begins with a dummy return
| code that describes the return code category, to make it easy to check for values in
| each range using a symbolic name.
| In the list that follows, the numeric return code is followed by its symbolic name.
| 0 (NC_OK)
| The request completed normally.
| 100 (NC_COND)
| Return codes in this range indicate that a conditional function did not succeed
| because the condition was not satisfied:
| 101 (NC_COUNTER_AT_LIMIT)
| An NC_ASSIGN function is rejected because the previous request for this
| named counter obtained the maximum value and the counter is now at its
| limit. New counter values cannot be assigned until an NC_REWIND
| function call is issued to reset the counter.
| 102 (NC_COUNTER_NOT_AT_LIMIT)
| An NC_REWIND function is rejected because the named counter is not
| at its limit value. This is most likely to occur when another task has already
| succeeded in resetting the counter with an NC_REWIND.
| 103 (NC_COUNTER_OUT_OF_RANGE)
| The current value of the named counter is not within the range specified on
| the compare_min and compare_max parameters.
| 200 (NC_EXCEPTION)
| Return codes in this range indicate an exception condition that an application
| program should be able to handle:
| 201 (NC_COUNTER_NOT_FOUND)
| The named counter cannot be found.
| 202 (NC_DUPLICATE_COUNTER_NAME)
| An NC_CREATE function is rejected because a named counter with the
| specified name already exists.
| 203 (NC_SERVER_NOT_CONNECTED)
| An NC_FINISH function is rejected because no active connection exists for
| the selected pool.
| 300 (NC_ENVIRONMENT_ERROR)
| Return codes in this range indicate an environment error. These are serious
| errors, normally caused by some external factor, which a program may not be
| able to handle.
| To avoid the need to maintain multiple versions of the options table, you can use
| table entries to select pools based not only on the pool selection parameter
| specified on the DFHNCTR call, but also on the job name and APPLID of the CICS
| region. You can also specify the name of a user exit program to be called to make
| the pool selection.
| Define an options table using one or more invocations of the DFHNCO macro. Each
| invocation generates an options table entry that defines the pool name or user exit
| program to be used whenever any selection conditions specified on the entry satisfy
| an application program request. The first entry automatically generates the table
| header, including the CSECT statement. Follow the last entry with an END
| statement specifying the table module entry point, DFHNCOPT.
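A sketch of a possible options table source, using only the operands described in
this section; the pool name PRODNC1 is illustrative:
         DFHNCO POOLSEL=DFHNC*,POOL=YES
         DFHNCO POOL=PRODNC1
         END   DFHNCOPT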
| The program named can be link-edited with the options table, which generates
| a weak external reference (WXTRN), or it can be loaded dynamically the first
| time it is used. The program is called using standard MVS linkage, with a
| standard save area and parameter list pointing to two fields, in the following
| order:
| v The 8-byte actual pool name result field
| v The 8-byte pool selection parameter.
| The end-of-list bit is set in the second parameter address.
| The user exit program indicates its result by setting one of the following return
| codes in register 15:
| 0 Use the pool name that is successfully set, in the first field of the
| parameter list, by the user exit program.
| 4 The program cannot determine the pool name on this invocation.
| Continue options table processing at the next entry, as for the case
| where selection conditions were not met.
| 8 Reject the request (as if POOL=NO was specified).
| With the default options table in use, any pool selector parameter that specifies a
| string beginning with DFHNC is taken to be an actual pool name, indicated by
| POOL=YES in the table entry. Any other value, including a value of all spaces, is
| assigned the default pool name, indicated by the POOL= table entry without a
| POOLSEL parameter.
| Define the structure in the current coupling facility resource management (CFRM)
| policy, specifying the size of the structure and the preference list of coupling
| facilities in which it can be stored. The name of the list structure for a named
| counter pool is formed by adding the prefix DFHNCLS_ to your chosen pool name,
| giving DFHNCLS_poolname.
| The CFRM policy is defined using the utility IXCMIAPU. For an example of this
| utility, see member IXCCFRMP in the SYS1.SAMPLIB library. An example of a
| policy statement for a named counter pool is shown in Figure 24.
|
STRUCTURE NAME(DFHNCLS_PRODNC1)
SIZE(512)
INITSIZE(256)
PREFLIST(FACIL01,FACIL02)
Figure 24. Example of statements defining a coupling facility list structure for named counters
| When you have updated the CFRM policy with the new structure definition,
| activate the policy using the MVS command:
| SETXCF START,POLICY,POLNAME=policyname,TYPE=CFRM
| A list structure can be allocated with an initial size and a maximum size, as
| specified by INITSIZE and SIZE respectively in the CFRM policy definition. All
| structure sizes are rounded up to the next multiple of 256KB at allocation time.
| Provided that space is available in the coupling facility, you can use the MVS
| SETXCF command to increase the structure size dynamically from its initial size
| towards its maximum size, making the new space available immediately to any
| currently active servers. If too much space is allocated, you can reduce the
| structure size to free up coupling facility storage for other purposes (which may take
| some time if the coupling facility has to move existing data out of the storage which
| is being freed). Note that if the size is altered in this way, you should also update
| the INITSIZE parameter in the policy to reflect the new size, so that the structure
| will not revert to its original size if it is subsequently recreated or reloaded.
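For example, assuming the pool PRODNC1 defined in Figure 24, a command of the
following form could be used to expand the structure towards its maximum size:
SETXCF START,ALTER,STRNAME=DFHNCLS_PRODNC1,SIZE=512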
| The space required for a named counter pool depends on the number of different
| named counters you need, but the minimum size should be enough for most needs.
| A minimum-size structure of 256KB can hold approximately one thousand named
| counters, and the next higher size of 512KB can hold some tens of thousands.
| Note that defining the CFRM policy statements for a list structure does not actually
| create the list structure. The structure is created the first time an attempt is made to
| connect to it, which occurs when the first named counter server that refers to the
| corresponding pool is started.
The various CICS facilities and their data sets are dealt with in the following
chapters:
v “Chapter 9. Preparing to set up CICS data sets” on page 93
v “Chapter 10. Defining the temporary storage data set” on page 107
v “Chapter 11. Defining transient data destination data sets” on page 113
v “Chapter 12. Defining CICS log streams” on page 121
v “Chapter 13. Defining the CICS system definition data set” on page 135
v “Chapter 14. Defining and using catalog data sets” on page 159
v “Chapter 15. Defining and using auxiliary trace data sets” on page 171
v “Chapter 16. Defining dump data sets” on page 175
v “Chapter 17. Defining the CICS availability manager data sets” on page 181
v “Chapter 18. Defining user files” on page 189
| v “Chapter 19. Defining the CDBM GROUP command data set” on page 207
| v “Chapter 20. Defining the CMAC messages data set” on page 211
Space calculations are given so that you can calculate the space to allocate to the
data sets, together with the data definition statements needed to define them to the
running CICS region.
CICS utility programs provided for postprocessing of the data sets are described in
the CICS Operations and Utilities Guide.
Table 13 on page 94 summarizes the CICS data sets and their characteristics.
If the data set is shared between an active CICS region and an alternate CICS
region, use the generic APPLID, but if the data set is unique to either the active or
the alternate CICS region, use the specific APPLID. For information about actively
and passively shared data sets, see “Data set considerations when running CICS
with XRF” on page 98.
Table 13. Summary of CICS data sets
Data set DDNAME used Block or Record Data set Other comments
by CICS control format organization
interval size
(bytes)
AUXILIARY DFHAUXT 4096 F Sequential 3 See page 97 for
TRACE (See DFHBUXT information about GTF.
page 171)
| BTS LOCAL DFHLRQ 1024 and 2560 VB VSAM KSDS Required even if you do not
| REQUEST use BTS facilities. BTS is
| QUEUE (See described in the CICS
| the CICS Business Transaction Services
| Business manual.
| Transaction
| Services
| manual)
CAVM DFHXRCTL 4096 minimum 1 VSAM ESDS Required if running CICS with
CONTROL XRF.
(See page 181)
CAVM DFHXRMSG 4096 minimum 1 VSAM ESDS Required if running CICS with
MESSAGE XRF.
(See page 181)
CATALOGS DFHGCD 8192 & 2048 VB VSAM KSDS Both data sets must be
(See page 159) DFHLCD initialized before use (2 ).
| CDBM group DFHDBFK 8192 VB VSAM KSDS Required only if you intend to
| command (See use this function.
| page 207)
CSD (See DFHCSD 8192 VB VSAM KSDS —
page 135)
DUMP (See DFHDMPA 32 760 (tape) V Sequential For CICS transaction dumps
page 175) DFHDMPB or 1 track only; see page 97 for
(DASD) information about system
dumps.
4. The CTGI naming convention is a recommended example of a naming convention that you can use for CICS 4-character names,
and is based on the 4-character CTGI symbol, where:
C identifies an entire CICSplex
T identifies the type of region
G identifies a group of regions
I identifies iterations of regions within a group
Where names are allowed to be up to eight characters long, as for CICS APPLIDs, the general recommendation is that the letters
CICS are used for the first four characters, particularly for production regions.
Notes:
1 These data sets use control interval (CI) processing and therefore the record
format is not relevant.
2 DFHGCD is the CICS global catalog data set, and in an XRF environment it is
passively shared between the active and the alternate CICS regions. DFHLCD is
the CICS local catalog data set, and this is a unique data set; each CICS region
must have its own local catalog. See “Data set considerations when running CICS
with XRF” on page 98 for an explanation of actively and passively shared data sets
in an XRF environment.
3 The CICS utility program, DFHTU530, prints and formats auxiliary trace data.
For information about this CICS utility program, see the CICS Operations and
Utilities Guide.
4 You do not have to specify all the data sets associated with extrapartition
transient data queues in the CICS JCL because of the introduction of dynamic
allocation of extrapartition transient data queue data sets. See the CICS
Resource Definition Guide for more information.
For multiple extents on multiple volumes, combine both primary and secondary
RECORDS operands with multiple VOLUMES operands:
RECORDS(primary,secondary) -
VOLUMES(volume1,volume2,volume3,.....)
Multiple extents over multiple volumes should be used if there is a probability that a
volume will exhaust its free space before VSAM reaches its limit on extra extents. If
this occurs, VSAM continues to create extra extents on the next volume in the list.
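For example, a sketch of how these operands might appear in an IDCAMS DEFINE
CLUSTER statement; the data set name, record counts, record sizes, key length, and
volume serials are illustrative:
DEFINE CLUSTER (NAME(CICSTS13.CICS.USERFILE) -
       RECORDS(5000,1000) -
       VOLUMES(VOL001,VOL002,VOL003) -
       RECORDSIZE(100,200) -
       KEYS(8,0) -
       INDEXED)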
You can generate several copies of these jobs by rerunning the DFHISTAR job,
selecting the jobs that you want to copy. To generate new copies of these jobs, edit
the DFHISTAR job to specify new values for the DSINFO and SELECT parameters.
Only those jobs that you name by the SELECT parameter are regenerated.
For information about these jobs and about generating new versions of them, see
the CICS Transaction Server for OS/390 Installation Guide.
Table 14. CICS data sets created by the DFHCOMDS job
DFHCSD CICS region definition data set
SYSIN SYSIN data set
Recalculate the size of these system data sets, taking into account the increased
volumes of data that CICS generates. For example, for an SDUMP data set you
need at least 25 cylinders of a 3380 device, or the equivalent. For guidance
information about calculating the size of SDUMP data sets, see the OS/390 MVS
Initialization and Tuning Guide manual.
The SDUMP data sets can become full with unwanted SDUMPs that precede
ASRA, ASRB, and ASRD abends (after message DFHAP0001). To prevent this,
suppress such SDUMPs as described on page 176.
If you are collecting CICS interval statistics frequently, or the volume of statistics at
each interval is high, then you must take this into account when sizing your SMF
data sets. Similarly, you must consider the amount of CICS monitoring data that is
being written when CICS monitoring classes are active.
CICS can write records to SMF of up to 32756 bytes, resulting in SMF writing
spanned records to the SMF data sets. For more efficient use of DASD, you should
consider creating the SMF data sets to be used by CICS with a control interval size
of either 16384 bytes (16KB) or 8192 bytes (8KB). If you use other control interval
sizes you must consider the trade-off between efficient use of DASD, SMF data set
I/O performance and the possibility of data being lost due to insufficient SMF
buffers.
If you are running CICS with GTF trace on, make allowance for CICS trace entries
in the GTF data sets.
For background information about SMF, and about other SMF data set
considerations, see the OS/390 MVS System Management Facilities (SMF).
For programming information about CICS monitoring records and their sizes, see
the CICS Customization Guide. For programming information about CICS statistics
records and their sizes, see the CICS Performance Guide. For background
information about GTF, see the OS/390 MVS Diagnosis: Tools and Service Aids
manual.
Even if you intend to run CICS with XRF=NO to begin with, you are advised to think
about data set dispositions with XRF from the start.
It follows that the status and location of the data sets used by CICS become very
important. In particular, consider the following points:
v For a given file name, do the active and alternate CICS regions:
– Refer to separate data sets?
– Refer to the same data set?
v For a given data set, is it required by the alternate CICS region:
– Before takeover occurs?
– After takeover occurs?
v For a given data set, is it allocated:
– At job step initiation?
– Dynamically?
v What facilities of MVS global resource serialization (GRS) or JES3 are being
used?
The allocation of data sets, and how you specify the DISP parameter, are important
factors when running CICS with XRF. The point at which data sets are allocated,
and whether they are shared between active and alternate CICS regions must be
considered. A shared data set, in XRF terms, means one that is required by both
the active and alternate CICS regions, though not necessarily concurrently. (The DD
statements refer to the same data set.) In an XRF environment, CICS classifies
data sets as follows:
v Actively shared
v Passively shared
v Unique
User data sets managed by CICS file control, and DL/I data sets, are also passively
shared.
If this risk proves unacceptable for BDAM and VSAM user files and for DL/I
databases, then consider using dynamic allocation with DISP=OLD.
Note: MVS does not prevent conflicting concurrent use of a data set residing on
shared DASD by two or more jobs running in different MVS images, even
when DISP=OLD is specified. To prevent concurrent use, you can use either
global resource serialization (GRS) or JES3, to provide global data set
enqueuing in a multi-MVS environment. However, when you run CICS with
XRF, CICS always ensures (except for the CSD) that there is no conflicting
concurrent use of data sets by an active CICS region and its alternate CICS
region, even though they are running in different MVS images.
For more information about sharing the CSD, see “Sharing a CSD in a multi-MVS
environment (non-RLS)” on page 144.
BWO is available only for data sets accessed by CICS file control, which includes
the CICS system definition (CSD) data set.
VSAM data sets that are to use this facility must reside on SMS-managed DASD,
and must have an ICF catalog structure. Only VSAM ESDS, RRDS (both fixed and
variable), and KSDS data sets are supported.
Clusters with data sets that are to be opened in RLS mode must have BWO
specified in the cluster definition.
CICS defines a data set as eligible for BWO when a file is defined using RDO. If
BACKUPTYPE=DYNAMIC is specified for a VSAM file, the file is defined as eligible
for BWO when the data set is opened. BACKUPTYPE=STATIC, the default, defines
a file as not eligible for BWO.
The first time a file is opened against a VSAM base cluster data set after a CICS
initial or cold start, CICS checks if BWO has been specified in the ICF catalog. If it
is, it updates information in the file resource definition from the ICF catalog.
If the data sets are updated in RLS mode, BWO is managed entirely by DFSMS.
When a BWO copy is made, DFSMSdss sends a message to all CICS systems on
the sysplex with open ACBs for the sphere. The CICS systems keep track of all
current UOWs that have updated files for the sphere. When all of these have
completed, CICS writes tie-up records and notifies DFSMSdss. The copy is
complete when all CICS systems have responded.
If the data sets are updated in non-RLS mode and if the value specified by
BACKUPTYPE is DYNAMIC, CICS issues a call to DFSMSdfp 3.2 callable services
to update the ICF catalog to indicate that the base cluster data set is eligible for
BWO while it is under the control of CICS.
Any subsequent file opened against the same cluster must have the same
BACKUPTYPE attribute as that of the first file opened. If a mismatch is found, the
subsequent file open fails.
CICS records the fact that a VSAM base cluster data set is eligible for BWO in its
base cluster block. This is remembered when all files have closed against the
VSAM base cluster and across CICS warm and emergency restarts. (It is not
remembered across CICS cold or initial starts.) When CICS is terminated by a
controlled normal shutdown, all CICS files are closed.
When the last file open for update (and defined as eligible for BWO) is closed
against a base cluster data set, the DFSMSdfp callable services update the ICF
catalog to indicate that this data set is no longer eligible for BWO. This prevents
BWO during the batch window between CICS sessions.
Note: During the batch window between CICS sessions it is possible to update
CICS user data sets by batch jobs (although, to maintain data integrity, this
should only be done after a controlled normal shutdown, and never after an
uncontrolled or immediate shutdown).
For a normal CICS shutdown, CICS also needs a quiesced data set (see note 5)
backup to be made after the batch updates and before the data set is made
available to a subsequent CICS session so that CICS forward recovery can
start from a consistent point.
5. Quiesced data set: A data set against which all update activity has been quiesced so that DFSMSdss can have exclusive control
while a backup is made.
When a backup copy of a data set is restored via DFHSM and DFDSS, and the
backup was of a BWO type, the ICF catalog is updated to indicate that the data set
needs to be forward recovered before it can be used. CICS checks this at data set
open time and fails an FCT open if the catalog indicates that the data set is
back-level.
The systems administrator must put appropriate procedures into place for BWO and
for forward recovery, but these new procedures should be simpler than those
currently in use. These procedures must include:
v Restoring the BWO backup and running the forward recovery utility to bring the
data set to a point of consistency. (The restore requires that users do not have
access to the file during the recovery process.)
v Restoring and forward recovery of data sets that may have been damaged while
allocated to CICS. This operation may require backout of partially committed
units of work, by CICS emergency restart.
The systems administrator must decide which VSAM user data sets are eligible for
BWO, subject to the restrictions detailed in “Restrictions on BWO” applicable to
heavily-updated KSDS data sets.
If activity keypointing is disabled in your CICS region (by specifying the system
initialization parameter AKPFREQ=0), this has a serious effect on BWO support,
because no tie-up records (TURs) are written to the forward recovery logs, and the
data set recovery point is not updated. Therefore, forward recovery of a BWO
backup must take place from the time that the data set was first opened for update.
This requires that all forward recovery logs are kept since that time so that forward
recovery can take place. If there are many inserts or records that change length, a
lot of forward recovery could be required. If, however, a record is just updated and
the length is unchanged, there is no CI split. For information about TURs and
recovery points, see the CICS Recovery and Restart Guide.
Restrictions on BWO
The following restrictions apply to VSAM KSDS data set types only.
If a VSAM control interval or control area split occurs while a BWO is in progress,
the backup is unreliable and is discarded by DFHSM and DFDSS. During such a
split, certain portions of the data set may be duplicated or not represented at all in
the backup as DFDSS copies sequentially. MVS/DFP 3.2 indicates that a split has
occurred in the ICF catalog. At the end of the backup, DFHSM and DFDSS check
the ICF catalog, and if a split has occurred, or is still in progress, discard the
backup. For this reason, certain heavily-updated VSAM KSDS data sets may not be
eligible for BWO, or might be eligible only during periods of reduced activity (for
example, overnight). For a KSDS data set to be eligible for BWO, the typical time
between control interval or control area splits must be greater than the time taken
for DFHSM and DFDSS to take a backup of the data set.
For more information about these facilities see the following sections.
If a migrated data set has to be recalled, CICS issues message DFHFC0989 to the
system console, to notify the user that a recall is taking place, and to indicate
whether it is from primary or secondary storage.
You define auxiliary temporary storage as a nonindexed VSAM data set. CICS uses
control interval processing when storing or retrieving temporary storage records in
this data set. A control interval usually contains several records. Temporary storage
space within a control interval is reusable.
Temporary storage queues can also reside in queue pools in a coupling facility. This
applies to non-recoverable queues which may be written to and read from different
CICS regions. For more information about temporary storage data sharing, see
“Defining temporary storage pools for temporary storage data sharing” on page 110.
For background information about CICS temporary storage, see the CICS
Application Programming Guide.
Note: You must not define any extra associations for a temporary storage data set.
(Do not, for example, define a PATH.) Doing so causes CICS startup to fail.
Figure 25. Sample job defining an auxiliary temporary storage data set
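A job along the following lines can be used to define the auxiliary temporary
storage data set. This is a minimal sketch only; the data set names, control
interval size, and space values are illustrative and should be adjusted to your
installation’s needs.
//DEFTS    JOB accounting information
//TSDEF    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
     DEFINE CLUSTER -
            (NAME(CICSTS13.CICS.applid.DFHTEMP) -
            NONINDEXED -
            CONTROLINTERVALSIZE(4096) -
            RECORDSIZE(4089 4089) -
            RECORDS(200) -
            SHAREOPTIONS(2 3) -
            VOLUMES(volid)) -
            DATA -
            (NAME(CICSTS13.CICS.applid.DFHTEMP.DATA))
/*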
Space considerations
The amount of space allocated to temporary storage is expressed in two values that
you must specify:
1. The control interval size
2. The number of control intervals in the data set
If you install BMS with 3270 support, the data length of the record is at least as
large as the 3270 buffer size. For 3270 terminals with the alternate screen size
facility, the data length is the larger of the two sizes.
The total number of bytes allocated for a temporary storage record is rounded up
to a multiple of 64 (for control interval sizes less than, or equal to, 16 384), or a
multiple of 128 (for larger control interval sizes).
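For example, with a control interval size of 8192 bytes, a temporary storage
record whose total length is 700 bytes occupies 704 bytes, the next multiple of 64.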
XRF considerations
The temporary storage data set is a passively shared data set, owned by the active
CICS region, but allocated to both the active and alternate CICS regions.
STRUCTURE NAME(DFHXQLS_PRODTSQ1)
SIZE(1000)
INITSIZE(500)
PREFLIST(FACIL01,FACIL02)
The name of the list structure for a TS data sharing pool is created by appending
the TS pool name to the prefix DFHXQLS_, giving DFHXQLS_poolname. When
defined, you must activate the CFRM policy using the MVS operator command
SETXCF START.
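For example, if the updated CFRM policy is named POLTS1 (an illustrative name),
you would enter:
SETXCF START,POLICY,TYPE=CFRM,POLNAME=POLTS1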
When a list structure is allocated, it may have an initial size and a maximum size
specified in the CFRM policy. (All structure sizes are rounded up to the next
multiple of 256K at allocation time). Provided that space is available in the coupling
facility, a list structure can be dynamically expanded from its initial size towards its
maximum size, or contracted to free up coupling facility space for other purposes.
Item entry size = (170 + (average item size, rounded up to next 256))
+ 5% extra for control information
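For example, assuming an average item size of 400 bytes: 400 rounds up to 512,
giving 170 + 512 = 682 bytes, and adding 5% for control information gives
approximately 716 bytes per item entry.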
A large queue is one for which the total size of the data items exceeds 32K. It is
stored in a separate list in the structure.
The above calculation assumes that the structure is allocated at its maximum size.
If it is allocated at less than its maximum size, the same amount of control
information is still required, so the percentage of space occupied by control
information is correspondingly increased. For example, if a structure is allocated at
one third of its maximum size, the overhead for control information increases to
around fifteen per cent.
Note that defining the CFRM policy statements for a list structure does not actually
create the list structure—this is done by a TS server during its initialization.
For information about defining list structures, see the following MVS publications:
OS/390 MVS Setting Up a Sysplex, GC28-1779
OS/390 MVS Programming: Sysplex Services Guide, GC28-1771
OS/390 MVS Programming: Sysplex Services Reference, GC28-1772
Messages or other data are addressed to a symbolic queue which you define as
either intrapartition or extrapartition using the CEDA transaction. The queues can be
used as indirect destinations to route messages or data to other queues.
For information about coding transient data resources, see the CICS Resource
Definition Guide.
Note: The queue name CCSI has been reserved for the C/370 input data stream
(stdin), but any attempt to read from this stream causes EOF to be returned.
You should include in your CICS region all the queues that CICS uses. Although the
omission of any of the queues does not cause a CICS failure, you lose important
information about your CICS region if CICS cannot write its data to the required
queue. Sample definitions of all the queues that CICS uses can be found in group
DFHDCTG, which is included in list DFHLIST and is unlocked so that you can alter
the definitions.
Note: We recommend that you take a backup copy of the changes made to
DFHDCTG in case maintenance is applied.
For information about the queues used by CICS, see the CICS Resource Definition
Guide.
For information about the queues that CICS uses for RDO, see “Multiple extents
and multiple volumes” on page 95.
For a way of printing these system messages on a local printer as they occur, see
the transient data write-to-terminal sample program, DFH$TDWT. This sample
program is supplied with the CICS pregenerated system in
CICSTS13.CICS.SDFHLOAD, and the assembler source is in
CICSTS13.CICS.SDFHSAMP. For programming information about DFH$TDWT, see
the CICS Customization Guide.
Figure 27 shows job control statements to define a single extent data set on a
single volume. Instead of defining one extent data set, which might have to be
much larger than your average needs to cater for exceptional cases, you can define
multiple extents and/or multiple volumes. For considerations about using multiple
extents and/or multiple volumes, see “Using multiple extents and multiple volumes”
on page 116.
Figure 27. Sample job to define a transient data intrapartition data set
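A minimal sketch of such a job is shown below; the data set names, control
interval size, and space values are illustrative only.
//DEFTD    JOB accounting information
//TDDEF    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
     DEFINE CLUSTER -
            (NAME(CICSTS13.CICS.applid.DFHINTRA) -
            NONINDEXED -
            CONTROLINTERVALSIZE(4096) -
            RECORDSIZE(4089 4089) -
            RECORDS(100) -
            SHAREOPTIONS(2 3) -
            VOLUMES(volid)) -
            DATA -
            (NAME(CICSTS13.CICS.applid.DFHINTRA.DATA))
/*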
Alternatively, you can run the CICS-supplied job DFHDEFDS to create the
DFHINTRA data set as one of the data sets for a CICS region. For information
about the DFHDEFDS job, see the CICS Transaction Server for OS/390 Installation
Guide.
Note: You must not define any extra associations for a transient data intrapartition
data set. (Do not, for example, define a PATH.) Doing so causes CICS
startup to fail.
If DFHINTRA opened successfully during a previous startup but fails to open during
a subsequent warm or emergency restart, CICS is terminated.
If CICS initializes without a DFHINTRA data set, any attempt to install intrapartition
data destinations for that run of CICS fails, and appropriate error messages are
issued.
Space considerations
Space is allocated to queues in units of a control interval. The first CI is reserved
for CICS use, the remaining CIs are available to hold data. Data records are stored
in CIs according to VSAM standards.
XRF considerations
A transient data intrapartition data set is a passively shared data set, owned by the
active CICS region, but allocated to both active and alternate CICS regions.
Although the alternate CICS region does not open this data set before takeover, it is
allocated at job step initiation, therefore specify DISP=SHR on the DD statement.
You should define transient data extrapartition data sets used as queues for CICS
messages with a record length of 120 bytes and a record format of V or VB.
You could use the DD statements shown in Figure 28 for the extrapartition data set
entries in the sample DCT supplied in CICSTS13.CICS.SDFHLOAD. In these
sample DD statements, the LOGUSR queue is defined as a sequential file on disk,
and CICSTS13.CICS.LOGUSR is a new data set to be cataloged; the MSGUSR
and PLIMSG queues are routed to SYSOUT.
//LOGUSR DD DSN=CICSTS13.CICS.applid.LOGUSR,DISP=(NEW,KEEP),
// DCB=(DSORG=PS,RECFM=V,BLKSIZE=136),
// VOL=SER=volid,UNIT=3380,SPACE=(CYL,2)
//MSGUSR DD SYSOUT=A
//PLIMSG DD SYSOUT=A
Figure 28. Sample JCL to define transient data extrapartition data sets
Note: Change the space allocation given in this sample job stream to suit your own
installation’s needs.
If you create a definition for CXRF in the CSD, CICS does not install the definition.
This is because the CXRF entry is hardcoded and cannot be removed or replaced.
Although the CXRF data set has special significance in an alternate CICS region
when you are operating CICS with XRF, it is also available in an active CICS
region, and CICS regions running with XRF=NO.
If, on an initial or cold start, a request is received to write a record to a queue that
has not yet been installed (as part of GRPLIST), the record is written to CXRF.
Any request to write to an intrapartition transient data queue after the warm
keypoint has been taken during a normal shutdown is routed to CXRF.
If you want to take advantage of the special CXRF queue, you must include a DD
statement for DFHCXRF. (For example, see Figure 29.) If you omit the DD
statement, transient data write requests redirected to CXRF fail with a NOTOPEN
condition.
//DFHCXRF DD SYSOUT=*
or
//DFHCXRF DD DSN=CICSTS13.CICS.applid.DFHCXRF,DISP=(NEW,KEEP),
// DCB=(DSORG=PS,RECFM=V,BLKSIZE=136),
// VOL=SER=volid,UNIT=3380,SPACE=(TRK,5)
Before takeover occurs, the alternate CICS region assumes that the transient data
queues are defined as indirect, and pointing to CXRF. CXRF is associated with the
data set that has the DD name DFHCXRF.
XRF considerations
Except for DFHCXRF, an alternate CICS region does not open any extrapartition
data sets before takeover. (See “The DFHCXRF data set” on page 118.)
Normally, when data sets are defined for output, you should have separate data
sets for the active and alternate CICS regions; that is, they are unique data sets in
CICS terms.
Whatever you code on the DISP parameter, be aware that data might be lost when
the alternate CICS region takes over from the active CICS region, because
takeover involves abending or canceling the active CICS region.
If you do not have separate data sets, you should code DISP=SHR. Anything else
implies exclusive use of the data set, and for this reason you could not start an
alternate CICS region (in the same MVS image as the active CICS region) until the
active CICS region terminates.
Data written by the active CICS region is lost when the alternate CICS region takes
over and opens the data set.
The system log is used for recovery purposes - for example, during dynamic
transaction backout, or during emergency restart, and is not meant to be used for
any other purpose.
CICS connects to its system log automatically during initialization (unless you
specify a journal model definition that defines the system log as type DUMMY).
You must define a system log if you want to preserve data integrity in the event of
unit of work failures and CICS failures. CICS needs a system log in order to
perform:
v The backout of recoverable resources changed by failed units of work.
v Cold starts when CICS needs to recover conversation state data with remote
partners.
v Warm restarts, when CICS needs to restore the region to its pre-shutdown state.
v Emergency restarts, when CICS needs to restore the region to its pre-shutdown
state as well as recovering transactions to perform the backout of recoverable
resources changed by units of work that were in-flight at the time of shutdown.
If you define JOURNALMODEL resource definitions to define log stream names for
DFHLOG and DFHSHUNT, ensure that the resulting log stream names are unique.
If you have some CICS regions that use the same applid, you must use some other
qualifier in the log stream name to ensure uniqueness.
If you use JOURNALMODEL resource definitions for the system log, these resource
definitions must be defined and added to the appropriate group list (using the CSD
utility program, DFHCSDUP) before INITIAL-starting CICS.
DFHLOG can be TYPE(DUMMY), but you can use this only if you always INITIAL
start your CICS regions and there are no recoverable resources requiring
transaction backout. CICS cannot perform a cold start, or warm or emergency
restart if TYPE(DUMMY) is specified on the JOURNALMODEL definition.
If you do not want to use a system log, perhaps in a test or development region,
define a JOURNALMODEL for DFHLOG with type DUMMY, as shown in the
following example:
DEFINE JOURNALMODEL(DFHLOG) GROUP(CICSLOGS)
JOURNALNAME(DFHLOG)
TYPE(DUMMY)
To start a CICS region without a system log, you must ensure that a
JOURNALMODEL definition, such as the one shown above, is included in the
start-up group list. Use the DFHCSDUP batch utility program to define the required
JOURNALMODEL and to add the group to the group list.
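One possible form of a generic JOURNALMODEL definition that maps journals from
several regions on to shared log streams is shown below; the group name, journal
model name, and first log stream qualifier are illustrative only.
DEFINE GROUP(TEST) DESC('Illustrative shared log streams')
       JOURNALMODEL(SHRJNL) JOURNALNAME(*) TYPE(MVS)
       STREAMNAME(&USERID..SHARED.&JNAME.)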
In this example, the literal SHARED is used in place of the default CICS applid,
which would require a unique log stream for each region.
You might want to use JOURNALMODELs to map journals to log streams if the
CICS region userid changes between runs. This could be the case, for example,
where CICS test regions are shared between groups of developers. It would be
wasteful to create log streams with a different high level qualifier for each user and
you might prefer to use the same log streams regardless of which developer starts
up a CICS region. For example, the following generic JOURNALMODEL definition
maps all journals not defined by more explicit definitions to the same log stream:
DEFINE GROUP (TEST) DESC('Journals for test CICS regions')
JOURNALMODEL(JRNLS) JOURNALNAME(*) TYPE(MVS)
STREAMNAME(TESTCICS.&APPLID..&JNAME.)
You might want to merge data written by CICS regions using different journal
names to a single log stream, as in the following examples:
DEFINE GROUP (TEST) DESC('Merging journals 10 to 19')
JOURNALMODEL(J10TO19) JOURNALNAME(DFHJ1*) TYPE(MVS)
STREAMNAME(&USERID..MERGED.JNLS)
DEFINE GROUP (TEST) DESC('Merging journalnames JNLxxxxx')
JOURNALMODEL(JNLXXXXX) JOURNALNAME(JNL*) TYPE(MVS)
STREAMNAME(&USERID..MERGED.JNLS)
The last qualifier of the stream name is used as the CICS resource name for
dispatcher waits. Therefore, if it is self-explanatory, it can be helpful when
interpreting monitoring information and CICS trace entries.
For a data set open in RLS mode, the MVS logger merges all the forward recovery
log records from the various CICS systems on to the shared forward recovery log.
If you have a forward recovery product that can utilize the log of logs, you should
ensure that all CICS regions sharing the recoverable data sets write to the same
log of logs log stream.
DEFINE GROUP(JRNL) DESC('Merge log of logs')
JOURNALMODEL(DFHLGLOG) JOURNALNAME(DFHLGLOG) TYPE(MVS)
STREAMNAME(&USERID..CICSVR.DFHLGLOG)
If you don’t have a forward recovery product that can utilize the log of logs you can
use a dummy log stream:
DEFINE GROUP(JRNL) DESC('Dummy log of logs')
JOURNALMODEL(DFHLGLOG) JOURNALNAME(DFHLGLOG) TYPE(DUMMY)
Do not share the log of logs between test and production CICS regions, because it
could be misused to compromise the contents of production data sets during a
restore.
Journal naming
System logs
DFHLOG and DFHSHUNT are the journal names for the CICS system log. CICS
automatically creates journal table entries for DFHLOG and DFHSHUNT during
initialization as shown in Table 18.
Table 18. Journal name entry for the CICS primary system log
Journal table entry (CICS system log)   Created during system initialization
Name:   DFHLOG                          Always DFHLOG for the primary log
Status: Enabled                         Set when the journal entry is created
Type:   MVS                             The default, but it can be defined as DUMMY
                                        on a JOURNALMODEL definition (DUMMY = no output)
LSN:    log_stream_name                 By default, log_stream_name resolves to
                                        &userid..&applid..DFHLOG, but this can be
                                        user-defined on a JOURNALMODEL definition
User applications can use a forward recovery log through a user journal name that
maps on to the same log stream name. In this case, the user records are merged
on to the forward recovery log. See Table 19 for an example of this.
Table 19. Example of journal name entry for a non-RLS mode forward recovery log
Journal table entry (forward recovery log)   Entry created during file-open processing
Name:   DFHJ01                               Name derived from the FWDRECOVLOG identifier.
                                             For example, FWDRECOVLOG(01) = DFHJ01, thus
                                             FWDRECOVLOG(nn) = DFHJnn
Status: Enabled                              Set when the journal entry is created
Type:   MVS                                  The default, but it can be defined as DUMMY
                                             on a JOURNALMODEL definition (DUMMY = no output)
Note: There is no journal table entry for the forward recovery log of an RLS file.
The recovery attributes and LSN are obtained directly from the VSAM
catalog, and the LSN is referenced directly by CICS file control. Therefore
there is no need for indirect mapping through a journal name.
You can also choose to specify the recovery attributes and LSN for a
non-RLS file in the VSAM catalog.
User journals
CICS user journals are identified by their journal names (or number, in the case of
DFHJnn names), which map on to MVS log streams.
You name your user journals using any 1-8 characters that conform to the rules of a
data set qualifier name. Apart from the user journal names that begin with the
letters DFHJ, followed by two numeric characters, you should avoid using names
that begin with DFH. User journal names of the form DFHJnn are supported for
compatibility with earlier CICS releases.
Although 8-character journal names offer considerable flexibility compared with the
DFHJnn name format of previous releases, you are recommended not to create
large numbers of journals (for example, by using the terminal name or userid as
part of a program-generated name).
When used in FILE and PROFILE resource definitions, the journal numbers 1
through 99 map on to the user journal names DFHJ01–99. You can map these
journal names to specific MVS log stream names by specifying JOURNALMODEL
resource definitions, or allow them to default. If you do not specify matching
JOURNALMODEL definitions, by default user journals are mapped to LSNs of the
form userid.applid.DFHJnn.
The CICS log manager needs the name of the log stream that corresponds to a
CICS system log or general log, and the type - whether it is MVS, SMF, or a
dummy. Except for VSAM forward recovery log stream names taken directly from
the ICF catalog, CICS maintains this information in the journal names table,
together with the corresponding log stream token that is returned by the MVS
system logger when CICS successfully connects to the log stream.
JOURNALMODEL definitions
CICS uses JOURNALMODEL definitions to resolve log stream names at the
following times:
System log
During initialization, on an initial start only.
On a cold, warm or emergency restart, CICS retrieves the log stream name
from the CICS global catalog.
General logs (excluding log streams defined in the ICF catalog)
When a journal name is first referenced after the start of CICS, or when it is
first referenced again after its log stream has been disconnected. Log stream
disconnection, requiring further reference to a matching JOURNALMODEL
resource definition, occurs as follows:
User journals
As soon as you issue a DISCARD JOURNALNAME command.
Any further references to the discarded journal name means that CICS
must again resolve the log stream name by looking for a matching
JOURNALMODEL resource definition. You can change the log stream name
for a user journal by installing a modified JOURNALMODEL definition.
Auto journals for files
After you discard the journal name, and all files that are using the log
stream for autojournaling are closed.
Forward recovery logs (excluding those defined in the ICF catalog)
After you discard the journal name, and all files that are using the log
stream for forward recovery logging are closed.
Figure 30. Looking for a journal model that matches a journal name. CICS searches the
journal model table to find the log stream name that corresponds to the journal name, using
a “best-match” algorithm.
On an initial start, CICS uses the default log stream names unless you provide a
JOURNALMODEL resource definition for the system log.
If there are JOURNALMODEL definitions for your system logs (CICS finds
JOURNALMODEL definitions with JOURNALNAME(DFHLOG) and
JOURNALNAME(DFHSHUNT)), it attempts to connect to the system log streams
named in these definitions. System log stream names must be unique to the CICS
region.
If you define JOURNALMODEL resource definitions for your system logs, ensure
that:
v The log streams named in the definition are defined to the MVS system logger, or
v Suitable model log streams are defined so that they can be created dynamically.
Before you try to use the default log stream names, ensure that:
v The default log streams are defined explicitly to the MVS system logger, or
v Suitable model log streams are defined so that they can be created dynamically.
If these log streams are not available (perhaps they have not been defined to MVS)
or the definitions are not found (perhaps they have not been installed), CICS
attempts to create the system log streams dynamically, using model log streams
with the following default names:
v sysname.DFHLOG.MODEL
v sysname.DFHSHUNT.MODEL
where ‘sysname’ is the name of the MVS system on which CICS is running. Once
these log streams have been created, CICS then connects to them.
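If you need to define these model log streams yourself, you can use the IXCMIAPU
administrative data utility. The following is a minimal sketch only; the structure
name and the MVS system name MVSA are illustrative, and your installation is
likely to need additional attributes (such as log stream sizes).
//DEFMODEL JOB accounting information
//LOGRPOL  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR)
  DEFINE LOGSTREAM NAME(MVSA.DFHLOG.MODEL) MODEL(YES)
         STRUCTNAME(LOG_GENERAL_001)
  DEFINE LOGSTREAM NAME(MVSA.DFHSHUNT.MODEL) MODEL(YES)
         STRUCTNAME(LOG_GENERAL_001)
/*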
Figure 31. How CICS maps the system log (DFHLOG) to a log stream name (LSN) during an
INITIAL start. CICS uses the same process for the secondary system log, DFHSHUNT.
If you define JOURNALMODEL resource definitions for your system logs, ensure
that:
v The log streams named in the definition are defined to the MVS system logger,
or
v Suitable model log streams are defined so that they can be created dynamically.
If CICS cannot connect to the log stream named in the JOURNALMODEL definition,
it attempts to connect to a log stream, using the following default name:
v userid.applid.journalname
Before you try to use this default log stream name, ensure that
v The default log stream is defined explicitly to the MVS system logger, or
v A suitable model log stream is defined so that it can be created dynamically.
If the log stream is not available (perhaps it has not been defined to MVS) or the
definition is not found (perhaps it has not been installed), CICS attempts to create
the log stream dynamically, using a model log stream with the following default name:
v LSN_QUALIFIER1.LSN_QUALIFIER2.MODEL
where the qualifier fields are based on the JOURNALMODEL definition streamname
attribute, as follows:
v If the log stream being created has a qualified name consisting of only two
names (qualifier1.qualifier2) or has an unqualified name, CICS constructs the
model name as qualifier1.MODEL or name.MODEL.
v If the log stream being created has a qualified name consisting of 3 or more
names (qualifier1.qualifier2....qualifier_n), CICS constructs the model name as
qualifier1.qualifier2.MODEL.
Once the log stream has been created, CICS connects to it.
Figure 32. How a CICS journal is mapped to its log stream name (LSN). The name is
DFHJ02, used here for user journaling and file control autojournaling.
You can use the DFHJUP utility program, which uses the SUBSYS=(LOGR,...) facility,
to select, print, or copy data held on MVS system logger log streams. Alternatively,
you can write your own utility that uses the SUBSYS=(LOGR,...) facility.
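A minimal sketch of a DFHJUP job that prints the contents of a log stream is shown
below. The log stream name is illustrative, and DFHLGCNV is the name commonly
used for the CICS-supplied log stream subsystem exit routine.
//PRINTLOG JOB accounting information
//JUP      EXEC PGM=DFHJUP
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=userid.applid.DFHJ02,
//            SUBSYS=(LOGR,DFHLGCNV),
//            DCB=BLKSIZE=32760
//SYSIN    DD *
  OPTION PRINT
  END
/*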
This chapter also discusses some considerations regarding the use of the CEDA
transaction, particularly when a CSD is being shared by more than one CICS
region.
You may have used the CEDA transaction already, when running the interactive
installation verification procedures (IVPs) after installing CICS. If you ran any of the
IVPs (for example, the jobs called DFHIVPBT or DFHIVPOL), you also used a
CSD. For information about DFHIVPBT and DFHIVPOL, see the CICS Transaction
Server for OS/390 Installation Guide. Note that the CSD created by the IVPs is
limited in size, and initialized with the CICS-supplied resource definitions only.
A CSD is mandatory for some resource definitions. If you are creating a CSD for
the first time, go through the steps listed below under “Summary of steps to create
a CSD”. The remainder of this chapter describes these steps in more detail.
If you are already using a CSD with a previous release of CICS, upgrade your CSD
to include CICS resource definitions new in CICS Transaction Server for OS/390
Release 3. For information about upgrading your CSD, see the CICS Operations
and Utilities Guide.
You can run the DFHCSDUP offline utility as a batch job to read from and write to
the CSD. You should give UPDATE access to the CSD to only those users who are
permitted to use the DFHCSDUP utility.
6. Eligibility for BWO means that DFSMS components can back up the CSD while the data set is open for update.
For information about migrating CICS control tables to RDO using the MIGRATE
command, see the CICS Operations and Utilities Guide.
The INITIALIZE command initializes your CSD with definitions of the CICS-supplied
resources. After initialization, you can migrate resource definitions from your CICS
control tables, and begin defining your resources interactively with CEDA. You use
INITIALIZE only once in the lifetime of the CSD.
The command LIST ALL OBJECTS lists the CICS-supplied resources that are now
in the CSD.
//DEFINIT JOB accounting information
//DEFCSD EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//AMSDUMP DD SYSOUT=A
//SYSIN DD *
DEFINE CLUSTER -
(NAME(CICSTS13.CICS.applid.DFHCSD) -
VOLUMES(volid) -
KEYS(22 0) - 1
INDEXED -
RECORDS(n1 n2) -
RECORDSIZE(120 500) - 2
FREESPACE(10 10) -
SHAREOPTIONS(2) - 3
LOG(ALL) - 4
LOGSTREAMID(CICSTS13.CICS.CSD.FWDRECOV)) - 4
BWO(NO) 4
DATA -
(NAME(CICSTS13.CICS.applid.DFHCSD.DATA) -
CONTROLINTERVALSIZE(8192)) -
INDEX -
(NAME(CICSTS13.CICS.applid.DFHCSD.INDEX))
/*
//INIT EXEC PGM=DFHCSDUP,REGION=300K
//STEPLIB DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHCSD DD DSN=CICSTS13.CICS.applid.DFHCSD,DISP=SHR 5
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSIN DD *
INITIALIZE
LIST ALL OBJECTS
/*
//
Notes:
1 The key length is 22 bytes, and the KEYS parameter must be coded as shown.
2 The average record size of 120 bytes is calculated for a CSD that contains only
the CICS-supplied resource definitions (generated by the INITIALIZE and
UPGRADE commands). If you create a larger proportion of terminal resource
definition entries than are defined in the initial CSD, the average record size is
higher because of the larger size of the terminal-type entries. The TERMINAL and
TYPETERM definition record sizes are listed under “Calculating disk space” on
page 136.
4 If you are using DFSMS 1.3, you can specify the recovery attributes for the
CSD in the ICF catalog instead of using the CSD system initialization parameters. If
you decide to use the CSD in RLS mode, you must define the recovery attributes in
the ICF catalog.
If you specify LOG(ALL), you must also specify LOGSTREAMID to define the
26-character name of the MVS log stream to be used as the forward recovery log. If
you specify recovery attributes in the ICF catalog, and also want to use BWO,
specify LOG(ALL) and BWO(TYPECICS).
For a description of the commands that you can use for copying files, see the
MVS/ESA Integrated Catalog Administration: Access Method Services Reference
manual.
CSDRLS=YES, or if the recovery attributes are defined in the ICF catalog
on the LOG parameter, in which case LOGSTREAMID from the ICF catalog
is used instead.
CSDINTEG
The level of read integrity to be used for a CSD accessed in RLS-mode.
CSDJID
An identifier for automatic journaling.
CSDLSRNO
A VSAM local shared resource pool. Ignored if CSDRLS=YES.
CSDRECOV
Whether or not the CSD is recoverable. This parameter is ignored if
CSDRLS=YES and CICS uses the LOG parameter from the ICF catalog
instead. If LOG is “undefined”, any attempt to open the CSD in RLS mode
fails.
If CSDRLS=NO, this parameter is used only if LOG in the ICF catalog is
“undefined.” If LOG in the ICF catalog specifies NONE, UNDO, or ALL, the
LOG parameter overrides CSDRECOV.
CSDRLS
Whether the CSD is accessed in RLS or non-RLS mode.
CSDSTRNO
The number of strings for concurrent requests. CSDSTRNO is ignored if
CSDRLS=YES, in which case a value of 1024 is assumed.
These parameters are described in greater detail in “Chapter 21. CICS system
initialization parameters” on page 215.
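For example, the following system initialization overrides (with illustrative values,
in the format used in the SYSIN data set) define a non-RLS CSD that is forward
recoverable and eligible for BWO:
CSDRLS=NO,
CSDACC=READWRITE,
CSDRECOV=ALL,
CSDFRLOG=07,
CSDBKUP=DYNAMIC,
CSDLSRNO=1,
CSDSTRNO=6,
.END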
To optimize the sharing of a CSD, you should observe the following considerations:
For more information, see “Multiple users of the CSD within a CICS region
(non-RLS)” on page 143.
You may then use the CEDB transaction from any region to change the
contents of the CSD, and use CEDA to INSTALL into the invoking region. You
cannot use CEDA to change the CSD in region(s) that do not own the CSD.
If the CSD-owning region fails, the CSD is not available through the CEDB
transaction until emergency restart of the CSD-owning region has completed
(when any backout processing on the CSD is done). If you try to install a
CSD GROUP or LIST that is the target of backout processing, before
emergency restart, you are warned that the GROUP or LIST is internally
locked to another user. Do not run an offline VERIFY in this situation,
because backout processing removes the internal lock when emergency
restart is invoked in the CSD-owning region.
If you do not want to use the above method, but still want the CSD to be defined
as a recoverable resource, then integrity of the CSD cannot be guaranteed. In
this case, you must not specify CSDBKUP=DYNAMIC, because the CSD would
not be suitable for BWO.
v You can define several CICS regions with read/write access to the CSD, but this
should only be considered if the CICS regions run in the same MVS image, and
all are at the latest CICS level.
v If you give several CICS regions read/write access to the same CSD, and those
regions are in the same MVS image, integrity of the CSD is maintained by the
SHAREOPTIONS(2) operand of the VSAM definition, as shown in the
“sample job stream” on page 138.
v If you give several CICS regions read/write access to the same CSD, and those
regions are in different MVS images, the VSAM SHAREOPTIONS(2) operand
does not provide CSD integrity, because the VSAMs for those MVS images do
not know about each other.
For more information about shared CSD access within one MVS image, see
“Sharing a CSD by CICS regions within a single MVS image (non-RLS)” on
page 143. For more information about shared CSD access across several MVS
images, see “Sharing a CSD in a multi-MVS environment (non-RLS)” on page 144.
For information about sharing the CSD between different releases of CICS, see
“Sharing the CSD between different releases of CICS” on page 145.
For information about other factors that can restrict access to a CSD, see “Other
factors restricting CSD access” on page 146.
For information about the system initialization parameters for controlling access to
the CSD, see “File processing attributes for the CSD” on page 139.
The number of concurrent requests that may be processed against the CSD is
defined by the CSDSTRNO system initialization parameter. Each user of CEDA (or
CEDB or CEDC) requires two strings, so calculate the CSDSTRNO value by first
estimating the number of users that may require concurrent access to the CSD, and
then multiply the number by two.
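For example, if you expect a maximum of four concurrent users of these
transactions, specify CSDSTRNO=8.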
CEDA issues a diagnostic message if the CSDSTRNO value is too small to satisfy
the instantaneous demand on the CSD for concurrent requests. A subsequent
attempt to reissue the command succeeds if the conflict has disappeared. If
conflicts continue to occur, increase the CSDSTRNO value.
Note: Read integrity is not guaranteed in a CICS region that has read-only
access to a shared CSD. For example, if one CICS region that has full
read/write access updates a shared CSD with new or changed definitions,
another CICS region with read-only access might not obtain the updated
information. This could happen if a control interval (CI) already held by a
read-only region (before an update by a read/write region) is the same CI
needed by the read-only region to obtain the updated definitions. In this
situation, VSAM does not reread the data set, because it already holds
the CI. However, you can minimize this VSAM restriction by specifying
CSDLSRNO=NONE, and the minimum values for CSDBUFNI and
CSDBUFND, but at the expense of degraded performance. See
“Specifying read integrity for the CSD” on page 148 for information about
read integrity in a data set accessed in RLS mode.
If you define several CICS regions with read/write access to the CSD, those regions
should all be at the latest level. Only one CICS region with read/write access can
use a CEDA, CEDB, or CEDC transaction to access the CSD, because the VSAM
SHAREOPTIONS(2) definition prevents other regions from opening the CSD.
If you are running CICS with the CSD defined as a recoverable resource
(CSDRECOV=ALL), see “Planning for backup and recovery” on page 150 for some
special considerations.
You can use CEMT to change the file access attributes of the CSD, or you can use
the EXEC CICS SET FILE command in an application program. However, ensure
that the resulting attributes are at least equivalent to those defined either by
CSDACC=READWRITE or CSDACC=READONLY. These system initialization
parameters allow the following operations on the CSD:
CSDACC operand
Operations
READONLY
Read and browse.
READWRITE
Add, delete, update, read and browse.
Because of these limitations, active and alternate CICS regions running in different
MVS images must not share the CSD with other CICS regions, unless you are
using some form of global enqueuing (for example, with global resource
serialization (GRS)).
These multi-MVS restrictions also apply to running the offline utility, DFHCSDUP.
Note the following limitations when the activities listed in Table 21 are attempted
concurrently:
1. You cannot run DFHCSDUP in read/write mode in a batch region if any CICS
region using the same CSD is running one of the CEDA, CEDB, or CEDC
transactions. (The exception is when the CEDx transactions accessing the CSD
are in a region (or regions) for which the CSD is defined as read-only.)
2. None of the CEDx transactions runs if the CSD to be used is being accessed by
the DFHCSDUP utility program in read/write mode. (This restriction does not
apply if the transaction is run in a region for which the CSD is defined as
read-only.)
3. None of the CEDx transactions runs in a CICS region whose CSD is defined for
read-write access if any of the RDO transactions are running in another CICS
region that has the CSD defined for read-write access.
A CICS region starting with an initial or cold start opens the CSD for read access
only during initialization, regardless of the CSDACC operand. This enables a CICS
region to be initialized even if a user on another region or the DFHCSDUP utility
program is updating the CSD at the same time. After the group lists are installed,
CICS leaves the CSD in a closed state.
On a warm or emergency start, the CSD is not opened at all during CICS
initialization if CSDRECOV=NONE is coded as a system initialization parameter.
However, if CSDRECOV=ALL is coded, and backout processing is pending on the
CSD, the CSD is opened during CICS initialization on an emergency start.
For information about using the CEDA and CEDB ALTER commands to update
resource definitions in compatibility mode, see the CICS Resource Definition Guide.
You can also use the CSD utility program, DFHCSDUP, to update resources that
specify obsolete attributes. A compatibility option is added for this purpose, which
you must specify on the PARM parameter on the EXEC PGM=DFHCSDUP
statement. You indicate the compatibility option by specifying COMPAT or
NOCOMPAT. The default is NOCOMPAT, which means that you cannot update
obsolete attributes.
For earlier releases of CICS that do not provide the DFHDB2 group, you must use
your own resource definitions that specify the resource names appropriate for the
release of CICS and DB2.
For information about upgrading your CSD, and about the compatibility groups in
CICS Transaction Server for OS/390 Release 3, see the CICS Resource Definition
Guide.
Access to the CSD is not released until the RDO transaction using it is ended, so
users of CEDA, CEDB, and CEDC should ensure that a terminal running any of
these transactions is not left unattended. Always end the transaction with PF3 as
soon as possible. Otherwise, users in other regions are unable to open the CSD.
There may be times when you cannot create definitions in a group or list. This
situation arises if an internal lock record exists for the group or list you are trying to
update. If you are running the DFHCSDUP utility program (or a CEDA transaction)
when this occurs, CICS issues a message indicating that the group or list is locked.
The following requirements and rules apply to using the CSD in RLS-mode:
v Your CICS regions must run in an RLS-capable environment. That is, all the
CICS regions must reside in a parallel sysplex, and an SMSVSAM server must
be running in each MVS image that supports one or more CICS regions.
v The CSD must reside in SMS-managed storage.
v You must specify CSDRLS=YES in all CICS regions that are sharing the CSD in
RLS-mode, and RLS must be enabled in each region (by the RLS=YES system
initialization parameter).
v As soon as the first CICS region opens the CSD in RLS mode, it can only be
opened in RLS mode by other CICS regions. If a CICS region attempts to open
the CSD in non-RLS mode when it is open in RLS mode by other regions, the
non-RLS open request fails.
Note: This rule means that you cannot use a CSD in RLS mode on a CICS
release that supports RLS and share it with CICS regions that do not
support RLS. Sharing with non-RLS capable regions means that such a CSD
can be used only in non-RLS mode.
v All the rules governing the use of a data set in RLS mode apply also to the
CSD—there are no special rules for the CSD because it is a CICS system data
set.
v Any number of CICS regions can open the CSD in RLS mode and all can use
CEDA to update the data set with full integrity. The CICS regions can reside in
different MVS images, but the MVS images must be in the same sysplex. There
is no need to restrict updating to only one CICS region as in the case of
non-RLS sharing, and you can specify the CSDACC=READWRITE system
initialization parameter for all CICS regions that specify CSDRLS=YES.
Differences in CSD management between RLS and non-RLS access
Although a CSD accessed in RLS mode is protected by VSAM RLS locking, this
operates at the CICS file control level. It does not change the way the CEDA and
CEDB transactions manage the integrity of CSD groups.
The CEDx transactions protect resource definitions in the same way for RLS mode
and non-RLS mode CSDs. They protect individual resource definitions against
concurrent updates by a series of internal locks on the CSD. The RDO transactions
apply these locks at the group level. While RDO transactions are executing a
command that updates any element in a group, they use the internal lock to prevent
other RDO transactions within a CICS region from updating the same group. The
locks are freed only when the updating command completes execution. Operations
on lists are protected in the same way. However, in an RLS environment, these
internal locks affect all CICS regions that open the CSD in RLS mode. In the
non-RLS case they apply only to the CICS region that has the data set open for
update (which can only be a single region).
The use of a single buffer pool by the SMSVSAM server removes some of the
problems of sharing data that you get with a non-RLS CSD.
You cannot run DFHCSDUP while CICS regions have the CSD open in RLS mode
if the CSD is defined as recoverable. This is because a non-CICS job, such as
DFHCSDUP, is not allowed to open a recoverable data set for output in non-RLS
mode while it is already open in RLS mode. Therefore, before you can run
DFHCSDUP, you must quiesce the CSD by issuing a CEMT, or an EXEC CICS,
SET DATASET(...) QUIESCED command.
For a recoverable CSD, the main factor to consider when planning whether to use
RLS is how much you use DFHCSDUP compared with the CEDx transactions. If
you use DFHCSDUP frequently to update your production CSD, you may decide
that it is better to use the CSD in non-RLS mode. On the other hand, if you use
DFHCSDUP only occasionally, and you want the ability to update the CSD online
from any CICS region, use RLS.
Alternatively, because the CSD is open for update whenever RDO work is taking
place, it is a good candidate for eligibility for BWO. If the CSD is specified as
eligible for BWO, and the data set is corrupted, you can restore a BWO image of
the CSD using DFSMSdss, then run forward recovery to the point of corruption
using a forward recovery utility.
For a CSD opened in RLS mode, the recovery attributes must be defined in the ICF
catalog entry for the CSD, and CICS uses the forward recovery log’s log stream
name (LSN) from the ICF catalog.
For a CSD opened in non-RLS mode, the recovery attributes can be defined in the
ICF catalog entry for the CSD, or on the CSD system initialization parameters. The
forward recovery log’s log stream name (LSN) is retrieved from either CSDFRLOG
or the ICF catalog. If LOG is defined in the catalog, the forward recovery log stream
specified in the catalog is used. If LOG is not defined, the CSDFRLOG journal id is
used to determine the log stream name.
For a CSD opened in non-RLS mode, you can use the system initialization
parameter CSDBKUP=DYNAMIC|STATIC to indicate whether the CSD is eligible for
BWO. Specify CSDBKUP=DYNAMIC for BWO support, or STATIC (the default) for
a “normal” quiesced backup. If you specify BWO support for the CSD you must also
define it as forward recoverable. For more information about BWO, see “Backup
while open (BWO) of VSAM files” on page 101.
For a CSD opened in RLS mode, you must specify all recovery attributes, which
includes backup, in the ICF catalog. BWO backup eligibility is specified using
BWO(TYPECICS).
If you specify forward recovery for the CSD, changes (after images) made by CICS
to the CSD are logged in the forward recovery log stream. Using the latest backup,
and the after images from forward recovery log stream, you can recover all the
changes made by running a recovery program, such as the CICS VSAM forward
recovery utility. After performing forward recovery, you must reenter any CEDA
transactions that were running at the time of failure, as these are effectively backed
out by the forward recovery process. You can find details of these in the CSDL
transient data destination, which is the log for copies of all CEDA commands. See
“RDO command logs” on page 153 for more information.
Recoverability, forward recovery log stream names, and BWO eligibility can be
defined optionally in the ICF catalog for a non-RLS accessed CSD, but must be
defined in the ICF catalog if the CSD is accessed in RLS mode.
Table 23. CSDBKUP and related system initialization parameters during CICS
override processing (CSDRLS=NO). The CSDBKUP values are subject to the Notes
that follow the table.

CSDRECOV=ALL, CSDFRLOG from 01 through 99, CSDBKUP either DYNAMIC or STATIC:
   Result: OK.
CSDRECOV=ALL, CSDFRLOG=NO, CSDBKUP either DYNAMIC or STATIC:
   Result: Message DFHPA1944 is issued, stating that CSDRECOV=ALL cannot be
   specified without a CSDFRLOG if CSDRLS=NO. CICS initialization is terminated.
CSDRECOV=BACKOUTONLY or NONE, CSDFRLOG from 01 through 99, CSDBKUP=DYNAMIC:
   Result: Processing continues, and messages DFHPA1929, stating that CSDBKUP
   has defaulted to STATIC, and DFHPA1930, stating that CSDFRLOG has been
   ignored, are issued.
CSDRECOV=BACKOUTONLY or NONE, CSDFRLOG=NO, CSDBKUP=DYNAMIC:
   Result: Processing continues, and message DFHPA1929 is issued, stating that
   CSDBKUP has defaulted to STATIC.
CSDRECOV=BACKOUTONLY or NONE, CSDFRLOG=NO, CSDBKUP=STATIC:
   Result: OK.
CSDRECOV=BACKOUTONLY or NONE, CSDFRLOG from 01 through 99, CSDBKUP=STATIC:
   Result: Processing continues, and message DFHPA1930 is issued, stating that
   CSDFRLOG has been ignored.
Notes:
1. When CSDBKUP=DYNAMIC, the CSD is eligible for BWO.
2. Backup and recovery attributes must be specified in the ICF catalog for a CSD
opened in RLS mode (CSDRLS=YES).
3. Backup and recovery attributes can optionally be specified in the ICF catalog for
a CSD opened in non-RLS mode (CSDRLS=NO), but you must still have a
consistent set of parameters as defined in the table above.
Write and test procedures for backing up and recovering your CSD before
beginning to operate a production CICS region.
Forward recovery of the CSD is not possible if CSD updates are made outside
CICS. To enable recovery of the updates made outside CICS, you need to use an
image copy. If you update the CSD from outside CICS, do not use CEDA to update
the CSD until an image copy has been made.
For information about CEDA command syncpoint criteria, see “CEDA command
syncpoint criteria”. For information about sharing the CSD between CICS regions,
see “Sharing and availability of the CSD in non-RLS mode” on page 140. For
information about using the DFHCSDUP utility to access the CSD, see “Accessing
the CSD by the offline utility program, DFHCSDUP”.
Commands that change the contents of the CSD commit or back out changes at
the single command level. The exception to this rule is a generic ALTER command.
A generic ALTER command is committed or backed out at the single resource level.
These situations are analogous to the problems met when using multiple read/write
regions, and are discussed above.
CAIL Logs autoinstall terminal model entries installed in the TCT, and entries
deleted from the TCT.
CRDI Logs installed resource definitions of programs, transactions, mapsets,
profiles, partition sets, files, and LSR pools.
CSDL Logs RDO commands that affect the CSD.
CSFL Logs file resources installed in the active CICS region. That is, all file
entries installed in the FCT, entries deleted from the FCT, dynamically
installed entries that are discarded, and messages from dynamic allocation
of data sets and from loading CICS data tables.
CSKL Logs transaction and profile resources installed in the active CICS region.
That is, all transaction and profile entries installed in the PCT, entries
deleted from the PCT, and dynamically installed entries that are discarded.
CSPL Logs program resources installed in the active CICS region. That is, all
program entries installed in the PPT, entries deleted from the PPT, and
dynamically installed entries that are discarded.
CSRL Logs changes to the set of partner resources installed in the active CICS
region. That is, all operations that install or discard partner resources.
If you want these RDO command logs sent to the same destination (CSSL) as the
messages, you can use the definitions shown in Figure 34 on page 155. If you like,
you can direct these logs to any other transient data queue, or define them as
extrapartition data sets.
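A sketch of two such definitions is shown below, assuming indirect queue
definitions that point at CSSL; the group name is illustrative.
DEFINE TDQUEUE(CSDL) GROUP(RDOLOGS) TYPE(INDIRECT) INDIRECTNAME(CSSL)
       DESCRIPTION(RDO command log)
DEFINE TDQUEUE(CRDI) GROUP(RDOLOGS) TYPE(INDIRECT) INDIRECTNAME(CSSL)
       DESCRIPTION(RDO install log)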
You usually need the CSD DD statement to include DISP=SHR. (See “Sharing
and availability of the CSD in non-RLS mode” on page 140.)
If you include a DD statement for the CSD in the CICS startup job, the CSD is
allocated at the time of CICS job step initiation, and remains allocated for the
duration of the CICS job step.
v You may prefer to take advantage of the CICS dynamic allocation of the CSD. If
so, do not provide a DD statement for the CSD in the startup job stream. If there
is a CSD DD statement, it is used instead of dynamic allocation. To dynamically
allocate the CSD, specify the data set name (DSNAME) and disposition (DISP)
of the CSD, using one of the following methods:
– The CSDDSN and CSDDISP system initialization parameters
– The CEMT SET FILE command
– The EXEC CICS SET FILE command
CICS then uses the full data set name (DSNAME) to allocate the CSD as part of
OPEN processing. The CSD is automatically deallocated when the last entry
associated with it is closed.
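For example, you could specify the following system initialization parameters (the
data set name is illustrative):
CSDDSN=CICSTS13.CICS.applid.DFHCSD,
CSDDISP=SHR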
For more information about OPEN processing, see “Chapter 18. Defining user
files” on page 189. For information about the parameters that you can code for
the CSD in the SIT, see “Chapter 21. CICS system initialization parameters” on
page 215.
For information about the CEDA, CEDB, and CEDC transactions, see the CICS
Resource Definition Guide.
Note: If you are using a CSD with an earlier release of CICS, upgrade your CSD
as part of the process of migration. For information about upgrading the
CSD, and about the release compatibility of the CSD after upgrading, see
the CICS Transaction Server for OS/390 Migration Guide.
For information about the DFHCSDUP utility program and the available commands,
see the CICS Operations and Utilities Guide.
The CSD is allocated by MVS when the CICS job step is initiated. This means the
DD statements in the CICS startup job streams defining the CSD for the active and
alternate CICS regions must specify DISP=SHR.
The alternate CICS region does not open the CSD during initialization, or before
takeover occurs. The alternate CICS region does not even open the CSD during
takeover, if the CSD was not changed at any time by the active CICS region. (For
example, the CSD might have been used only to install a group list at CICS startup,
and subsequently by read-only operations.) However, if you use the CEDA
transaction in an active CICS region to alter resource definitions, the CSD might be
opened at takeover, to perform any file backout that is necessary. To enable file
backout to occur, you must define the CSD as a recoverable resource by the
system initialization parameter CSDRECOV; see “File processing attributes for the
CSD” on page 139.
For more information about using the CSD as a recoverable file, see “Planning for
backup and recovery” on page 150.
Chapter 14. Defining and using catalog data sets
This chapter describes how to define and use the CICS global catalog data set
(GCD), and the CICS local catalog data set (LCD), which CICS needs to catalog
CICS system information. For the rest of this chapter, these data sets are referred
to as the global catalog and the local catalog. (The CICS catalog data sets are not
connected with MVS system catalogs, and contain data that is unique to CICS.)
Notes:
1. You must define and initialize new CICS catalogs for CICS Transaction Server
for OS/390 Release 3.
2. If you redefine either the global catalog or the local catalog, it is recommended
that you redefine the other too.
For more information about how CICS uses the catalogs for startup and restart, see
“The role of the CICS catalogs” on page 325.
For further information about what is written to the global catalog, and about how
CICS uses the global catalog for startup and restart, see “The role of the CICS
catalogs” on page 325.
Figure 35. Example job to define and initialize the global catalog
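A minimal sketch of such a job follows; the data set names, control interval size, and DFHRMUTL option shown are illustrative assumptions, so substitute values that suit your installation. The numbered notes below describe the cluster name, the extent sizes, and the INITGCD step.
//DEFGCD   JOB 'accounting info',name,MSGCLASS=A
//GCD      EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
     DEFINE CLUSTER -
            (NAME(CICSTS13.CICS.applid.DFHGCD) -
            INDEXED -
            KEYS(28 0) -
            CYLINDERS(n1 n2) -
            FREESPACE(10 10) -
            SHAREOPTIONS(2) -
            VOLUMES(volid)) -
            DATA -
            (NAME(CICSTS13.CICS.applid.DFHGCD.DATA) -
            CONTROLINTERVALSIZE(8192)) -
            INDEX -
            (NAME(CICSTS13.CICS.applid.DFHGCD.INDEX))
/*
//INITGCD  EXEC PGM=DFHRMUTL,REGION=1M
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHGCD   DD DSN=CICSTS13.CICS.applid.DFHGCD,DISP=OLD
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
SET_AUTO_START=AUTOINIT
/*
//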
Notes:
1 The data set name in the CLUSTER definition must be the same as the DSN
parameter in the DD statement for the global catalog in the CICS startup job
stream.
2 The primary and secondary extent sizes are shown as n1 and n2 cylinders.
Calculate the size required to meet your installation’s needs, and substitute your
values for n1 and n2.
3 Whichever IDCAMS parameter you use for the GCD space allocation (CYLINDERS,
TRACKS, or RECORDS), make sure that you specify a secondary extent. CICS
abends if your GCD fills and VSAM cannot create a secondary extent.
4 This job does not specify a RECORDSIZE value for the global catalog, which
therefore defaults to using an average and maximum record size of 4089 bytes,
RECORDSIZE(4089 4089). If your maximum record size is greater than 4089, you
must add a RECORDSIZE parameter to the sample job to specify your own value.
For information about record sizes, see Table 24 on page 163.
5 This job stream does not specify a BUFFERSPACE parameter, although you can
code an explicit value if you want to define buffers of a specific size.
BUFFERSPACE is the minimum bufferspace permitted; VSAM defaults to a
bufferspace value equal to twice the CI size of the data component, plus the CI size
of the index, which gives a default of 20480 bytes in the example job. A larger
minimum buffer size (bufferspace) may improve cold start and warm restart times,
and may significantly reduce CICS shutdown times.
Another way to define buffer space for the GCD is by means of the AMP parameter
on the DD statement for the GCD in the CICS startup job stream, which you can
use to override the default or defined value. (Note, however, that the BUFSP
parameter defines the maximum bufferspace. If you specify a BUFSP value on the
AMP parameter that is smaller than the BUFFERSPACE value specified in the
DEFINE statement, the BUFFERSPACE value takes precedence.)
For performance reasons, CICS defines a STRNO (number of strings) value of 32.
Based on the example job stream in Figure 35 on page 160, the absolute minimum
value of BUFSP is calculated as follows:
BUFND = STRNO + 1 = 33
BUFNI = STRNO = 32
BUFSP = (BUFND x data CI size) + (BUFNI x index CI size)
      = (33 x 8192) + (32 x 1024)
      = 303104 bytes
Note: This is the smallest figure that can be used for BUFSP.
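For example, assuming the CI sizes and string number above, the AMP parameter on the GCD DD statement might be coded as follows (the values are illustrative):
//DFHGCD   DD DSN=CICSTS13.CICS.applid.DFHGCD,DISP=OLD,
//            AMP=('BUFND=33,BUFNI=32,BUFSP=303104')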
The principal factors affecting CICS startup and shutdown times are:
v The number of resources defined in the group list for those definitions managed
by RDO
v The number of resources defined in CICS tables
v The size of the system log
6 The job step INITGCD uses the recovery manager utility program, DFHRMUTL,
to initialize the data set. DFHRMUTL writes a record to the data set, specifying that,
on its next run using this global catalog, if START=AUTO is specified, CICS is to
perform an initial start and not prompt the operator for confirmation. This record is
called the autostart override record.
DFHRMUTL can also be used to override the type of start that occurs on an
automatic startup, forcing a cold start.
For full information about DFHRMUTL, and further examples of its use, see the
CICS Operations and Utilities Guide.
In earlier releases of CICS, IDCAMS was used to write an initial record, using
REPRO, to initialize the global catalog. Although you can still run this step, either
before or after running DFHRMUTL, this practice has been replaced by the use of
DFHRMUTL to initialize the global catalog. See Figure 35 on page 160.
7 It is recommended that you also run the DFHCCUTL utility in this same job.
Run DFHRMUTL first and check its return code before running DFHCCUTL.
To specify that the next start should be cold, use the DFHRMUTL utility with the
SET_AUTO_START=AUTOCOLD option, instead of specifying the START=COLD
system initialization parameter (a sketch of such a job follows the list below). This
has the following advantages:
v You do not have to reset the START system initialization parameter from AUTO
to COLD, and back again.
v Because sufficient information is preserved on the global catalog and the system
log, CICS is able to recover information for remote systems from the log, and to
reply to remote systems in a way that enables them to resynchronize their units
of work.
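A sketch of such a DFHRMUTL step (the data set names are illustrative assumptions) is:
//RMUTL    EXEC PGM=DFHRMUTL,REGION=1M
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHGCD   DD DSN=CICSTS13.CICS.applid.DFHGCD,DISP=OLD
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
SET_AUTO_START=AUTOCOLD
/*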
You can speed up a cold start by using the DFHRMUTL COLD_COPY option to
copy only those records that are needed for the cold start to another catalog data
set. If the return code set by DFHRMUTL indicates that the copy was successful, a
subsequent job-step can copy the new (largely empty) catalog back to the original
catalog data set. The performance gain occurs because, at startup, CICS does not
have to spend time deleting all the definitional records from the catalog. This
technique will also speed up initial starts, for the same reason. Figure 36 on
page 163 is an example of this technique.
Note: Before you use COLD_COPY, you should be certain that you wish to
perform a cold or initial start. As a safeguard, make a backup copy of the
original global catalog before you copy the new catalog output by
DFHRMUTL over it. For more information about the use of the global catalog
in a cold start of CICS, see “Classes of start and restart” on page 325.
Figure 36. DFHRMUTL example—setting the global catalog for a cold start. The
COLD_COPY option is used to improve startup performance. Note that the NEWGCD and
DFHGCD data sets must have been defined with the REUSE attribute.
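As a hedged illustration of this technique (the data set names and the condition-code test are assumptions; see the CICS Operations and Utilities Guide for the definitive DFHRMUTL examples), the job might take this general form:
//RMUTL    EXEC PGM=DFHRMUTL,REGION=1M
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHGCD   DD DSN=CICSTS13.CICS.applid.DFHGCD,DISP=OLD
//NEWGCD   DD DSN=CICSTS13.CICS.applid.NEWGCD,DISP=OLD
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
SET_AUTO_START=AUTOCOLD,COLD_COPY
/*
//* Copy the new (largely empty) catalog back only if DFHRMUTL gave RC=0
//COPYBACK EXEC PGM=IDCAMS,COND=(0,NE,RMUTL)
//DFHGCD   DD DSN=CICSTS13.CICS.applid.DFHGCD,DISP=OLD
//NEWGCD   DD DSN=CICSTS13.CICS.applid.NEWGCD,DISP=OLD
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
  REPRO INFILE(NEWGCD) OUTFILE(DFHGCD) REUSE
/*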
Space calculations
Each global catalog record has a 28-byte key.
To estimate the amount of space needed in your global catalog to keypoint installed
resource definitions, table entries, and control blocks, use the sizes specified in
Table 24.
Each entry is one VSAM record, and the records for each type of table have
different keys.
The space requirements for a VSAM KSDS such as DFHGCD can vary for different
CICS cold starts. This can occur even if no changes have been made to the CICS
definitions to be stored on the VSAM KSDS. This is because VSAM will utilize the
space in the data set differently depending on whether the data set has just
initialized, or has data from a previous run of CICS. CICS will call VSAM to perform
sequential writes. VSAM honors the ’freespace’ value specified on the data set’s
definition if the keys of the records being added sequentially are higher than the
highest existing key. However, if the data set contains existing records with a higher
key than the ones being inserted, ’freespace’ is only honored once a CI split has
occurred.
The size of the index portion of the data set may also vary depending on the
number of CI and CA splits that have occurred. This affects the index sequence set.
Table 24. Sizes for entries in the global catalog
Installed definition, table entry, or control block Number of bytes per entry
Notes:
2 If you open a VSAM path you get two of these; for BDAM or VSAM base data
sets you get one.
3 You will only have these if you use the VSAM RLS SHCDS option
NONRLSUPDATEPERMITTED. In this case, for each data set that you have
specified NONRLSUPDATEPERMITTED for, you could have an upper limit. This
limit is the number of different file names through which you access the data set
multiplied by the number of tasks that update the data set. You will normally only
have a few, if any, of these control blocks.
4 The TYPETERM and model TERMINAL definitions are present if you are using
autoinstall. They are stored directly in the global catalog when the definitions are
installed, either by a CEDA transaction, or as members of a group installed via a
group list. For example, if you start up CICS with the startup parameter
GRPLIST=DFHLIST, the CICS-supplied TYPETERM and model terminal definitions,
defined in the groups DFHTERM and DFHTYPE, are recorded in the global catalog.
Allow space in your calculations for all autoinstall resources installed in your CICS
region.
5 The value given is for a DWE chained off an LU6.1 session, or an APPC
session.
This is a minimum specification for a global catalog for use by a single CICS region.
Add the relevant AMP subparameters to help improve restart and shutdown time.
The AMP parameter is described in the OS/390 MVS JCL Reference manual, and
an example is shown in the CICS startup job stream in “Chapter 24. CICS startup”
on page 337.
If you are running CICS with XRF, the global catalog is passively shared by the
active and alternate CICS regions, and you must specify DISP=SHR.
The CICS domains use the local catalog to save some of their information between
CICS runs, and to preserve this information across a cold start. For further
guidance information about what is written to the local catalog, and about how CICS
uses the local catalog for startup and restart, see “Classes of start and restart” on
page 325.
The local catalog is a VSAM key-sequenced data set (KSDS). It is not shared by
any other CICS region, such as an alternate CICS in an XRF environment. If you
are running CICS with XRF, you must define a unique local catalog for the active
CICS region, and another for the alternate CICS region.
Unlike the global catalog, which must be defined with enough space to cope with
any increase in installed resource definitions, the size of the local catalog is
relatively static. The following section describes the information held on the local
catalog.
To enable you to initialize the local catalog correctly, with all the records in the
correct sequence, there is a CICS-supplied utility called DFHCCUTL that you run
immediately after you have defined the VSAM data set.
In addition to the information written to the local catalog when you first initialize it,
the loader domain writes a program definition record for each CICS nucleus
module. The number of records varies depending on the level of function you have
included in your CICS region.
| You can add records to the local catalog to enable the CICS self-tuning mechanism
| for storage manager domain subpools. For details of how to do this using the
| CICS-supplied utility program, DFHSMUTL, see the CICS Operations and Utilities
| Guide.
Finally, when you define the VSAM cluster for the local catalog, specify a secondary
extent value as a contingency allowance. See the sample job in Figure 37 on
page 169.
Figure 37. Sample job to define and initialize the local catalog
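A minimal sketch of such a job follows; the key length, record sizes, space values, and data set names shown are illustrative assumptions, so use the values supplied with your CICS-supplied samples:
//DEFLCD   JOB 'accounting info',name,MSGCLASS=A
//LCD      EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
     DEFINE CLUSTER -
            (NAME(CICSTS13.CICS.applid.DFHLCD) -
            INDEXED -
            KEYS(52 0) -
            RECORDSIZE(70 2041) -
            RECORDS(200 10) -
            SHAREOPTIONS(2) -
            VOLUMES(volid)) -
            DATA -
            (NAME(CICSTS13.CICS.applid.DFHLCD.DATA) -
            CONTROLINTERVALSIZE(2048)) -
            INDEX -
            (NAME(CICSTS13.CICS.applid.DFHLCD.INDEX))
/*
//INITLCD  EXEC PGM=DFHCCUTL
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHLCD   DD DSN=CICSTS13.CICS.applid.DFHLCD,DISP=OLD
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//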
Notes:
1 If you are defining local catalogs for multiple CICS regions (for example, for
active and alternate CICS regions when running with XRF), you can identify the
clusters uniquely by making the specific APPLID of each CICS one of the data set
qualifiers. For example, you could use the following names for the clusters of active
and alternate CICS regions, where DBDCCIC1 and DBDCCIC2 are the specific
APPLIDs:
DEFINE CLUSTER -
       (NAME(CICSTS13.CICS.DBDCCIC1.DFHLCD) -
         .
         .
DEFINE CLUSTER -
       (NAME(CICSTS13.CICS.DBDCCIC2.DFHLCD) -
         .
         .
2 Space for about 200 records should be adequate for the local catalog, but also
specify space for secondary extents as a contingency allowance.
3 The local catalog records are small by comparison with the global catalog. Use
the record sizes shown, which, in conjunction with the number of records specified,
ensure enough space for the data set.
Chapter 15. Defining and using auxiliary trace data sets
Several types of tracing are available in CICS to help you with problem
determination, and these are described in the CICS Problem Determination Guide.
Among the various types of trace, the CICS tracing handled by the CICS trace
domain allows you to control the amount of tracing that is done, and also to choose
from any of three destinations for the trace data. Any combination of these three
destinations can be active at any time:
1. The internal trace table, in main storage above the 16MB line in the CICS
address space.
2. The auxiliary trace data sets, defined as BSAM data sets on disk or tape.
3. The MVS generalized trace facility (GTF) data sets.
For information about GTF, see the OS/390 MVS Diagnosis: Tools and Service Aids
manual. For information about using CICS tracing for problem determination, see
the CICS Problem Determination Guide.
The DD names of the auxiliary trace data sets are defined by CICS as DFHAUXT
and DFHBUXT. If you define a single data set only, its DD name must be
DFHAUXT. You can allocate and catalog the auxiliary trace data sets before starting
CICS.
If you use tape for recording auxiliary trace output, use unlabeled tape. Using
standard-labeled tape, whether on a single tape drive or on two tape drives, stops
you processing the contents of any of the volumes with the DFHTU530 utility until
after the CICS step has been completed. If you use standard-labeled tape, make
sure all the output produced in the CICS run fits on the one (or two) volumes
mounted.
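CICS tracing at startup is controlled by system initialization parameters such as AUXTR (auxiliary trace on or off), AUXTRSW (automatic switching of the auxiliary trace data sets), INTTR and TRTABSZ (internal trace), and GTFTR (GTF trace). For example, you might code overrides such as the following (the values shown are illustrative) in the SYSIN data set of the startup job:
AUXTR=ON,
AUXTRSW=NEXT,
INTTR=ON,
TRTABSZ=1024,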
For more information about these system initialization parameters, and how to code
them, see “Chapter 21. CICS system initialization parameters” on page 215.
You can also control CICS tracing by means of the CICS-supplied transactions
CETR and CEMT. (Note that you cannot use CETR through an MVS console.) For
guidance information about the CICS control options available with CETR and
CEMT, see the CICS Supplied Transactions manual.
Alternatively, you can run the CICS-supplied job DFHDEFDS to create the auxiliary
trace data sets for an active CICS region or the CICS-supplied job DFHALTDS to
create them for an alternate CICS region. For information about the jobs
DFHDEFDS and DFHALTDS, see the CICS Transaction Server for OS/390
Installation Guide.
Figure 38. Sample job to define auxiliary trace data sets on disk
Notes:
1 The DCB subparameters shown in this sample job specify the required DCB
attributes for the CICS auxiliary trace data sets. As an alternative to this job, you
can specify (NEW,CATLG) on the DD statements in the CICS startup job stream,
omit the DCB parameter, and let CICS open the data sets with the same default
values.
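For example (the data set names and space values are illustrative):
//DFHAUXT  DD DSN=CICSTS13.CICS.applid.DFHAUXT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(2,1)),VOL=SER=volid
//DFHBUXT  DD DSN=CICSTS13.CICS.applid.DFHBUXT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(2,1)),VOL=SER=volid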
2 Change the space allocations in this sample job stream to suit your
installation’s needs.
Space calculations
Trace entries are of variable length, but the physical record length (block size) of
the data written to the auxiliary trace data sets is fixed at 4096 bytes. As a rough
guide, each block contains an average of 40 entries, although the actual number of
entries depends on the processing being performed.
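As a rough worked example (the trace volume is an assumption), capturing about 200 000 trace entries needs approximately 200000 / 40 = 5000 blocks, that is about 5000 x 4096 bytes, or roughly 20MB, in each auxiliary trace data set.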
If you specify BUFNO greater than 1, you can reduce the I/O overhead involved in
writing auxiliary trace records. A value between 4 and 10 can greatly reduce the I/O
overhead when running with auxiliary trace on.
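For example, you might code a DD statement such as the following (the BUFNO value is illustrative):
//DFHAUXT  DD DSN=CICSTS13.CICS.applid.DFHAUXT,DISP=SHR,DCB=BUFNO=5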
For auxiliary trace data sets on unlabeled tapes, use the following sample DD
statements:
//DFHAUXT DD DSN=CICSTS13.CICS.applid.DFHAUXT,UNIT=3400,VOL=SER=volid,
// DISP=(NEW,KEEP),LABEL=(,NL)
//DFHBUXT DD DSN=CICSTS13.CICS.applid.DFHBUXT,UNIT=3400,VOL=SER=volid,
// DISP=(NEW,KEEP),LABEL=(,NL)
If you are using tape for the auxiliary data sets, assign tape units and mount the
tapes before entering the command to start auxiliary trace. If you specify
AUXTR=ON as a system initialization parameter, ensure the tape is mounted before
starting CICS.
XRF considerations
The active and the alternate CICS regions must refer to different auxiliary trace data
sets; that is, they must be unique data sets. This means that you can capture
auxiliary trace data for the active CICS region, while the alternate CICS region is
running but before takeover occurs.
For the active CICS region, you use CETR or CEMT to control auxiliary trace data
sets. For the alternate CICS region, you use CEBT. For information about using
these transactions, see the CICS Supplied Transactions manual.
To process the separate trace data sets for active and alternate CICS regions, you
need separate utility jobs for each set of data sets. For information about
DFHTU530, see the CICS Operations and Utilities Guide.
CICS has a dump table facility that enables you to control dumps. The dump table
lets you:
v Specify the type of dump, or dumps, you want CICS to record.
v Suppress dumping entirely.
v Specify the maximum number of dumps to be taken during a CICS run.
v Control whether CICS is to terminate as a result of a failure that results in a
dump.
You can set the options you want in the dump table in two ways:
1. Using the CEMT master terminal command
2. Using the EXEC API commands
When you start CICS for the first time, CICS uses system default values for the
dump table options, and continues to use the system default values until you modify
them with a CEMT or EXEC CICS command. For information about the dump table
options you can set, see the CICS Problem Determination Guide.
Note: The MVS system dump data sets can become full with unwanted SDUMPs
that precede ASRA, ASRB, and ASRD abends (after message DFHAP0001
or DFHSR0001). To prevent this from happening, you can suppress all
SDUMPs preceding ASRA, ASRB and ASRD abends, or you can suppress
some of them. “Suppressing system dumps that precede ASRx abends” on
page 176 tells you how to do this.
System dumps
CICS produces a system dump using the MVS SDUMP macro.
You should use the MERGE function when changing the SDUMP options via the
CHNGDUMP command to ensure that the areas selected by CICS to dump are
included in the MVS dump data set output. If you use the ADD option, in many
cases it replaces the options specified by CICS when it issues the SDUMP, which
can result in partial dumps being written to the MVS dump data set. MVS always
includes LSQA and TRT in the dump but may exclude the private area if you use
the wrong options in the update by the CHNGDUMP command. You must
thoroughly review your use of the CHNGDUMP command when setting up your
CICS region. For information about the CHNGDUMP command and the effect that
altering its options has on the dump output from CICS, see the OS/390 MVS
Initialization and Tuning Guide.
If you are running CICS with XRF, the surveillance signal of the active CICS region
stops during an MVS SDUMP of the active CICS region’s address space, which
could lead to unnecessary takeovers being initiated, if the ADI (alternate delay
interval) for the alternate is set too low. However, you can prevent SDUMPs of other
address spaces from causing unnecessary takeovers when the alternate CICS is
running on a different MVS image by setting the QUIESCE=NO option for SDUMP,
using the MVS CHNGDUMP command.
If CICS storage protection is active, you can suppress the system dumps caused by
errors in application programs (after message DFHSR0001), while retaining the
dumps caused by errors in CICS code (after message DFHAP0001). To do this, use
either a CEMT SET SYDUMPCODE command, or an EXEC CICS SET
SYSDUMPCODE command to suppress system dumps for system dumpcode
SR0001. For example:
CEMT SET SYDUMPCODE(SR0001) ADD NOSYSDUMP
For more information about the storage protection facilities available in CICS, see
“Storage protection” on page 353.
If you want SDUMPs for one of these transaction abends but not the other, select
the one you want by using either a CEMT SET TRDUMPCODE or an EXEC CICS
SET TRANDUMPCODE command. Adding an entry to the dump table in this way
ensures that SDUMPs are taken for ASRB abends, for example. However, in this
case the SDUMPs are taken at a later point than SDUMPs usually taken for system
dump codes AP0001 and SR0001.
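A command of the following form (shown here as an illustration only; check the CEMT options available at your level of CICS) adds such an entry for ASRB:
CEMT SET TRDUMPCODE(ASRB) ADD SYSDUMP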
For information about the DFHAP0001 and DFHSR0001 messages, see the CICS
Messages and Codes manual and the CICS Problem Determination Guide.
With two data sets, you can print transaction dumps from one data set while CICS
is running. To do this, first use CEMT SET DUMP SWITCH to switch the data sets.
CICS closes the current data set after any transaction dump being recorded has
been completed, and opens the other data set. You can print the completed data
set with the DFHDU530 dump utility program. For information about the DFHDU530
dump utility program, see the CICS Operations and Utilities Guide.
In addition to switching dump data sets explicitly, the operator can use CEMT SET
DUMP AUTO to cause automatic switching when the current data set becomes full.
(Note that this permits one switch only.) When a transaction dump data set is full,
CICS closes the data set and issues console messages as follows:
DFHDU0303I applid Transaction Dump Data set DFHDMPx closed.
DFHDU0304I applid Transaction Dump Data set DFHDMPy opened.
DFHDU0305I applid Transaction Dump Data set switched to DFHDMPy.
where “x” and “y” can have the value A or B. If you specified DISP=SHR for the
dump data set, you can print the completed data set with the DFHDU530 utility
program and then reissue the command: CEMT SET DUMP AUTO. This again
switches data sets automatically (once only) when the current data set is full.
You can define the CICS dump data sets DFHDMPA and DFHDMPB as temporary
data sets for each CICS run. More commonly, you allocate and catalog them in
advance, reuse them repeatedly, and do not delete them when the CICS job has
ended.
You do not need DCB parameters for dump data sets (but see “Copying disk dump
data sets to tape” on page 179 for an exception). When CICS opens the dump data
set, it issues an MVS DEVTYPE macro. This returns the track size for direct access
devices, or 32760 for magnetic tape. The maximum block size used for a
transaction dump is the lesser of the values returned from the DEVTYPE macro
and 4096. As this usually results in a block size of 4096 (because devices generally
have a track size greater than this), CICS writes multiple blocks per track. After
writing each block, MVS returns the amount of space remaining on the current
track. If the space remaining is 256 bytes or more, then the size of the next block
written is the lesser of the values returned by MVS and 4096.
If the space remaining is less than 256 bytes, the next block is written to the next
track.
There are four global user exits that you can use with the transaction dump data
sets:
1. XDUCLSE, after the dump domain has closed a transaction dump data set
2. XDUREQ, before the dump domain takes a transaction dump
3. XDUREQC, after the dump domain takes a transaction dump
4. XDUOUT, before the dump domain writes a record to the transaction dump data
set
For programming information about the global user exits, see the CICS
Customization Guide.
Alternatively, you can use the sample data definition statements in Figure 39 on
page 179 to allocate and catalog dump data sets on disk.
Figure 39. Sample job control statements for defining disk dump data sets
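A sketch of such a job (the data set names and space values are illustrative assumptions) is:
//DEFDMP   JOB 'accounting info',name,MSGCLASS=A
//ALLOC    EXEC PGM=IEFBR14
//DFHDMPA  DD DSN=CICSTS13.CICS.applid.DFHDMPA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,1)),VOL=SER=volid
//DFHDMPB  DD DSN=CICSTS13.CICS.applid.DFHDMPB,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,1)),VOL=SER=volid
//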
Note: Change the space allocations in this sample job stream to suit your own
installation’s needs.
If you are running CICS with XRF, you must allocate different data sets for the
alternate.
If you use tape for recording dump output, use unlabeled tape. Standard-labeled
tape, whether on a single tape drive or on two tape drives, stops you processing
the contents of any of the volumes with the DFHDU530 utility until after the CICS
step has been completed. If you want to use standard-labeled tape, make sure that
all the output produced in the CICS run fits on the one or two volumes mounted.
You cannot catalog dump data sets defined on unlabeled tapes. Your data set
definitions must be in the CICS startup job stream each time CICS is run.
If you do not intend to copy dump data sets to tape or disk, you do not need to
include DCB parameters when defining dump data sets on disk, as illustrated in
the sample job in Figure 39.
Space calculations
For the initial installation of CICS, a dump data set of between 5 and 10MB should
be enough. When normal operation begins, you can adjust this to suit your own
installation’s requirements.
The following are examples of DD statements for transaction dump data sets on
unlabeled tapes:
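(The statements below follow the same pattern as the auxiliary trace tape examples; the data set and volume names are illustrative.)
//DFHDMPA  DD DSN=CICSTS13.CICS.applid.DFHDMPA,UNIT=3400,VOL=SER=volid,
//            DISP=(NEW,KEEP),LABEL=(,NL)
//DFHDMPB  DD DSN=CICSTS13.CICS.applid.DFHDMPB,UNIT=3400,VOL=SER=volid,
//            DISP=(NEW,KEEP),LABEL=(,NL)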
Chapter 17. Defining the CICS availability manager data sets
Both the active and alternate CICS regions must refer to the same pair of data sets.
You define these data sets, but must not try to initialize them, and you are
recommended to place the data sets on separate volumes. The first time they are
used, CICS recognizes them as a new pair of data sets. If they are new, CICS
initializes them in such a way that, from then on, they can be used only as a pair
with the original generic APPLID and for their original purpose (that is, as either an
XRF message data set or an XRF control data set). If you need to redefine either
data set, for any reason, you must redefine both of them.
You must define a separate pair of data sets for each generic APPLID in use. If a
CICS complex consists of, for example, five regions, five pairs of data sets must be
defined.
You do not need to take backup copies of these data sets, because when neither
the active nor the alternate CICS region is running, you can always start with a
fresh pair of data sets.
Why have two data sets? Because of RESERVE commands issued by other MVS
images in a multi-MVS environment, a shared DASD volume can become
inaccessible for periods ranging from milliseconds to perhaps a minute. By using
two data sets placed on different volumes, CAVM greatly reduces the risk that
normal RESERVE activity, by preventing surveillance signals from being written,
causes the unnecessary takeover of a CICS region that is running normally.
If the access paths to the two volumes are separate, CAVM is also less vulnerable
to hardware failures.
Figure 40. Sample job to define the XRF control data set
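A sketch of such a job, modeled on the XRF message data set example in Figure 41 (the record count and data set names are illustrative assumptions), is:
//CICSCTL  JOB 'accounting info',name,MSGCLASS=A
//XRCTL    EXEC PGM=IDCAMS
//DDNAME1  DD DISP=OLD,UNIT=3380,VOL=SER=volid1
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
   DEFINE CLUSTER -
          (NAME(CICSTS13.CICS.applid.DFHXRCTL) -
          RECORDSIZE(4089 4089) -
          CONTROLINTERVALSIZE(4096) -
          RECORDS(20) -
          NONINDEXED -
          SHAREOPTIONS(3 3) -
          VOL(volid1) -
          FILE(DDNAME1)) -
          DATA -
          (NAME(CICSTS13.CICS.applid.DFHXRCTL.DATA))
/*
//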
Notes:
2 The control interval sizes of the XRF control data set and the XRF message
data set must be equal, and at least 4096 bytes.
If you use the sample JCL in Figure 41 on page 184, read the accompanying notes;
the other options shown are suggestions only.
You should define the XRF message data set on a volume that is not subject to
RESERVE activity, and should not locate it where a single failure can make both it
and the XRF control data set inaccessible. This reduces the risk of the surveillance
signal being stopped accidentally while CICS is still running normally.
The XRF message data set is reserved for a short time for formatting when CICS
uses it for the first time.
//CICSMSG JOB 'accounting info',name,MSGCLASS=A
//XRMSG EXEC PGM=IDCAMS
//DDNAME2 DD DISP=OLD,UNIT=3380,VOL=SER=volid2
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
DEFINE CLUSTER -
(NAME(CICSTS13.CICS.applid.DFHXRMSG) -
RECORDSIZE(4089 4089) - 1
CONTROLINTERVALSIZE(4096) - 2
RECORDS(1500) -
NIXD -
SHAREOPTIONS(3,3) - 3
VOL(volid2) -
FILE(DDNAME2)) -
DATA - 4
(NAME(CICSTS13.CICS.applid.DFHXRMSG.DATA))
/*
//
Figure 41. Sample job to define the XRF message data set
Notes:
2 The control interval sizes of the XRF message data set and the XRF control
data set must be equal, and at least 4096 bytes.
If the CI size of the XRF message data set is greater than 4096, the CI buffers
occupy more real storage and virtual storage above the 16MB line, although fewer
I/O operations occur during the “catch-up” phase.
Space calculations
It is difficult to give a simple answer to the question: “How big should my XRF
message data set be?” The size required depends on the length and number of
messages that have been sent by the active CICS region but not yet received by
the alternate CICS region.
The XRF message data set is written and read cyclically. When the alternate CICS
region has read a message, that space becomes available for another message on
the next cycle. It is important to make the data set large enough to store the
backlog of messages that accumulates if the alternate CICS region is held up for
any reason. If the data set is too small, you run the risk of the alternate CICS
region being unable to read the data set correctly, and thereby becoming incapable
of taking over. However, the active CICS region does not write messages to the
data set until it has been notified that an alternate CICS region is present (signed
on to CAVM), and able to receive them.
Table 25 lists the sizes of the various messages sent to the data set.
Table 25. Sizes of messages sent to the XRF message data set
Type of TCT entry                                    Bytes per install
The CICS-generated TCT entries (2 only)              629
VTAM terminals                                       710
Non-3270 devices with pipeline logical units and
  TASKNO= operand (or TASKLIMIT if RDO) specified    581 x TASKNO value
MVS consoles                                         389
LUTYPE6.2 connection                                 2083
LUTYPE6.2 mode                                       169 + (837 x maximum number of sessions)
LUTYPE6.1 connection                                 226 + (732 x number of sessions)
IRC                                                  237 + (520 x number of sessions)
IRCBCH                                               240 + (565 x number of sessions)
For VTAM terminals only, you should also make allowance for the following:
Table 26. Additional space requirements (VTAM terminals only)
Bytes per logon Bytes per logoff Bytes per signon Bytes per sign-off
70 35 45 29
The alternate CICS region issues some messages that can help you with your
sizing. The following messages issued by the alternate CICS region can give you
an idea of the rate of message transfer:
DFHTC1041I applid TERMINAL CONTROL TRACKING STARTED
DFHTC1040I applid TERMINAL CONTROL TRACKING RECORDS RECEIVED
DFHTC1043I applid TERMINAL CONTROL TRACKING ENDED - nnn RECORDS RECEIVED
The following messages may indicate that the XRF message data set is not large
enough:
DFHXG6447I NON CRUCIAL XRF MESSAGE(S) DISCARDED
DFHXA6541I XRF HAS FAILED. THE XRF MESSAGE READER IN THE ALTERNATE
SYSTEM HAS FALLEN TOO FAR BEHIND
Crucial and non-crucial messages
The active CICS region classifies its messages as crucial or non-crucial. An
example of a crucial message is an autoinstall message that the alternate CICS
region must receive if it is to remain eligible to take over. An example of a
non-crucial message is a logon message. The alternate CICS region can tolerate
the loss of such a message, and the loss only results in some degradation at
takeover; no standby session is established for that terminal and it must be logged
on again. Install messages that form part of the initial description are also treated
as non-crucial, because the active CICS region can try to send them again later,
and the alternate CICS region can construct its tables from the CICS catalog if it
does not receive a complete initial description.
The active CICS region discards non-crucial messages if it decides that sending them may
overwrite messages that the alternate CICS region has not yet read, thereby
making it ineligible to take over. It issues message DFHXG6447I for the first such
discard. The active CICS region always sends crucial messages. If this causes an
unread message to be overwritten, the alternate CICS region detects it and
terminates after issuing message DFHXA6541I.
Effect of a full XRF message data set on the active CICS region
The active CICS region is not affected by the state of the XRF message data set. It
continues running even when the data set is full; only the alternate CICS region
fails. Further, the XRF message data set is only “full” to the alternate CICS region
that fails; you can start a new alternate CICS region, using the same XRF message
data set, and the active CICS region resends all the messages for the new
alternate CICS region to begin tracking. If the first failure was caused by some
unusual condition, you may not need to increase the size of the XRF message data
set.
However, if messages DFHXG6447I or DFHXA6541I occur too often, you must stop
the active CICS region so that you can change to a larger data set.
Security
To ensure that the integrity and security of your CICS regions and terminal network
are not compromised, you must protect your XRF data sets using RACF. When you
have done so, give each CICS region CONTROL access to its own pair of data
sets. If you are running your XRF systems with an overseer program, make sure
that it has READ access to all the CAVM data sets. All other users must be denied
access to the data sets.
While an alternate CICS region can receive the active CICS region’s surveillance
signals and tracking messages successfully, in addition to writing its own
surveillance signals to either the XRF control data set or the XRF message data
set, it keeps running in spite of some types of I/O error. However, an isolated I/O
error that would have no effect during tracking may cause the alternate CICS
region to fail if it occurs during takeover.
Note: When the active and alternate CICS regions are running in different MVS
images, they are not necessarily affected in the same way by the failure of a
control unit or channel path that provides access to a CAVM XRF data set.
Chapter 18. Defining user files
This chapter tells you how to define user files and how to access VSAM data sets,
BDAM data sets, data tables, and coupling facility data tables.
| CICS application programs process files, which, to CICS, are logical views of a
| physical data set or data table. For data tables, the file provides a view of the data
| table, which resides either in data space storage or in a coupling facility structure.
| Except in the case of coupling facility data tables, for which an underlying physical
| data set is optional, a data table is also associated with a source data set from
| which the table is loaded. For non-data-table files, the file provides a view of the
| data set.
| A file is identified to CICS by a file name of up to eight characters, and there can
| be many files defined to CICS that refer to the same physical data set or data table.
| This has the following effect, depending on the type of object the file is defining:
| v For non data table files, if more than one file refers to the same data set, each
| file refers to the same physical data.
| v For user-maintained data tables, if more than one file refers to the same data
| set, each file represents a view of a unique data table.
| v For CICS-maintained data tables, if more than one file refers to the same data
| set, only one can be defined as a CMT. The other files access data from the
| CMT created by the CMT file definition.
| v For coupling facility data tables, if more than one file refers to the same data set,
| each file represents a view of a unique coupling facility data table in a CFDT pool
| (unless each file specifies the same table name and pool name, in which case
| they each provide a separate view of the same table).
| A data set, identified by a data set name (DSNAME) of up to 44 characters, is a
| collection of data held on disk. CICS file control processes only VSAM or BDAM
| data. Any data sets referred to by CICS files must be created and cataloged, so
| that they are known to MVS before any CICS job refers to them. Also, the data sets
| are usually initialized by being preloaded with at least some data before being used
| by CICS transactions.
| You can use coupling facility data tables to share data across a sysplex, using the
| CICS file control API, subject to some restrictions, such as a maximum key length
| of 16 bytes.
You can use RLS access mode to share VSAM data sets between CICS
application-owning regions throughout a sysplex. See “VSAM record-level sharing
(RLS)” on page 192 for further information.
|
VSAM data sets
You create a VSAM data set by running the Access Methods Services (AMS) utility
program IDCAMS in a batch job, or by using the TSO DEFINE command in a TSO
session. The DEFINE command specifies to VSAM and MVS the VSAM attributes
and characteristics of your data set. You can also use it to identify the catalog in
which your data set is to be defined.
If required, you can load the data set with data, again using IDCAMS. You use the
AMS REPRO command to copy data from an existing data set into the newly
created one.
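For example, a KSDS definition and load might look like the following IDCAMS input (the names, key length, record sizes, and input data set are illustrative assumptions):
  DEFINE CLUSTER -
         (NAME(CICSTS13.CICS.vsam.user.file) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(40 80) -
         RECORDS(1000 100) -
         VOLUMES(volid)) -
         DATA -
         (NAME(CICSTS13.CICS.vsam.user.file.DATA)) -
         INDEX -
         (NAME(CICSTS13.CICS.vsam.user.file.INDEX))

  REPRO INDATASET(existing.input.dataset) -
        OUTDATASET(CICSTS13.CICS.vsam.user.file)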
You can also load an empty VSAM data set from a CICS transaction. You do this by
defining the data set to CICS (by allocating the data set to a CICS file), and then
writing data to the data set, regardless of its empty state. See “Loading empty
VSAM data sets” on page 191.
When you create a data set, you may define a data set name of up to 44
characters. If you choose not to define a name, VSAM assigns the name for you.
This name, known as the data set name (or DSNAME), uniquely identifies the data
set to your MVS system.
You can define VSAM data sets accessed by user files under CICS file control as
eligible to be backed up while CICS is currently updating these data sets. For more
information about backing up VSAM files open for update, see “Backup while open
(BWO) of VSAM files” on page 101.
Depending on the type of data set, you can identify a record for retrieval by its key
(a unique value in a predefined field in the record), by its relative byte address, or
by its relative record number.
Sometimes you may need to identify and access your records by a secondary or
alternate key. With VSAM, you can build one or more alternate indexes over a
single base data set, so that you do not need to keep multiple copies of the same
information organized in different ways for different applications. Using this method,
you create an alternate index path (or paths) that links the alternate index (or
indexes) with the base data set.
When you create a path you give it a name of up to 44 characters, in the same way
as a base data set. A CICS application program does not need to know whether it
is accessing your data via a path or a base, except that it may be necessary to
allow for duplicate keys if the alternate index was specified to have non-unique
keys.
Note: Although VSAM imposes some restrictions during initial data set load
processing, when the data-set is said to be in load mode, these do not
affect CICS transactions. For files opened in non-RLS mode, CICS file
control “hides” load mode processing from your application programs. For
files opened in RLS mode against an empty data set, load mode
processing is hidden from CICS by VSAM, and all VSAM requests are
allowed.
Using IDCAMS
If you have a large amount of data to load into a new data set, run the AMS utility
program IDCAMS as a batch job, using the REPRO command to copy data from an
existing data set to the empty data set. When you have loaded the data set with
IDCAMS, it can be used by CICS in the normal way.
Note: A data set in VSAM load mode cannot have alternate indexes in the upgrade
set. If you want to create and load a data set with alternate indexes, you
must use AMS, or some other suitable batch program, to load the data set
and invoke BLDINDEX to create the alternate indexes.
When the first write, or series of writes (mass insert), to the file is completed, CICS
closes the file and leaves it closed and enabled, so that it will be reopened for
normal processing when next referenced. If you attempt to read from a file in load
mode, CICS returns a NOTFOUND condition.
Note: If you define a data set to VSAM with the average and maximum record
lengths equal, and define a file to CICS with fixed length records to
reference that data set, the size of the records written to the data set must
be of the defined size. For example, if a record in a data set has been read
for update, you get errors when rewriting the record if, for example, you:
v Defined the record sizes to VSAM as 250 bytes, with the parameter
RECORDSIZE(250 250)
v Defined the file to CICS with the parameter RECFORM=FIXED
v Loaded the data set with records that are only 200 bytes long
With RLS, CICS regions that share VSAM data sets can reside in one or more MVS
images within an MVS parallel sysplex. This concept, in a parallel sysplex with
VSAM RLS supporting a CICSplex, is illustrated in Figure 42 on page 193.
Figure 42. Diagram illustrating a parallel sysplex with RLS. This view of RLS shows multiple CICS regions using
VSAM RLS, through the services of an SMSVSAM server in each MVS image.
Without RLS support (the RLS=NO system initialization parameter), multiple CICS
regions cannot open the same VSAM data set concurrently in a non-RLS access
mode (such as LSR or NSR). With these access modes, to share VSAM data
between CICS regions, you must either:
v Use shared data tables,
or
v Allocate the VSAM data sets to one CICS region, a file-owning region (FOR),
and function ship file requests from the applications to the FOR using either
MRO or APPC connections.
With RLS support, multiple CICS regions can open the same data set concurrently.
To use RLS:
v You need a level of DFSMS that supports RLS, and RLS=YES specified as a
CICS system initialization parameter
v The CICS regions must all run in the same parallel sysplex
v There must be one SMSVSAM server started in each MVS image
v Specify RLSACCESS(YES) in the CICS file resource definition to provide full
update capability for data sets accessed by multiple CICS regions.
For details of all the steps necessary to set up support for VSAM RLS, see the
CICS Transaction Server for OS/390 Installation Guide.
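For example, a file definition enabling RLS access might include attributes such as the following (the file, group, and data set names are illustrative):
CEDA DEFINE FILE(VSAM1A) GROUP(MYGROUP)
     DSNAME(CICSTS13.CICS.VSAM.USER.FILE)
     RLSACCESS(YES) READINTEG(UNCOMMITTED)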
However, with RLS support, data sets can be shared in mixed access mode,
between CICS regions and batch jobs. Mixed access mode means that a data set
is open in RLS mode and a non-RLS mode concurrently by different users.
Although data sets can be open in different modes at different times, all the data
sets within a VSAM sphere normally should be opened in the same mode. (A
sphere is the collection of all the components—the base, index, any alternate
indexes and alternate index paths—associated with a given VSAM base data set.)
However, VSAM does permit mixed-mode operations on a sphere by different
applications, subject to some CICS restrictions. In the following discussion about
mixed-mode operation, references to a data set refer to any component of the
sphere.
CICS restrictions: You can open a file in RLS mode or non-RLS mode in a CICS
region when the referenced data set is already open in a different mode by another
user (CICS region or batch job). However, CICS imposes some restrictions of its
own in addition to the above VSAM rules.
BDAM data sets
A BDAM data set must contain data before it is used in a CICS run. You load the
data set using a batch program that writes the records sequentially. An example of
this is shown in Figure 43.
//BDAM     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=A
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=CICSTS13.CICS.bdam.user.file.init,DISP=SHR 1
//SYSUT2   DD DSN=CICSTS13.CICS.bdam.user.file,DISP=(,CATLG), 2
//            SPACE=(TRK,(1,1)),UNIT=3380,VOL=SER=volid,
//            DCB=(RECFM=F,LRECL=80,BLKSIZE=80,DSORG=DA) 3
Figure 43. Sample JCL to create and load a BDAM data set
Notes:
1 The input data set (called SYSUT1 in this example) should be physically
sequential and have attributes that are compatible with the DCB for the output data
set (called SYSUT2 in this example; see note 3). In particular:
v If RECFM=F is specified for the output data set, then the input data set must
have RECFM=F or RECFM=FB specified, and the value of LRECL should be the
same for both data sets.
v If RECFM=V or RECFM=U is specified for the output data set, then the value of
LRECL for the input data set must not be greater than that specified on the
output data set.
2 When you create a data set, you define a data set name (DSNAME) of up to 44
characters. This data set name uniquely identifies the data set to your MVS system.
3 The DCB parameter for the output data set should specify the following:
v DSORG=DA. This identifies the data set as a BDAM data set.
These options are specified on the DFHFCT TYPE=FILE definition. The CICS
Resource Definition Guide gives information about defining files using DFHFCT
TYPE=FILE options. A data set created by this example, and loaded with data such
as that shown in Figure 44, would have the following attributes specified in its FCT
entry:
v BLKSIZE=80
v LRECL=40
v RECFORM=(FIXED BLOCKED)
v KEYLEN=8
A file is identified by its file name, which you specify when you define the file. CICS
uses this name when an application program, or the master terminal operator using
the CEMT command, refers to the associated data set.
Each file must also be associated with its data set in one of the following ways:
v Using JCL in the startup job stream
v Using the DSNAME and DISP parameters of the FILE resource definitions
v Using dynamic allocation with CEMT
v Using dynamic allocation with an application program
Using JCL
You can define the data set in a DD statement in the JCL of the CICS startup job.
The DD name must be the same as the file name that refers to the data set. For
example, the following DD statements would correspond to file definitions for the file
names VSAM1A and BDAMFILE:
//VSAM1A DD DSN=CICSTS13.CICS.vsam.user.file,DISP=OLD
//BDAMFILE DD DSN=CICSTS13.CICS.bdam.user.file,DISP=SHR
If you define a data set to CICS in this way it is allocated to CICS by MVS at CICS
startup time, and it normally remains allocated until the end of the CICS run. Also,
the physical data set is associated with the installed file definition throughout the
CICS run.
If you use JCL to define a user data set to the CICS system, the DD statement
must not include the FREE=CLOSE operand.
If you use the RLS=CR or RLS=NRI option on your DD statement, it will be ignored.
The access mode for the file (RLS or non-RLS) and any read integrity options must
be specified in the file definition.
When you are running CICS with XRF, you must specify DISP=SHR for data sets
defined in JCL, so that the alternate CICS region can start while the active CICS
region’s job is also in progress.
If you use DSNAME and DISP on the file definition, CICS allocates the data set
dynamically, at the time the first file referencing that data set is opened (that is
immediately before the file is opened). At this stage, CICS associates the file name
with the data set.
For information about using the DSNAME and DISP parameters, see the CICS
Resource Definition Guide.
Using dynamic allocation with CEMT
When you use the CEMT SET FILE command with the DSNAME and DISP options,
CICS allocates the data set as part of OPEN
processing as described above. The data set is automatically deallocated when the
last file entry associated with the data set is closed. Before you can dynamically
allocate a file using the CEMT command, the file status must be CLOSED, and also
be DISABLED or UNENABLED.
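For example (the file and data set names are illustrative), you might allocate the data set and then open the file with:
CEMT SET FILE(VSAM1A) DSNAME(CICSTS13.CICS.VSAM.USER.FILE) SHARE
CEMT SET FILE(VSAM1A) OPEN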
This method of defining the data set to CICS allows a file definition to be associated
with different data sets at different times. Alternatively, you can close the file and
deallocate the data set and then reallocate and open the same file with a different
DISP setting. For example, you could do this to enable the physical data set to be
shared with a batch program, which reads the data set.
For information about the CEMT SET command, see the CICS Supplied
Transactions manual.
Using dynamic allocation with an application program
An application program can specify the DSNAME and DISP values on an EXEC
CICS SET FILE command. The data set is then dynamically allocated in the same
way as if you used the CEMT master terminal command, but only if the file status
is CLOSED, and also DISABLED or UNENABLED. The file must be closed when
you issue a command to change file attributes.
For programming information about the EXEC CICS SET command, see the CICS
System Programming Reference manual.
Do not use the CICS dynamic allocation transaction, ADYN, which invokes the
sample CICS utility program, DFH99, for dynamic allocation of VSAM and BDAM
user files. Use of the ADYN transaction may conflict with the dynamic allocation
methods used within CICS file control, and can give unpredictable results.
Restrict the use of the ADYN transaction to those data sets not managed by CICS
file control, such as auxiliary trace and CICS transaction dump data sets.
For information about the CICS samples, see the CICS 4.1 Sample Applications
Guide.
You may need to access a single VSAM data set either through the base or through
one or more paths for different access requests. In this case, CICS uses a separate
file definition (that is, a separate file), for each of the access routes. Each file
definition must be associated with the corresponding data set name (a path is also
assigned a data set name). Each file must be open before CICS can access the file
using the attributes in its installed file definition. This is because opening one file for
a data set that is logically defined as two or more files with different attributes does
not mean that the data set is then available for all access routes.
CICS permits more than one file definition to be associated with the same physical
data set name. For example, you may want to define files with different processing
attributes that refer to the same data set.
CICS allows or denies access to data in a file, depending on whether the state of
the file is ENABLED. An enabled file that is closed is opened by CICS
automatically when the first access request is made. The file remains open until an
explicit CLOSE request or until the end of the CICS job.
You can also open a file explicitly by using either of the commands:
CEMT SET FILE(filename) OPEN
EXEC CICS SET FILE(filename) OPEN
When you use one of these commands, the file is opened irrespective of whether
its state is enabled or disabled. You may choose this method to avoid the overhead
associated with opening the file being borne by the first transaction to access the
file.
You can also specify that you want CICS to open a file immediately after
initialization by specifying the RDO OPENTIME(STARTUP) attribute (or the
FILSTAT=OPENED parameter in the DFHFCT macro). If you specify that you want
CICS to open the file after startup, and if the file status is ENABLED or DISABLED,
the CICS file utility transaction CSFU opens the file. (CSFU does not open files that
are defined as UNENABLED: the status of these remains CLOSED, UNENABLED.)
CSFU is initiated automatically, immediately before the completion of CICS
initialization. CICS opens each file with a separate OPEN request. If a user
transaction starts while CSFU is still running, it can reference and open a file that
CSFU has not yet opened; it does not have to wait for CSFU to finish.
The file is closed immediately if there are no transactions using the file at the time
of the request. The file is also disabled as part of the close operation, this form of
disablement showing as UNENABLED on a CEMT display. This prevents
subsequent requests to access the file implicitly reopening it.
If there are users at the time of the close request, the file is not closed immediately.
CICS waits for all current users to complete their use of the file. The file is placed in
an UNENABLING state to deny access to new users but allow existing users to
complete their use of the file. When the last user has finished with the file, the file is
closed and UNENABLED. If a transaction has made recoverable changes to a file
and then suffered a failure during syncpoint, the unit of work is shunted, and the file
can be closed at this point.
Any transactions that are current users of the file are abended and allowed to back
out any changes as necessary, and the file is then closed and UNENABLED. A file
UNENABLED as a result of a CLOSE request can be reenabled subsequently if an
explicit OPEN request is made.
Note: Closing a file using the FORCE option causes tasks of any current users of
the file to be terminated immediately by the CICS task FORCEPURGE
mechanism. Data integrity is not guaranteed with this mechanism. In some
extreme cases (for example, if an error occurs during backout processing),
CICS might terminate abnormally. For this reason, closing files using the
FORCE option should be restricted to exceptional circumstances.
XRF considerations
For both VSAM and BDAM files:
For information about defining CICS data tables, see the CICS Resource Definition
Guide. For programming information about the file control commands of the
application programming interface, see the CICS Application Programming
Reference manual. CICS supports two types of data tables:
v CICS-maintained data tables that CICS keeps in synchronization with their
source data sets.
v User-maintained data tables that are completely detached from their source
data sets after being loaded.
For either type, a global user exit can be used to select which records from the
source data set should be included in the data table.
For programming interface information about global user exits, see the CICS
Customization Guide. For further information on CICS data tables, see the CICS
Shared Data Tables Guide.
A global user exit can be invoked for each record copied into the data table. This
copying is subject to any selection criteria of the user-written exit.
The commands used to open data tables, and the rules and options concerning
their implicit and immediate opening are the same as those described in “Opening
VSAM or BDAM files” on page 199.
For a user-maintained data table, the ACB for the source data set is closed when
loading has been completed. The data set is deallocated if it was originally
dynamically allocated and there are no other ACBs open for it.
The commands used to close data tables, and the rules concerning current users of
a data table are the same as those described in “Closing VSAM or BDAM files” on
page 200.
XRF considerations
After an XRF takeover, a data table must be reloaded from its source data set when
the data table is opened. For a CICS-maintained data table, the effect is to restore
the data table to its final state in the previous active CICS region, because CICS
keeps data tables and source data sets in step. For a user-maintained data table,
the relationship of the current contents of the source data set to the contents of the
data table when the previous active CICS region terminated is
| application-dependent.
|
| Coupling facility data tables
| Coupling facility data tables provide a method of file data sharing, using CICS file
| control, without the need for a file-owning region, and without the need for VSAM
| RLS support. CICS coupling facility data table support is designed to provide rapid
| sharing of working data within a sysplex, with update integrity. The data is held in a
| coupling facility, in a table that is similar in many ways to a shared user-maintained
| data table. This section describes how to define the resources required for coupling
| facility data tables in an MVS coupling facility resource management (CFRM) policy.
| Because read access and write access have similar performance, this form of table
| is particularly useful for scratchpad data. Typical uses might include sharing
| scratchpad data between CICS regions across a sysplex, or sharing of files for
| which changes do not have to be permanently saved. There are many different
| requirements for scratchpad data, and most of these can be implemented using
| coupling facility data tables. Coupling facility data tables are particularly useful for
| grouping data into different tables, where the items can be identified and retrieved
| by their keys. For example, you could use a field in a coupling facility data table to
| maintain the next free order number for use by an order processing application, or
| you could maintain a list of the numbers of lost credit cards in a coupling facility
| data table.
| A coupling facility data table pool is a coupling facility list structure, and access to it
| is provided by a coupling facility data table server. Within each MVS image, there
| must be one CFDT server for each CFDT pool accessed by CICS regions in the
| MVS image. The names of the servers are formed by adding the prefix DFHCF to
| the pool name, giving DFHCF.poolname.
| Coupling facility data table servers are protected against misuse by CICS regions
| that call them, thus ensuring system integrity. In particular, protection is provided to
| prevent a calling region from modifying sensitive parameters passed to authorized
| functions.
| Likewise, CICS is protected from any side effects if a coupling facility data table
| server fails. If a CICS region issues a file control request to a coupling facility data
| table server that has failed, the resulting MVS abend is trapped and returned to the
| application program as a SYSIDERR condition.
| CICS can optionally load the coupling facility data table automatically from a source
| VSAM (KSDS) data set when it is first opened. Unlike user-maintained data tables,
| with coupling facility data tables you can specify that there is no associated source
| data set, allowing you to create an empty CFDT.
| Your application programs have access to a coupling facility data table as soon as it
| is created, although there are some restrictions on the keys that can be accessed
| while data is being loaded.
| From the application point of view, a pool and its server are similar to a file-owning
| region, and the pool can contain any number of tables provided that each one has
| a unique table name.
| Before a coupling facility data table server can use its pool, the active CFRM policy
| must contain a definition of the list structure to be used for the pool. To achieve this,
| add a statement that specifies the list structure to a selected CFRM policy, and then
| activate the policy.
| The CFRM structure definition specifies the size of the list structure and the
| preference list of coupling facilities in which it can be stored. You create the name
| of the list structure for a coupling facility data table pool by adding the prefix
| DFHCFLS_ to the pool name, giving DFHCFLS_poolname.
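For example, a CFRM policy statement for a pool named PRODCFT1 might take the
following form. This is a sketch only: the pool name, sizes, and coupling facility
names are illustrative, and SIZE and INITSIZE are specified in units of 1KB.

STRUCTURE NAME(DFHCFLS_PRODCFT1)
          SIZE(10240)
          INITSIZE(5120)
          PREFLIST(FACIL01,FACIL02)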
| Activating a CFRM policy: When you have defined the list structure in a CFRM
| policy, activate the policy using the MVS command
| SETXCF START,POLICY,POLNAME=policyname,TYPE=CFRM. Note that activating a CFRM
| policy that contains a definition of a list structure does not create the structure. It is
| created the first time an attempt is made to connect to it, which occurs when the
| first coupling facility data table server that refers to the corresponding pool is
| started.
| When the server creates a list structure, it is allocated with an initial size, which can
| be increased up to a maximum size as specified in the CFRM policy. All structure
| sizes are rounded up to the next multiple of 256KB at allocation time. Provided that
| space is available in the coupling facility, you can dynamically expand a list
| structure from its initial size up to its maximum size, or contract it to free up
| coupling facility space for other purposes.
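For example, continuing the illustrative pool name used above, you could expand the
structure dynamically with an MVS command of the following form (the target size is
in units of 1KB, and must not exceed the maximum size defined in the CFRM policy):

SETXCF START,ALTER,STRNAME=DFHCFLS_PRODCFT1,SIZE=8192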
| Storage calculations
| Data entry size = (170 + (average record data size¹))
| + 5% extra for control information
|
| ¹ Average record data size must have a 2-byte prefix added
| and be rounded up to a multiple of 256 bytes.
|
| Total size = 200KB
| + (number of tables x 1KB)
| + (number of records in all tables x data entry size)
|
| The above calculation assumes that the structure is allocated at its maximum size.
| If it is allocated at less than its maximum size, the same amount of control
| information is still required, so the percentage of space occupied by control
| information is correspondingly increased. For example, if a structure is allocated at
| one third of its maximum size, the overhead for control information increases to
| around fifteen per cent.
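As a rough worked example (all figures are illustrative, not recommendations): for
records with an average data size of 500 bytes, adding the 2-byte prefix gives 502
bytes, which rounds up to 512 bytes. The data entry size is therefore (170 + 512)
plus 5%, approximately 716 bytes. For one table of 10000 records, the total is
approximately 200KB + 1KB + (10000 x 716 bytes), that is, about 7200KB, which is
rounded up to 7424KB (the next multiple of 256KB) at allocation time.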
| For information about the reserved space parameters you can use to enable the
| server to avoid a structure full condition, see “Reserved space parameters” on
| page 391 and “Avoiding structure full conditions” on page 392.
| You can create the DFHDBFK data set by running an IDCAMS job, an example of
| which is shown in Figure 46. You can use this job to load some IMS commands, or
| you can use the maintenance function within the CDBM transaction.
|
//DBFKJOB JOB 'accounting information',name,MSGCLASS=A
//*
//DBFKDEF EXEC PGM=IDCAMS,REGION=1M
//SYSPRINT DD SYSOUT=*
//AMSDUMP DD SYSOUT=*
//SYSIN DD *
DELETE CICSTS13.CICS.DFHDBFK
SET MAXCC=0
DEFINE CLUSTER ( -
NAME( CICSTS13.CICS.DFHDBFK ) -
INDEXED -
RECORDS(100 20) -
KEYS(22,0) -
RECORDSIZE(1428 1428) -
) -
INDEX ( -
NAME( CICSTS13.CICS.DFHDBFK.INDEX ) -
CONTROLINTERVALSIZE(512) -
) -
DATA ( -
NAME( CICSTS13.CICS.DFHDBFK.DATA ) -
CONTROLINTERVALSIZE(2048) -
)
/*
//* The next two job steps are optional.
//*
//DBFKINID EXEC PGM=IDCAMS,REGION=1M
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE CICSTS13.CICS.DBFKINIT
/*
//DBFKINIF EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=A
//SYSUT2 DD DSN=CICSTS13.CICS.DBFKINIT,DISP=(NEW,CATLG),
// UNIT=dbfkunit,VOL=SER=dbfkvol,SPACE=(TRK,(1,1)),
// DCB=(RECFM=FB,LRECL=40,BLKSIZE=6160)
//* Place the definitions you want to load after SYSUT1. For example:
//SYSUT1 DD *
SAMPLE DIS DB DI21PART
SAMPLE STA DB DI21PART
SAMPLE STO DB DI21PART
/*
//SYSIN DD *
GENERATE MAXFLDS=1
RECORD FIELD=(40)
/*
Figure 46. Sample job to define and initialize the DFHDBFK data set
|
| Job control statements for CICS execution
| If you define the DFHDBFK data set using the sample JCL shown in Figure 46 on
| page 207, the data definition statement for the CICS execution is as follows:
| //DFHDBFK DD DSN=CICSTS13.CICS.DFHDBFK,DISP=SHR
| Alternatively, if you want to use dynamic file allocation, add the fully-qualified data
| set name to the DFHDBFK file resource definition.
|
| Record layout in the CDBM GROUP command file
| Each record in the DFHDBFK file may be up to 1428 characters long, as follows:
| Field 1, length 12: Group
|        A 12-character field containing your chosen name for this group. The
|        acceptable characters are A-Z 0-9 $ @ and #. Leading or embedded
|        blanks are not allowed, but trailing blanks are acceptable.
| Field 2, length 10: IMS Command
|        A 10-character field containing any of the IMS command verbs that are
|        valid for CDBM (see the CICS IMS Database Control Guide for details).
|        Leading or embedded blanks are not allowed, but trailing blanks are
|        acceptable.
|        Note: The validity of the IMS command verb is not checked by CDBM.
|        Invalid values will be reported by IMS when the command is attempted.
|
Chapter 20. Defining the CMAC messages data set
This chapter describes the VSAM key-sequenced data set (KSDS) called
DFHCMACD. DFHCMACD is used by the CMAC transaction to provide online
descriptions of the CICS messages and codes.
You can create the DFHCMACD data set and load it with the CICS-supplied
messages and codes data by running the DFHCMACI job. Some IBM-supplied
service may include changes to CICS messages and codes, and associated
changes to the DFHCMACD data set. You can apply such service changes to the
DFHCMACD data set by running the DFHCMACU job.
For more information about the DFHCMACI and DFHCMACU jobs, see the CICS
Transaction Server for OS/390 Installation Guide.
Notes:
1. The DFHCMACD data set is accessed by the file CMAC, managed by CICS
File Control. You must create a definition for this file in the CSD or FCT. The
CICS-supplied definition for the CMAC file and other resources needed by the
CICS messages facility are in the CSD group DFHCMAC. The CICS IVPs have
a DD statement for the CMAC file (an example is shown after these notes), but for
dynamic allocation you should copy
the supplied resource definition for the CMAC file and add the DSNAME option.
2. To use the CICS messages facility in your CICS region, you must create your
own CSD group list to include the CICS-supplied group list DFHLIST, the
DFHCMAC group for the CICS messages facility, and any other groups of
resources that your CICS region needs. You must specify this group list by
using the system initialization parameter GRPLIST when you start up your CICS
region.
3. You should specify the DFHCMAC group of resources for the CICS messages
facility only in those CICS regions that need to use the facility; for example on
some terminal-owning regions, but perhaps not on data-owning regions.
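For example, if you allocate the CMAC file by a JCL DD statement rather than
dynamically (see note 1), the statement might take the following form. The data set
name shown is an assumption, based on the CICSTS13.CICS high-level qualifier used in
the other sample jobs in this book:

//CMAC     DD DSN=CICSTS13.CICS.DFHCMACD,DISP=SHR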
Job control statements to define and load the messages data set
Before its first use, the DFHCMACD data set should be defined and loaded as a
VSAM key sequenced data set (KSDS). The sample job in Figure 47 on page 212
shows you how to do this.
Note: You can define and load the DFHCMACD data set by running the
DFHCMACI job.
Figure 47. Sample job to define and initialize the CMAC data set
You can also specify other system initialization parameters, which cannot be coded
in the SIT. You specify which SIT you want, and other system initialization
parameters (with a few exceptions), in any of three ways:
1. In the PARM parameter of the EXEC PGM=DFHSIP statement
2. In the SYSIN data set defined in the startup job stream
3. Through the system operator’s console
You can also use these methods of input to the system initialization process to
override most of the system initialization parameters assembled in the SIT.
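For example, a startup job step might name a pregenerated SIT in the PARM string and
supply further overrides in a SYSIN data set. The following sketch is for illustration
only; the SIT suffix, applid, and group list are examples, and the SYSIN parameters
are terminated by .END:

//CICS     EXEC PGM=DFHSIP,PARM='SIT=6$,SYSIN'
//SYSIN    DD *
* SIT overrides read from the SYSIN data set
APPLID=CICSHTH1,
GRPLIST=DFHLIST,
AUXTR=ON
.END
/*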
The syntax of the system initialization parameters that can be coded in the DFHSIT
macro is listed in Table 27 on page 216. Except for those parameters marked “SIT
macro only”, all the system initialization parameters can be provided at run time,
although there are restrictions in some cases. The restrictions are explained at the
end of the description of the system initialization parameter to which they apply.
See the CHKSTRM parameter on page 236 for an example of such a restriction.
There are some other CICS system initialization parameters (and options of the
parameters in Table 27 on page 216) that you cannot define in the DFHSIT macro.
(See “Initialization parameters that cannot be coded in the DFHSIT macro” on
page 226.) The parameters that you cannot define in the DFHSIT macro are shown
in Figure 54 on page 227.
For a list of all the system initialization keywords grouped by their functional area,
see “Appendix. System initialization parameters grouped by functional area” on
page 417. This ensures that you do not miss coding an important parameter relating
to a particular CICS function. For details of how to code a parameter, you still have
to refer to the parameter descriptions that are listed alphabetically in this chapter.
To avoid generating specific system initialization tables for each CICS region, a
simple solution is to let CICS load the default, unsuffixed table (DFHSIT) at start-up,
and supply the system initialization parameters for each region in a SYSIN data set.
For more information about the source of the default system initialization table, see
“DFHSIT, the default system initialization table” on page 220.
You must terminate your macro parameters with the following END
statement.
END DFHSITBA
Figures 48 through 53. DFHSIT, the pregenerated default system initialization table (parts 1 to 6 of 9)
The system initialization parameters that you cannot define in the DFHSIT macro
are shown in Figure 54 on page 227.
Figure 54. System initialization parameters you cannot code in the DFHSIT macro
Notes:
1 When the DFHSIT keyword is specified with more than one value, these values
must be enclosed within parentheses: for example, BMS=(FULL,COLD).
2 For keywords with the suffix option, if you code YES in the SIT, an unsuffixed
version of the table or program is loaded. For example, TCT=YES results in a table
called DFHTCT being loaded. You can also select an unsuffixed module or table at
CICS startup by specifying keyword=, or keyword=YES. For example, if you code:
FCT=, or FCT=YES
blanks are appended to DFHFCT, and these unsuffixed names are used during
initialization.
3 The Suffix column indicates whether you can code a suffix. (xx indicates that a
suffix can be coded.)
4 A suffix can be any 1 or 2 characters, but you must not use DY, and you cannot use
NO as a suffix.
If you code a suffix, a table or program with that suffix appended to the standard
name is loaded.
5 The COLD start column indicates whether the resource can be forced to start
COLD. (COLD indicates that the resource can be cold started individually).
TST and cold start: Ensure that you cold start temporary storage or the whole
system if you make any change to the TST.
For more information about CICS table and program selection, see “Selecting
versions of CICS programs and tables” on page 317.
6 If you code MCT=NO, the CICS monitoring domain dynamically builds a default
monitoring control table. This ensures that default monitoring control table entries
are always available for use when monitoring is on and a monitoring class is active.
Default values are underscored; for example, TYPE=CSECT. This notation applies
to the SIT macro parameters only.
TYPE={CSECT|DSECT}
specifies the type of SIT to be generated.
CSECT
A regular control section that is normally used.
DSECT
A dummy control section.
ADI={30|number}
specifies the alternate delay interval in seconds for an alternate CICS region
when you are running CICS with XRF. The minimum delay that you can specify
is 5 seconds. This is the time that must elapse between the (apparent) loss of
the surveillance signal in the active CICS region, and any reaction by the
alternate CICS region. The corresponding parameter for the active is PDI. ADI
and PDI need not have the same value.
Note: You must give careful consideration to the values you specify for the
parameters ADI and JESDI so that they do not conflict with your
installation’s policy on PR/SM RESETTIME and the XCF INTERVAL and
OPNOTIFY intervals. You should ensure that the sum of the interval you
specify for ADI plus JESDI exceeds the interval specified by the XCF
INTERVAL and the PR/SM policy interval RESETTIME.
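For example (values chosen purely for illustration): if XCF INTERVAL and OPNOTIFY
are 85 seconds and the PR/SM RESETTIME is 90 seconds, specifying ADI=35 and
JESDI=60 gives a sum of 95 seconds, which exceeds both intervals.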
| AICONS={NO|YES|AUTO}
| specifies whether you want autoinstall support for consoles. You can also set
| the state of autoinstall support for consoles dynamically using the CEMT, or
| EXEC CICS, SET AUTOINSTALL command.
| NO This is the default, and specifies that the CICS region does not
| support autoinstall for consoles.
| YES Specifies that console autoinstall is active and CICS is to call the
| autoinstall control program, as part of the autoinstall process, when an
| undefined console issues an MVS MODIFY command to CICS.
| AUTO Specifies that console autoinstall is active but CICS is not to call the
| autoinstall control program when an undefined console issues an MVS
| MODIFY command to CICS. CICS is to autoinstall undefined consoles
| automatically without any input from the autoinstall control program. The
| 4-character termid required for the console’s TCT entry is generated by
| CICS, beginning with a ¬ (logical not) symbol.
| See the CICS Customization Guide for information about writing an autoinstall
| control program that supports consoles.
Note: You can specify only one user-replaceable program on the AIEXIT
parameter. Which of the CICS-supplied programs (or customized
versions thereof) that you choose depends on what combination of
resources you need to autoinstall.
Note: The AILDELAY parameter does not apply to the following types of
autoinstalled APPC connection, which are not deleted:
v Sync level 2-capable connections (for example, CICS-to-CICS
connections)
v Sync level 1-only, limited resource connections installed on a CICS
that is a member of a generic resource group
hhmmss
A 1- to 6-digit number. The default is 0. For non-LU6.2 terminals and
LU6.2 single-session connections installed via a CINIT, 0 means that
the terminal entry is deleted as soon as the session is ended. For
LU6.2 connections installed via a BIND, 0 means that the connection is
deleted as soon as all sessions are ended, but is reusable if a new
BIND occurs before the deletion starts.
Note: This value does not limit the total number of terminals that can be
autoinstalled. If you have a large number of terminals autoinstalled,
shutdown can fail due to the MXT system initialization parameter being
reached or CICS becoming short on storage. For information about
preventing this possible cause of shutdown failure, see the CICS
Performance Guide.
AIRDELAY={700|hhmmss}
specifies the delay period that elapses after an emergency restart before
autoinstalled terminal and APPC connection entries that are not in session are
deleted. The AIRDELAY parameter also applies when CEMT SET VTAM OPEN
is issued after a VTAM abend and PSTYPE=MNPS is coded. This causes
autoinstalled resources to be deleted, if the session was not restored and has
not been used since the ACB was opened.
Note: The AIRDELAY parameter does not apply to the following types of
autoinstalled APPC connection, which are always written to the CICS
global catalog and recovered during a warm or emergency start:
v Sync level 2-capable connections (for example, CICS-to-CICS
connections)
v Sync level 1-capable, limited resource connections installed on a
CICS that is a member of a generic resource group
hhmmss
A 1- to 6-digit number. If you leave out the leading zeros, they are
supplied. The default is 700, meaning a delay of 7 minutes. A value of 0
means that autoinstalled definitions are not written to the global catalog
and therefore are not restored at an emergency restart.
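For example (an illustrative value), to keep autoinstalled entries for one hour after
an emergency restart, you could code the following override, which CICS interprets in
hhmmss form as 010000:

AIRDELAY=10000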
For guidance about the performance implications of setting different
AIRDELAY values, see the CICS Performance Guide.
Note: If you are running CICS with XRF, set the same value on the AIRDELAY
parameter for both the active and the alternate CICS regions. It is
particularly important, if you want autoinstall sessions to be reestablished
after a takeover, that you avoid coding a zero on this parameter for
either the active or the alternate CICS regions.
Note: If you specify AKPFREQ=0, no activity keypoints are written, with the
following consequences:
v The CICS system log automatic deletion mechanism will not work so
efficiently in this situation. The average system log occupancy will therefore
increase, perhaps dramatically for some users. Without
efficient automatic deletion, the log stream will spill onto secondary
storage, and from there onto tertiary storage (unless you control the
size of the log stream yourself).
v Emergency restarts are not prevented, but the absence of activity
keypoints on the system log affects the performance of emergency
restarts because CICS has to read backwards through the entire log
stream.
v Backout-while-open (BWO) support is seriously affected, because
without activity keypointing, tie-up records are not written to the
forward recovery logs and the data set recovery point is not updated.
Therefore, for forward recovery to take place, all forward recovery logs
must be kept since the data set was first opened for update after the
last image copy. For more information about the effect of AKPFREQ=0
on BWO, see “Effect of disabling activity keypointing” on page 103.
APPLID={DBDCCICS|applid}
specifies the VTAM application identifier for this CICS region.
applid This name, 1 through 8 characters, identifies the CICS region in the
| VTAM network. It must be unique if running in a sysplex. It must match
the name field specified in the APPL statement of the VTAM VBUILD
TYPE=APPL definition. For an example, see the CICS Transaction
Server for OS/390 Installation Guide.
When you define this CICS region to another CICS region, in a
CONNECTION definition you specify the applid as the NETNAME.
When sharing a DL/I database with a batch region, the applid is used
by the batch region to identify the CICS region.
If the CICS region uses XRF, the form of the APPLID parameter is:
APPLID=(generic_applid,specific_applid)
specifies the generic and specific XRF applids for the CICS region.
Both applids must be 1 through 8 characters.
generic_applid
The generic applid for both the active and the alternate CICS regions.
Therefore, you must specify the same name for
generic_applid on the APPLID system initialization parameter for both
CICS regions. Because IRC uses generic_applid to identify the CICS
regions, there can be no IRC connection for an alternate CICS region
until takeover has occurred and the alternate CICS region becomes the
active CICS region.
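For example (region names are illustrative only), an XRF pair might be coded as:

APPLID=(CICSHTH1,CICSHTA1)    for the active CICS region
APPLID=(CICSHTH1,CICSHTB1)    for the alternate CICS region

Both regions specify the same generic applid, CICSHTH1, but each has its own
specific applid.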
The interval specified is the delay before the CXRE transaction runs. CXRE
tries to reacquire any XRF-capable (class 1) terminal session that failed to get a
backup session, or failed the switch for some other reason. CXRE tries to
reacquire other terminals that were in session at the time of the takeover.
Note that the same delay interval applies to the connection of terminals with
AUTOCONNECT(YES) specified in the TYPETERM definition, at a warm or
emergency restart, whether or not you have coded XRF=YES.
AUXTR={OFF|ON}
specifies whether the auxiliary trace destination is to be activated at system
initialization. This parameter controls whether any of the three types of CICS
trace entry are written to the auxiliary trace data set. The three types are: CICS
system trace (see the SYSTR parameter), user trace (see the USERTR
parameter), and exception trace entries (that are always made and are not
controlled by a system initialization parameter).
OFF Do not activate auxiliary trace.
ON Activate auxiliary trace.
For details of internal tracing in main storage, see the INTTR parameter on
page 264.
AUXTRSW={NO|ALL|NEXT}
specifies whether you want the auxiliary trace autoswitch facility.
NO Disables the autoswitch facility.
NEXT Enables the autoswitch facility to switch to the next data set at end of
file of the first data set used for auxiliary trace. Coding NEXT permits
one switch only, and when the second data set is full, auxiliary trace is
switched off.
You need full or standard function BMS, if you are using XRF and have
specified MESSAGE for RECOVNOTIFY on any of your TYPETERM definitions.
MINIMUM
The minimum version of BMS is included.
STANDARD
The standard version of BMS is included.
FULL The full version of BMS is included. This is the default in the SIT.
COLD
CICS deletes delayed messages from temporary storage, and destroys
their interval control elements (ICEs). COLD forces the deletion of
messages regardless of the value in effect for START. If COLD is not
specified, the availability of messages will depend on the values in
effect for the START and TS parameters.
UNALIGN
specifies that all BMS maps assembled before CICS/OS/VS Version 1
Release 6 are unaligned. Results are unpredictable if the stated
alignment does not match the actual alignment.
ALIGN
All BMS maps assembled before CICS/OS/VS Version 1 Release 6 are
aligned.
DDS BMS is to load suffixed versions of map sets and partition sets. BMS
first tries to load a version that has the alternate suffix (if the transaction
uses the alternate screen size). If the load fails, BMS tries to load a
version that has the default map suffix. If this fails too, BMS tries to
load the unsuffixed version. DDS, which stands for “device dependent
suffixing”, is the default.
You need to use map suffixes only if the same transaction is to be run
on terminals with different characteristics (in particular, different screen
sizes). If you do not use suffixed versions of map sets and partition
sets, CICS need not test for them.
NODDS
BMS is not to load suffixed versions of map sets and partition sets.
Specifying NODDS avoids the search for suffixed versions, saving
processor time.
CDSASZE={0K|number}
specifies the size of the CDSA. The default size is 0, indicating that the DSA
size can change dynamically. A non-zero value indicates that the DSA size is
fixed.
number
specify number as an amount of storage in the range 0 to 16777215
bytes in multiples of 262144 bytes (256KB). If the size specified is not a
multiple of 256KB, CICS rounds the value up to the next multiple.
You can specify number in bytes (for example, 4194304), or as a whole
number of kilobytes (for example, 4096K), or a whole number of
megabytes (for example, 4M).
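For example (an illustrative value), CDSASZE=300K is not a multiple of 256KB, so
CICS rounds it up to 512KB.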
You can also use the CICS-supplied transaction, CSFE, to switch terminal
storage-violation checking on and off.
For information about checking for storage violations, see the CICS Problem
Determination Guide.
Restrictions
You can specify the CHKSTRM parameter in PARM, SYSIN, or CONSOLE only.
| CHKSTSK={CURRENT|NONE}
specifies that task storage-violation checking at startup is to be activated or
deactivated.
CURRENT
All storage areas on the transaction storage chain for the current task
only are to be checked.
NONE Task storage-violation checking is to be deactivated.
You can also use the CICS-supplied transaction, CSFE, to switch task
storage-violation checking on and off.
For information about checking for storage violations, see the CICS Problem
Determination Guide.
Restrictions
You can specify the CHKSTSK parameter in PARM, SYSIN, or CONSOLE only.
CICSSVC={216|number}
specifies the number that you have assigned to the CICS type 3 SVC. The
default number is 216.
A CICS type 3 SVC with the specified (or default) number must be installed in
the LPA. For information about installing the CICS SVC, see the CICS
Transaction Server for OS/390 Installation Guide.
CICS checks if the SVC number supplied corresponds to the correct level of the
CICS Type 3 SVC module, DFHCSVC. If the SVC number does not correspond
to the correct level of DFHCSVC, the following can happen, depending on the
value specified for the PARMERR system initialization parameter:
v CICS is terminated with a system dump
v The operator is allowed to retry using a different SVC number
For details of the PARMERR system initialization parameter, see page 276.
CLSDSTP={NOTIFY|NONOTIFY}
specifies the notification required for an EXEC CICS ISSUE PASS command.
This parameter is applicable to both autoinstalled and non-autoinstalled
terminals. You can use the notification in a user-written node error program to
reestablish the CICS session when a VTAM CLSDST PASS request resulting
from an EXEC CICS ISSUE PASS command fails. For more information about
the EXEC CICS ISSUE PASS command, see the CICS Application
Programming Reference manual.
NOTIFY
CICS requests notification from VTAM when the EXEC CICS ISSUE
PASS command is executed.
NONOTIFY
CICS does not request notification from VTAM.
For information about coding the macros for this table, see the CICS Resource
Definition Guide.
CMDPROT={YES|NO}
specifies that you want to allow, or inhibit, CICS validation of start addresses of
storage referenced as output parameters on EXEC CICS commands.
YES CICS validates the initial byte at the start of any storage that is
referenced as an output parameter on EXEC CICS commands to
ensure that the application program has write access to the storage.
This ensures that CICS does not overwrite storage on behalf of the
application program when the program itself cannot do so. If CICS
detects that an application program has asked CICS to write into an
area to which the application does not have addressability, CICS
abends the task with an AEYD abend.
The level of protection against bad addresses depends on the level of
storage protection in the CICS environment. The various levels of
protection provided when you specify CMDPROT=YES are shown in
Table 30.
NO CICS does not perform any validation of addresses of the storage
referenced by EXEC CICS commands. This means that an application
program could cause CICS to overwrite storage to which the application
program itself does not have write access.
Table 30. Levels of protection provided by CICS validation of application-supplied addresses
   Read-only storage (RENTPGM=PROTECT)
      Execution key of affected programs: CICS-key and user-key
      Types of storage referenced by applications that cause AEYD abends:
         CICS key 0 read-only storage (RDSA and ERDSA)
   Subsystem storage protection (STGPROT=YES)
      Execution key of affected programs: User-key
      Types of storage referenced by applications that cause AEYD abends:
         All CICS-key storage (CDSA and ECDSA)
   Transaction isolation (TRANISO=YES)
      Execution key of affected programs: User-key and ISOLATE(YES)
      Types of storage referenced by applications that cause AEYD abends:
         Task-lifetime storage of all other transactions
   Transaction isolation (TRANISO=YES)
      Execution key of affected programs: User-key and ISOLATE(NO)
      Types of storage referenced by applications that cause AEYD abends:
         Task-lifetime storage of all except other user-key and ISOLATE(NO) transactions
   Base CICS (all storage is CICS key 8 storage; RENTPGM=NOPROTECT, STGPROT=NO, and TRANISO=NO)
      Execution key of affected programs: CICS-key and user-key
      Types of storage referenced by applications that cause AEYD abends:
         MVS storage only
CMDSEC={ASIS|ALWAYS}
specifies whether or not you want CICS to honor the CMDSEC option specified
on a transaction’s resource definition.
ASIS means that CICS honors the CMDSEC option defined in a transaction’s
resource definition. CICS calls its command security checking routine
only when CMDSEC(YES) is specified in a transaction resource
definition.
ALWAYS
CICS overrides the CMDSEC option, and always calls its command
security checking routine to issue the appropriate call to the SAF
interface.
Restrictions
You can specify the CMDSEC parameter in the SIT, PARM, or SYSIN only.
CONFDATA={SHOW|HIDETC}
specifies whether CICS is to suppress (hide) user data that might otherwise
appear in CICS trace entries or in dumps that contain the VTAM receive any
input area (RAIA). This option applies to initial input data received on a VTAM
RECEIVE ANY operation, the initial input data received on an MRO link, and
FEPI screens and RPLAREAs.
This option also applies to the CICS client use of a Virtual Terminal. Data is
traced before and after codepage conversion, and is suppressed if CONFDATA=HIDETC is
used in combination with CONFDATA(YES) in the transaction definition.
SHOW
Data suppression is not in effect. User data is traced regardless of the
CONFDATA option specified in transaction resource definitions. This
option overrides the CONFDATA option in transaction resource
definitions.
HIDETC
CICS is to ‘hide’ user data from CICS trace entries. It also indicates that
VTAM RAIAs are to be suppressed from CICS dumps. The action
actually taken by CICS is subject to the individual CONFDATA attribute
on the transaction resource definition (see Table 31 on page 239).
If you specify CONFDATA=HIDETC, CICS processes VTAM, MRO, and
FEPI user data as follows:
v VTAM: CICS clears the VTAM RAIA containing initial input as soon
as it has been processed, and before the target transaction has been
identified.
The normal trace entries (FC90 and FC91) are created on
completion of the RECEIVE ANY operation with the text “SUPPRESSED
DUE TO CONFDATA=HIDETC IN SIT” replacing all the user data except
the first 4 bytes of normal data, or the first 8 bytes of function
management headers (FMHs).
CICS then identifies the target transaction for the data. If the
transaction definition specifies CONFDATA(NO), CICS traces the
user data that it suppressed from the FC90 trace in the trace entry
AP FC92. This trace entry is not created if the transaction is defined
with CONFDATA(YES).
v MRO: CICS does not trace the initial input received on an MRO link.
Modified data: Because CICS waits until the transaction has been identified before
determining the CONFDATA option, the VTAM or MRO data may already have been modified
(for example, translated to uppercase).
The interaction between the CONFDATA system initialization parameter and the
CONFDATA attribute on the transaction resource definition is shown in Table 31.
Table 31. Effect of CONFDATA system initialization and transaction definition parameters
   CONFDATA(NO) on the transaction definition:
      CONFDATA=SHOW:   Data is not suppressed.
      CONFDATA=HIDETC: VTAM RAIAs are cleared. Initial input of VTAM and MRO data is
                       suppressed from the normal FC90, FC91, DD16, DD23, and DD25
                       trace entries. For FC90 and DD16 traces only, suppressed user
                       data is traced separately in an FC92 trace entry. FEPI screens
                       and RPLAREAs are traced as normal.
   CONFDATA(YES) on the transaction definition:
      CONFDATA=SHOW:   Data is not suppressed.
      CONFDATA=HIDETC: VTAM RAIAs are cleared. All VTAM, MRO, and FEPI user data is
                       suppressed from trace entries.
You cannot modify the CONFDATA option while CICS is running. You must
restart CICS to make such a change.
Restrictions
You can specify the CONFDATA parameter in the SIT, PARM, and SYSIN only.
CONFTXT={NO|YES}
specifies whether CICS is to prevent VTAM from tracing user data.
NO CICS does not prevent VTAM from tracing user data.
Restrictions
You can specify the CONFTXT parameter in the SIT, PARM, and SYSIN only.
CSDACC={READWRITE|READONLY}
specifies the type of access to the CSD to be permitted to this CICS region.
Note that this parameter is effective only when you start CICS with a
START=COLD parameter. If you code START=AUTO, and CICS performs a
warm or emergency restart, the file resource definitions for the CSD are
recovered from the CICS global catalog. However, you can redefine the type of
access permitted to the CSD dynamically with a CEMT SET FILE, or an EXEC
CICS SET FILE, command.
READWRITE
Read/write access is allowed, permitting the full range of CEDA, CEDB,
and CEDC functions to be used.
READONLY
Read access only is allowed, limiting the CEDA and CEDB transactions
to only those functions that do not require write access.
CSDBKUP={STATIC|DYNAMIC}
specifies whether or not the CSD is eligible for BWO. If BWO is wanted, specify
CSDBKUP=DYNAMIC.
If you specify a value for CSDBUFND that is less than the required minimum
(the CSDSTRNO value plus 1), VSAM automatically changes the number of
buffers to the number of strings plus 1 when CICS issues the OPEN macro for
the CSD.
If you specify a value for CSDBUFNI that is less than the required minimum
(the CSDSTRNO value), VSAM automatically changes the number of buffers to
the number of strings when CICS issues the OPEN macro for the CSD.
After the CSD read completes, a shared lock remains held until
syncpoint. This guarantees that a CSD record read within an RDO task
Your CSD must be defined to support RLS access: the IMBED option must not
be specified, and recovery attributes must be defined in the VSAM catalog. The
CICS Transaction Server for OS/390 Installation Guide explains the data set
characteristics required to support RLS access. If your CSD does not meet
these requirements, it will fail to open.
If you specify both RLS and local shared resource (CSDLSRNO=number), RLS
takes precedence.
Note: If you define a recoverable CSD for RLS-mode access, you have to
quiesce all RLS activity against the CSD before you can update the CSD
using the batch utility program, DFHCSDUP. You can use the SET
DSNAME QUIESCE command to do this, to ensure that no CEDA,
CEDB, or CEDC transactions can run until you unquiesce the data set
on completion of the batch job.
CSDSTRNO={2|number}
specifies the number of concurrent requests that can be processed against the
CSD. When the number of requests reaches the STRNO value, CICS
automatically queues any additional requests until one of the active requests
terminates.
CICS requires two strings per CSD user, and you can increase the CSDSTRNO
value, in multiples of two, to allow more than one concurrent CEDA user.
See “Multiple users of the CSD within a CICS region (non-RLS)” on page 143
before you code this parameter.
For more information about the DAEOPTION option, see the CICS System
Programming Reference manual.
DATFORM={MMDDYY|DDMMYY|YYMMDD}
specifies the external date display standard that you want to use for CICS date
displays. An appropriate indicator setting is made in the CSA. It is examined by
CICS supplied system service programs that display a Gregorian date. CICS
maintains the date in the form 0CYYDDD in the CSA (where C=0 for years
19xx, 1 for years 20xx, and so on; YY=year of century; and DDD=day of year),
and converts it to the standard you specify for display.
The DATFORM option selects the order in which the date is to be displayed. It
does not select the format of the year. Both YY and YYYY formats are
displayed.
MMDDYY
The date is in the form of month-day-year, MMDDYY and MMDDYYYY.
DDMMYY
The date is in the form of day-month-year, DDMMYY and DDMMYYYY.
YYMMDD
The date is in the form of year-month-day, YYMMDD and YYYYMMDD.
Specifying DBCTLCON=YES means you don’t need to define the DBCTL attach
program in the CICS post-initialization program list table (PLT).
DCT=({YES|xx|NO})
specifies the destination control table suffix. (See page 227.) For information
about defining this table, see the CICS Resource Definition Guide.
This parameter is effective only on a COLD start. CICS does not load a DCT on
a warm or an emergency restart. All transient data destination definitions are
recovered from the global catalog and from information provided by the
recovery manager.
You can use a mixture of RDO and macro definitions for transient data, but you
are recommended to use RDO. This is because there are new attributes
available through RDO that are not available using the DFHDCT macro (for
example, the INDOUBT attributes, WAIT and WAITACTION).
During a cold start, any macro entries defined in the macro load table are
added to the transient data queue directory first. These are followed by RDO
entries using GRPLIST. Any RDO entries being installed during a cold start that
have the same name as entries already installed will be rejected. They can
safely be installed once CICS is fully initialized.
You can no longer use the COLD option on the DCT system initialization
parameter.
The specified userid must be defined to RACF if you are using external security
(that is, you have specified the system initialization parameter SEC=YES).
Restrictions
You can specify the DFLTUSER parameter in the SIT, PARM, or SYSIN only.
DIP={NO|YES}
specifies whether the batch data interchange program, DFHDIP, is to be
included. This supports the batch controller functions of the IBM 3790
Communication System and the IBM 3770 Data Communication System.
(Support is provided for the transmit, print, message, user, and dump data sets
of the 3790 system.) (For the effect of this parameter, see page 227.)
DISMACP={YES|NO}
specifies whether CICS is to disable any transaction that terminates abnormally
with an ASRD or ASRE abend (caused by a user program invoking a CICS
macro, or referencing the CSA, the TCA, or the DB2 RCT).
From the storage size that you specify on the DSALIM parameter, CICS
allocates the following dynamic storage areas:
The user DSA (UDSA)
The user-key storage area for all user-key task-lifetime storage below
the 16MB boundary.
Note: For more flexible control over when mass delete operations take
place, you can use a CEMT SET DELETSHIPPED or EXEC CICS
SET DELETSHIPPED command to reset the interval. (The revised
| Note: See also the DTRPGM parameter, used to name the dynamic routing
| program.
| DTRPGM={DFHDYP|program-name}
| specifies the name of the dynamic routing program to be used for dynamically
| routing:
| v Transactions initiated from user terminals
| v Transactions initiated by eligible terminal-related EXEC CICS START
| commands
| v Eligible program-link requests.
| Note: See also the DSRTPGM parameter, used to name the distributed routing
| program.
| DTRTRAN={CRTX|name|NO}
specifies the name of the transaction definition that you want CICS to use for
dynamic transaction routing. This is intended primarily for use in a CICS
terminal-owning region, although you can also use it in an application-owning region.
The transaction name is stored in the catalog for recovery during CICS restarts.
CRTX This is the default dynamic transaction definition. It is the name of the
CICS-supplied sample transaction resource definition provided in the
CSD group DFHISC.
name The name of your own dynamic transaction resource definition that you
want CICS to use for dynamic transaction routing.
NO The dynamic transaction routing program is not invoked when a
transaction definition cannot be found.
Note: This does not prevent the CICS kernel from taking SDUMPs.
For more information about SDUMPs, see “System dumps” on page 175.
DUMPDS={AUTO|A|B}
specifies the transaction dump data set that is to be opened during CICS
initialization.
AUTO For all emergency or warm starts, CICS opens the transaction dump
data set that was not in use when the previous CICS run terminated.
This information is obtained from the CICS local catalog.
If you specify AUTO, or let it default, code DD statements for both of
the transaction dump data sets, DFHDMPA and DFHDMPB, in your
CICS startup job stream.
A CICS opens transaction dump data set DFHDMPA.
B CICS opens transaction dump data set DFHDMPB.
For more information about transaction dump data sets, see page 177.
DURETRY={30|number-of-seconds|0}
specifies, in seconds, the total time that CICS is to continue trying to obtain a
system dump using the SDUMP macro. DURETRY allows you to control
whether, and for how long, CICS is to reissue the SDUMP macro if another
address space in the same MVS system is already taking an SDUMP when
CICS issues an SDUMP request.
In the event of an SDUMP failure, CICS responds, depending on the reason for
the failure, as follows:
v If MVS is already taking an SDUMP for another address space, and the
DURETRY parameter is nonzero, CICS issues an MVS STIMERM macro to
wait for five seconds, before retrying the SDUMP. CICS issues a message to
say that it is waiting for five seconds before retrying the SDUMP. After five
seconds CICS issues another message to say that it is retrying the SDUMP
request.
v If the SDUMP fails for any other reason, such as no SYS1.DUMP data sets
being available, or I/O errors preventing completion of the dump, CICS
issues a message to inform you that the SDUMP has failed, and to give the
reason why.
30 30 seconds allows CICS to retry up to 6 times (once every 5 seconds),
if the cause of failure is that another region is taking an SDUMP.
number-of-seconds
Code the total number of seconds (up to 32767) during which you want
CICS to continue retrying the SDUMP macro if the reason for failure is
that another region is taking an SDUMP. CICS retries the SDUMP, once
every five seconds, until successful or until retries have been made
over a period equal to or greater than the DURETRY value.
0 Code a zero value if you do not want CICS to retry the SDUMP macro.
ECDSASZE={0K|number}
specifies the size of the ECDSA. The default size is 0 indicating that the DSA
size can change dynamically. A non-zero value indicates that the DSA size is
fixed.
number
Specify number as an amount of storage in the range 0 to 1073741824
From the storage value that you specify on the EDSALIM parameter, CICS
allocates the following extended dynamic storage areas:
The extended user DSA (EUDSA)
The user-key storage area for all user-key task-lifetime storage above
the 16MB boundary.
The extended read-only DSA (ERDSA)
The key-0 storage area for all reentrant programs and tables above the
16MB boundary.
The extended shared DSA (ESDSA)
The user-key storage area for any non-reentrant user-key
RMODE(ANY) programs, and also for any storage obtained by
programs issuing CICS GETMAIN commands for storage above the
16MB boundary with the SHARED option.
The extended CICS DSA (ECDSA).
The CICS-key storage area for all non-reentrant CICS-key
RMODE(ANY) programs, all CICS-key task-lifetime storage above the
16MB boundary, and CICS control blocks that reside above the 16MB
boundary.
CICS allocates all the DSAs above the 16MB boundary in multiples of 1MB.
Restrictions
This parameter is effective only on a CICS cold or initial start. CICS does not
load an FCT on a warm or emergency restart, and all file resource definitions
are recovered from the global catalog.
For information about coding the macros for this table, see the CICS Resource
Definition Guide.
You can use a mixture of macro definitions and RDO definitions for files in your
CICS region. However, your FCT should contain definitions for only BDAM files
to be loaded on a CICS cold start. Other types of files are loaded from their file
definitions in RDO groups specified in the GRPLIST system initialization
parameter. Any definitions in the FCT other than for BDAM files are ignored.
FEPI={NO|YES}
specifies whether or not you want to use the Front End Programming Interface
feature (FEPI).
NO FEPI support is not required. You should specify NO on this parameter
(or allow it to default) if you do not have the feature installed, or if you
do not require FEPI support.
YES You require FEPI support, and CICS is to start the CSZI transaction.
This book does not contain any information about the installation process for
the Front End Programming Interface feature. Installation information can be
found in the CICS Front End Programming Interface User’s Guide.
The field separator allows you to use transaction identifications of less than four
characters followed by one of the separator characters. When less than four
characters are coded, the parameter is padded with blanks, so that the blank is
then a field separator. None of the specified field separator characters should
be part of a transaction identification; in particular, the use of alphabetic
characters as field separators is not recommended.
The character specified in the FLDSEP parameter must not be the same as any
character specified in the FLDSTRT parameter. This means that it is invalid to
allow both parameters to take the default value.
Restrictions
If you specify FLDSEP in the SIT, the characters must be enclosed in single
quotation marks.
The character specified in the FLDSTRT parameter must not be the same as
any character specified in the FLDSEP parameter. This means that it is invalid
to allow both parameters to take the default value.
Restrictions
If you specify FLDSTRT in the SIT, the parameter must be enclosed in single
quotation marks.
| FORCEQR will apply to all programs defined as threadsafe that are not invoked
| as task-related user exits, global user exits, or user-replaceable modules.
FSSTAFF={YES|NO}
specify this parameter in an application-owning region (AOR) to prevent
transactions initiated by function-shipped EXEC CICS START requests being
started against incorrect terminals.
To prevent this situation, code YES on the FSSTAFF parameter in the AOR.
YES When a START request is received from a terminal-owning region, and
a shipped definition for the terminal named on the request is already
installed in the AOR, the request is always shipped back to a TOR, for
routing, across the link it was received on, irrespective of the TOR
referenced in the remote terminal definition.
If the TOR to which the START request is returned is not the one
referenced in the installed remote terminal definition, a definition of the
terminal is shipped to the AOR, and the autoinstall user program is
called. Your autoinstall user program can then allocate an alias termid
in the AOR, to avoid a conflict with the previously installed remote
definition. For information about writing an autoinstall program to control
the installation of shipped definitions, see the CICS Customization
Guide.
NO When a START request is received from a terminal-owning region, and
a shipped definition for the named terminal is already installed in the
AOR, the request is shipped to the TOR referenced in the definition, for
routing.
Notes:
1. FSSTAFF has no effect:
v On statically-defined (hard-coded) remote terminal definitions in the AOR.
If you use these, START requests are always shipped to the TORs
referenced in the definitions.
v On START requests issued in the local region. It affects only START
requests shipped from other regions.
v When coded on intermediate regions in a transaction-routing path. It is
effective only when coded on an application-owning region.
2. If the AOR contains no remote definition of a terminal named on a shipped
START request, the “terminal not known” global user exits, XICTENF and
XALTENF, are called. For details of these exits, see the CICS Customization
Guide.
You can use apostrophes to punctuate your message, in addition to using them
as message delimiters. However, you must code two successive apostrophes to
represent a single apostrophe in your text. For example,
GMTEXT='User''s logon message text.'
Your message text can be from 1 through 246 characters (bytes), and can
extend over two lines by extending the text to column 80 on the first line, and
continuing in column 1 of the second line. For example, the following might be
used in the SYSIN data set:
* CICS Transaction Server for OS/390 Release 3 SYSTEM *
GMTEXT='An Information Development CICS Terminal-Owning Region (TOR) - C
ICSIDC. This message is to show the use of continuation lines when creating a GM
TEXT parameter in the SYSIN data set' (for first signon
The CSGM transaction displays this as follows (with the time appended to the
end of message):
Signon for CICS Transaction Server for OS/390 Release 3 APPLID CICSHTH1
For any transaction other than CESN that displays the text specified by this
parameter, you must use a TYPETERM with LOGONMSG(YES) for all
terminals requiring the logon message. For information about using
TYPETERM, see the CICS Resource Definition Guide.
Note: When either the CICS CESF transaction, or your own transaction,
attempts to sign off a terminal, the result is subject to the SIGNOFF
attribute of the TYPETERM resource definition for the terminal, as
follows:
SIGNOFF
Effect
YES The terminal is signed off, but not logged off.
NO The terminal remains signed on and logged on.
LOGOFF
The terminal is both signed off and logged off.
| Note: It is the responsibility of the user to see that these rules are kept.
| 5. The first character of the GRNAME cannot be a number.
would register to VTAM with the applid CICSHTH1 and the generic resource
CICSH###. Other LUs in the same sysplex can communicate with the CICS
region either through the generic resource or the applid.
The examples used here are based on a CICS naming convention described in
the MVS Sysplex Application Migration manual.
Note: There are rules that restrict CICS use of the VTAM generic resources
function; for more information see the CICS Intercommunication Guide.
GRPLIST={DFHLIST |name|(name[,name2][,name3][,name4])}
specifies the names (each 1 through 8 characters) of up to four lists of resource
definition groups on the CICS system definition (CSD) file. The resource
definitions in all the groups in the specified lists are loaded during initialization
when CICS performs a cold start. If a warm or emergency start is performed,
the resource definitions are derived from the global catalog, and the GRPLIST
parameter is ignored.
Each name can be either a real group list name or a generic group list name
that incorporates global filename characters (+ and *). If you specify more than
one group list (either by specifically coding two or more group list names or by
coding a group list name with global filename characters), the later group lists
are concatenated onto the first group list. Any duplicate resource definitions in
later group lists override those in earlier group lists.
Use the CEDA command LOCK to protect the lists of resource groups specified
on the GRPLIST parameter.
For example, if you want to use the four group lists CICSHT#1, CICSHTAP,
CICSHT3V, and CICSHTSD, you could specify either of the following system
initialization parameters:
GRPLIST=(CICSHT#1,CICSHTAP,CICSHT3V,CICSHTSD)
GRPLIST=(CICSHT*)
In the first example GRPLIST, the group lists are loaded in the order specified,
and resource definitions installed from the CICSHTSD group list will override
any duplicate definitions installed by the other groups.
In the second example GRPLIST, the group lists are loaded in the order
CICSHT#1, CICSHTAP, CICSHTSD, then CICSHT3V, and resource definitions
installed from the CICSHT3V group list will override any duplicate definitions
installed by the other groups.
and you want to replace the list CICSHT3V with the list ANOLST05, you should
specify the override:
GRPLIST=(CICSHT#1,CICSAP*,ANOLST05,CICSHTSD)
In general, any required resource definitions should appear in one of the group
lists specified on the GRPLIST system initialization parameter.
For information about resource definitions, groups, lists, and the CSD, see the
CICS Resource Definition Guide.
This parameter controls whether any of the three types of CICS trace entry are
written to GTF data sets. The three types are: CICS system trace (see the
SYSTR parameter), user trace (see the USERTR parameter), and exception
trace entries (which are always made and not controlled by a system
initialization parameter).
OFF CICS does not use GTF as a destination for CICS trace data.
ON CICS uses GTF as a destination for CICS trace data. To use the GTF
data sets for CICS trace data, you must have started GTF with the USR
option, in addition to coding GTFTR=ON.
For information about GTF, see the OS/390 MVS Diagnosis: Tools and Service
Aids manual, SY28-1085.
HPO={NO|YES}
specifies whether you want to use the VTAM authorized path feature of the high
performance option (HPO). If you code YES, the CICS type 6 SVC must be
link-edited in your MVS nucleus, and defined to MVS in an SVCPARM
statement. If the SVC number is not 215 (the default) you must specify the SVC
number on the SRBSVC parameter.
For information about installing the CICS type 6 SVC in your MVS system, and
about changing the default number, see the CICS Transaction Server for
OS/390 Installation Guide.
Restrictions
You can specify the HPO parameter in the system initialization table only.
ICP=COLD
specifies that you want to cold start the interval control program. See page 227
for further information. If COLD is not specified, the ICP start type will be
determined by the START and TS parameter values.
ICV={1000|number}
specifies the region exit time interval in milliseconds. The ICV system
initialization parameter specifies the maximum time in milliseconds that CICS
releases control to the operating system when there are no transactions ready
to resume processing. This time interval can be any integer in the range 100
through 3600000 milliseconds (specifying an interval up to 60 minutes). A
typical range of operation might be 100 through 2000 milliseconds.
A low value interval can enable much of the CICS nucleus to be retained in
dynamic storage, and not be paged-out at times of low terminal activity. This
reduces the amount of dynamic storage paging necessary for CICS to process
terminal transactions (thus representing a potential reduction in response time),
sometimes at the expense of concurrent batch region throughput. Large
networks with high terminal activity are inclined to run CICS without a need for
this value, except to handle the occasional, but unpredictable, period of
inactivity. These networks can usually function with a large interval (10000 to
3600000 milliseconds). Once a task has been initiated, its requests for terminal
services and the completion of the services are recognized by the system and
this maximum delay interval is overridden.
Note: The region exit time interval process contains a mechanism to ensure
that CICS does not constantly set and cancel timers (thus degrading
performance) while attempting to meet its objectives for a low region exit
time interval. This mechanism can cause CICS to release control to the
operating system for up to 0.5 seconds when the interval has been set at
less than 250; and up to 0.25 seconds more than the region exit time
interval when the interval has been set greater than 250.
ICVR={5000|number}
specifies the default runaway task time interval in milliseconds as a decimal
number. You can specify zero, or a number in the range 500 through 2700000,
in multiples of 500. CICS rounds down values that are not multiples of 500. This
is the RUNAWAY interval used by transactions defined with
RUNAWAY=SYSTEM (see the CICS Resource Definition Guide for further
information). CICS may purge a task if it has not given up control after the
RUNAWAY interval for the transaction (or ICVR if the transaction definition
specified RUNAWAY=SYSTEM). If you code ICVR=0, runaway task control is
inoperative for transactions specifying RUNAWAY=SYSTEM in their transaction
definition (that is, tasks do not get purged if they appear to be looping). The
ICVR value is independent of the ICV value, and can be less than the ICV
value. Note that CICS runaway task detection is based upon task time, that is,
the interval is decremented only when the task has control of the processor. For
information about commands that reinitialize the ICVR value, see the CICS
Problem Determination Guide.
ICVTSD={500|number}
specifies the terminal scan delay value. The terminal scan delay facility
determines how quickly CICS deals with some terminal I/O requests made by
applications. The range is 0 through 5000 milliseconds, with a default of
ICVTSD=500.
| Note: You can specify the INITPARM keyword and its parameters more than
| once; see page 343.
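The INITPARM parameter passes parameter strings to named programs, which retrieve
them with the EXEC CICS ASSIGN INITPARM command. For illustration only, with invented
program names and strings, the general form is:

INITPARM=(PROGABC='1,2,3',PROGXYZ='HELLO')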
| INTTR={ON|OFF}
specifies whether the internal CICS trace destination is to be activated at
system initialization.
This parameter controls whether any of the three types of CICS trace entry are
written to the internal trace table. The three types are: CICS system trace (see
the SYSTR parameter), user trace (see the USERTR parameter), and exception
trace entries (which are always made and not controlled by a system
initialization parameter).
ON Activate main storage trace.
OFF Do not activate main storage trace.
IRCSTRT={NO|YES}
specifies whether IRC is to be started up at system initialization. If
IRCSTRT=YES is not coded, IRC can be initialized by issuing a CEMT or
EXEC CICS SET IRC OPEN command.
ISC={NO|YES}
specifies whether the CICS programs required for interregion or intersystem
communication are to be included.
JESDI={30|number} (alternate)
specifies, in a SIT for an alternate XRF system, the JES delay interval, in
seconds, the minimum being 5 seconds. The alternate CICS region has to
ensure that the active CICS region has been canceled before it can take over
the resources owned by the active.
Note: You must give careful consideration to the values you specify for the
parameters ADI and JESDI so that they do not conflict with your
installation’s policy on PR/SM RESETTIME and the XCF INTERVAL and
OPNOTIFY intervals. You should ensure that the sum of the intervals you
specify for ADI and JESDI exceeds both the XCF INTERVAL value and the PR/SM
policy interval RESETTIME.
KEYFILE=key-database-path-name
Specifies the fully qualified HFS pathname of the key database created by the
gskkyman utility program for this CICS region. When you specify this parameter,
the CICS region userid must be authorized to read the specified HFS file.
| LGDFINT={30|number}
| specifies the log defer interval, in milliseconds, used by the CICS log
| manager when determining how long to delay a forced journal write request
| before invoking the MVS system logger.
| Note: The log defer interval can be modified dynamically by means of the
| CEMT SET SYSTEM command or EXEC CICS SET SYSTEM function,
| specifying the LOGDEFER option. However, it is not recommended that
| this value be modified in a production environment without a prior system
| evaluation and performance analysis of any changed value.
| The CICS Log manager uses the log defer interval value when
| calculating how long to delay a forced journal write request before
| invoking the MVS system logger. This delay is required since MVS
| system logger IXGWRITE processing has a longer pathlength than the
| equivalent BSAM write macro call used in the old-style journal
| management of CICS/ESA R4.1.0 and earlier releases.
| For CICS systems with many tasks issuing forced log write requests,
| these tasks will not be seen to be delayed for periods close to the log
| defer interval value, since on average a forced log write request will be
| issued while a log deferral is already being performed for another task.
| A log defer interval value of less than 30ms will reduce the delay in CICS
| Log manager before invoking the IXGWRITE macro. This will improve
| the transaction response time, but will increase CPU cost for the system
| since CICS will buffer fewer journal requests into a given call to the MVS
| system logger, and so have to invoke the IXGWRITE macro more often.
| Conversely, increasing the log defer interval value above 30 will increase
| the transaction response time since CICS will increase the delay period
| before invoking the IXGWRITE macro. However, more transactions will
| be able to write their own log data into the same log buffer before it is
| written to the MVS system logger and hence the total CPU cost of
| driving IXGWRITE calls will be reduced.
| Note also that the log defer interval value is restored from the SIT after
| any CICS restart.
| LGNMSG={NO|YES}
specifies whether VTAM logon data is to be made available to an application
program.
NO VTAM logon data is not available to an application program.
YES VTAM logon data is available to an application program. The data can
be retrieved with an EXEC CICS EXTRACT LOGONMSG command.
For programming information about this command, see the CICS
Application Programming Reference manual.
You can use this parameter with the GMTRAN parameter to retrieve the
VTAM logon data at the time a terminal is logged on to CICS by VTAM.
LLACOPY={YES|NO|NEWCOPY}
specifies whether CICS is to use the LLACOPY macro or the BLDL macro when
locating modules in the DFHRPL concatenation.
YES CICS always uses the LLACOPY macro when locating modules in the
DFHRPL concatenation.
NO CICS always uses the BLDL macro when locating modules in the
DFHRPL concatenation.
NEWCOPY
CICS uses the LLACOPY only when a NEWCOPY or a PHASEIN is
being performed. At all other times, CICS uses the BLDL macro when
locating modules in the DFHRPL concatenation.
Notes:
1. If you code LLACOPY=NO or LLACOPY=NEWCOPY you can still benefit
from having LLA managed data sets within your DFHRPL concatenation.
Modules will continue to be loaded from VLF if appropriate.
2. If an LLA managed module has been altered, a BLDL macro may not return
the new information and a subsequent load will still return the old copy of
the module. To load the new module, an LLACOPY must be issued against
that module or a MODIFY LLA,REFRESH command must be issued on a
system console.
LPA={NO|YES}
| specifies whether any CICS or user modules can be used from the link pack
areas.
| NO CICS and user modules are not used from the link pack areas.
| YES CICS or user modules installed in the LPA or in the ELPA can be used
from there, instead of being loaded into the CICS region.
A list of the CICS modules that are read-only, and hence eligible for
residence in the link pack areas (LPA or ELPA), is contained in the
SMP/E USERMOD supplied on the distribution tape in the
CICSTS13.CICS.SDFHSAMP library, in a member called DFH$UMOD. For
details of the CICS system initialization parameter PRVMOD that you
can use to override LPA=YES for selected modules, see page 281.
| If the demand for open TCBs is greater than the limit set by MAXOPENTCBS,
| tasks are queued waiting for a TCB.
MCT={NO|YES|xx}
specifies the monitoring control table suffix. (See page 227.) If you specify
MCT=NO, CICS monitoring builds dynamically a default MCT, ensuring that
default monitoring control table entries are always available for use when
monitoring is on and a monitoring class (or classes) is active.
For information about coding the macros for this table, see the CICS Resource
Definition Guide.
MN={OFF|ON}
specifies whether monitoring is to be switched on or off at initialization. Use
the individual monitoring class parameters to control which monitoring classes
are to be active. (See the MNEVE, MNEXC, and MNPER parameter
descriptions.) The default status is that the CICS monitoring facility is off. The
monitoring status is recorded in the CICS global catalog for use during warm
and emergency restarts.
OFF Switch off monitoring.
ON Switch on monitoring. However, unless at least one individual class is
active, no monitoring records are written. For details of the effect of
monitoring status being on or off, in conjunction with the status of the
various monitoring classes, see the following notes:
Notes:
1. If the monitoring status is ON, CICS accumulates monitoring data
continuously and, depending on the status of each of the monitoring
classes, processes the accumulated data as follows:
v For the performance and exception monitoring classes, CICS writes the
monitoring data for each class that is active to a system management
facilities (SMF) data set.
v For the SYSEVENT monitoring class, CICS notifies the MVS system
resources manager (SRM) of the completion of each transaction. This
data can be reported using the resource measurement facility (RMF), or
written to SMF data sets, depending on the RMF options in force.
For information about the effect of SYSEVENT recording in an MVS
workload manager environment, see the CICS Performance Guide.
If the monitoring status is OFF, CICS does not accumulate or write any
monitoring data, even if any of the monitoring classes are active.
2. When you change the status of monitoring, the change takes effect
immediately. If you change the monitoring status from OFF to ON,
monitoring starts to accumulate data and write monitoring records to SMF
for all tasks that start after the status change is made for all active
monitoring classes. If the status is changed from ON to OFF, monitoring
stops writing records immediately and does not accumulate monitoring data
for any tasks that start after the status change is made.
3. The monitoring status operand can be manipulated independently of the
class settings. This means that, even if the monitoring status is OFF, you
can change the monitoring class settings and the changes take effect for all
tasks that are started after the monitoring status is next set to ON.
For programming information about controlling CICS monitoring, see the CICS
System Programming Reference manual.
MNCONV={NO|YES}
specifies whether or not conversational tasks are to have separate performance
class records produced for each pair of terminal control I/O requests.
Any clock (including user-defined) that is active at the time such a performance
class record is produced is stopped immediately before the record is written.
After the record is written, such a clock is reset to zero and restarted. Thus a
clock whose activity spans more than one recording interval within the
conversational task appears in multiple records, each showing part of the time,
and the parts adding up to the total time the clock is active. The
high-water-mark fields (which record maximum levels of storage used) are reset
to their current values. All other fields are set to X'00', except for the key fields
(transid, termid). The monitoring converse status is recorded in the CICS global
catalog for use during warm and emergency restarts.
MNEVE={OFF|ON}
specifies whether SYSEVENT monitoring is to be made active during CICS
initialization. The monitoring SYSEVENT status is recorded in the CICS global
catalog for use during warm and emergency restarts.
OFF Set SYSEVENT monitoring to “not active”.
ON Set SYSEVENT monitoring to “active”.
For programming information about exception monitoring records, see the CICS
Customization Guide.
MNFREQ={0|hhmmss}
specifies the interval for which CICS automatically produces a transaction
performance class record for any long-running transaction. The monitoring
frequency value is recorded in the CICS global catalog for use during warm and
emergency restarts.
0 No frequency monitoring is active.
hhmmss
The interval for which monitoring produces automatically a transaction
performance class record for any long-running transaction. Specify a 1
to 6 digit number in the range 001500–240000. Numbers that are fewer
than six digits are padded with leading zeroes.
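For example (an illustrative value only), coding:
   MNFREQ=3000
is padded to 003000, requesting a performance class record for each
long-running transaction every 30 minutes.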
MNPER={OFF|ON}
specifies whether the monitoring performance class is to be made active during
CICS initialization. The monitoring performance class status is recorded in the
CICS global catalog for use during warm and emergency restarts.
OFF Set the performance monitoring class to “not active”.
ON Set the performance monitoring class to “active”.
For background information on the SYSEVENT class of monitoring data and the
subsystem identification, and about the implications for SYSEVENT recording in
an MVS Workload Manager environment, see the CICS Performance Guide.
MNSYNC={NO|YES}
specifies whether you want CICS to produce a transaction performance class
record when a transaction takes an implicit or explicit syncpoint (unit-of-work).
No action is taken for syncpoint rollbacks. The monitoring syncpoint status is
recorded in the CICS global catalog for use during warm and emergency
restarts.
MNTIME={GMT|LOCAL}
specifies whether you want the time stamp fields in the performance class
monitoring data to be returned to an application using the EXEC CICS
COLLECT STATISTICS MONITOR command in either GMT or local time.
Note: The MQCONN parameter works only if you are using the
MQSeries-supplied program, CSQCCODF, to start the
CICS-MQSeries connection. MQCONN will not work with your
own-written attach program if it has a different name.
For more information about starting a connection to an MQSeries queue
manager, see MQSeries for MVS/ESA: System Management Guide,
SC33-0806.
MROBTCH={1|number}
specifies the number of events that must occur before CICS is posted for
dispatch due to the batching mechanism. The number can be in the range 1
through 255, and the default is 1.
Use this batching mechanism to spread the overhead of dispatching CICS over
several tasks. If the value is greater than 1 and CICS is in a system wait, CICS
is not posted for dispatch until the specified number of events has occurred.
Events include MRO requests from connected systems or DASD I/O. For these
events, CICS is dispatched as soon as one of the following occurs:
v The current batch fills up (the number of events equals MROBTCH)
v An ICV interval expires
Therefore, ensure that the time interval you specify in the ICV parameter is low
enough to prevent undue delay to the system.
If CICS is dispatched for another reason, the current batch is dealt with in that
dispatch of CICS.
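As an illustration only, with the overrides:
   MROBTCH=5,
   ICV=1000
a CICS region in a system wait is posted either when five such events have
accumulated or when the one-second ICV interval expires, whichever occurs
first.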
You should review the region size specified on the REGION parameter for CICS
address spaces. The increase in CICS use of virtual storage above the 16MB
boundary means that you will probably need to increase the REGION
parameter.
If you are running with transaction isolation active, CICS allocates storage for
task-lifetime storage in multiples of 1MB for user-key tasks that run above the
16MB boundary. (1MB is the minimum unit of storage allocation above the line
for the EUDSA when transaction isolation is active.) However, although storage
is allocated in multiples of 1MB above the 16MB boundary, MVS paging activity
affects only the storage that is actually used (referenced), and unused parts of
the 1MB allocation are not paged.
The subspace group facility uses more real storage, as MVS creates for each
subspace a page and segment table from real storage. The CICS requirement
for real storage varies according to the transaction load at any one time. As a
guideline, each task in the system requires 9KB of real storage, and this should
be multiplied by the number of concurrent tasks that can be in the system at
any one time (governed by the MXT system initialization parameter).
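As an illustration of this guideline only, a region running with transaction
isolation and MXT=100 could need roughly 100 multiplied by 9KB, that is about
900KB, of additional real storage for the subspace page and segment tables.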
However, automatic DSA sizing removes the need for accurate storage
estimates, with CICS dynamically changing the size of DSAs as demand
requires.
Note: The MXT value does not include CICS system tasks.
NATLANG=(E,x,y,z,...)
specifies the single-character codes for the languages to be supported in this
CICS run, selected from the codes in Table 32 on page 273.
E English, which is the system default (that is, is provided even if you do
not specifically code E).
x,y,z,...
Specify the appropriate letters for the other supported languages that
you require.
For the codes that you specify on this parameter, you must ensure that a
DFHMET1x module (where x is the language code) is in a library in the
STEPLIB DD concatenation of the CICS startup JCL. (For full language support,
you must also provide other DFHMEyyx modules.) For information about using
the message editing utility to create your own DFHMEyyx modules, see the
CICS Operations and Utilities Guide.
The first language code specifies the default language for those elements of
CICS enabled to receive National Language Support (NLS) messages, such as
some destinations used for CICS messages, and the terminals or users not
signed-on with an NLS code. The other language codes are provided to specify
the language to be used for messages sent to terminals that are defined with
the appropriate language support code. For example, coding NATLANG=(F,G,S)
has the same effect as coding NATLANG=(F,G,E,S); that is, in both cases the
default NLS language is French (F), and the languages English, German (G),
and Spanish (S) are supported. (For such support, you would have to create
and install the modules DFHMET1F, DFHMET1G, and DFHMET1S into a library
in the STEPLIB DD concatenation of the CICS startup JCL.)
Note:
The following language module suffixes are not supported by the message editing utility:
v E - English master data sets.
v K - Japanese data sets, where translation is performed by IBM.
v C - Simplified Chinese data sets, where translation is performed by IBM.
The NATLANG code is used as the suffix of the message modules for the associated
language.
| NCPLDFT={DFHNC001|name}
| specifies the name of the default named counter pool to be used by the CICS
| region on calls it makes to a named counter server. If CICS cannot determine,
| from the named counter options table, the pool name required by an EXEC
| CICS named counter command, CICS uses the default name specified on the
| NCPLDFT parameter.
In a warm restart, CICS uses the installed resource definitions saved in the
CICS global catalog at warm shutdown, and therefore the CSD, FCT, and
GRPLIST parameters are ignored. (At CICS startup, you can only modify
installed resource definitions, including file control table entries, or change to a
new FCT, by performing a cold start of CICS with START=COLD.)
For more information about the use of the NEWSIT parameter, see “Classes of
start and restart” on page 325.
Restrictions
You can specify the NEWSIT parameter in PARM, SYSIN, or CONSOLE only.
OFFSITE={NO|YES}
specifies whether CICS is to restart in off-site recovery mode; that is, a restart
is taking place at a remote site.
Note: For a successful off-site restart, the log records of the failed CICS region
must be available at the remote site. CICS does not provide a facility for
shipping log records to a remote backup site, but you can use a suitable
vendor product to perform this function. See the relevant product
documentation for other procedures you need to follow for a remote site
restart.
See the CICS Recovery and Restart Guide for more information about
remote site recovery.
NO CICS will not perform the special restart processing required for remote
site recovery.
YES CICS will perform an off-site restart at a remote site following a disaster
at the primary site. CICS performs this special processing for an off-site
restart, because some information (for example, a VSAM lock structure)
is not available at the remote site.
CICS performs an emergency restart, even if the global catalog
indicates that CICS can do a warm start. OFFSITE=YES is valid with
START=AUTO only, and CICS initialization is terminated if you specify
START=COLD or INITIAL.
Restrictions
You can specify the OFFSITE parameter in PARM, SYSIN, or CONSOLE only.
OPERTIM={120|number}
specifies the write-to-operator timeout value, in the range 0 through 86400
seconds (24 hours). This is the maximum time (in seconds) that CICS waits for
a reply before returning control to this transaction. For information about using
the write-to-operator timeout value, see the CICS Application Programming
Reference manual.
For information about coding the macros for this table, see the CICS Resource
Definition Guide.
PGAICTLG={MODIFY|NONE|ALL}
specifies whether autoinstalled program definitions should be cataloged.
This reduces the risk of creating a nonunique command. (See Note 1.)
Restrictions
For information about coding the macros for this table, see the CICS Resource
Definition Guide.
PLTPISEC={NONE|CMDSEC|RESSEC|ALL}
specifies whether or not you want CICS to perform command security or
resource security checking for PLT programs during CICS initialization. The PLT
programs run under the authority of the userid specified on PLTPIUSR, which
must be authorized to the appropriate resources defined by PLTPISEC.
NONE You do not want any security checking on PLT initialization programs.
CMDSEC
You want CICS to perform command security checking only.
RESSEC
You want CICS to perform resource security checking only.
ALL You want CICS to perform both command and resource security
checking.
Restrictions You can specify the PLTPISEC parameter in the SIT, PARM, or
SYSIN only.
PLTPIUSR=userid
specifies the userid that CICS is to use for security checking for PLT programs
that run during CICS initialization. All PLT programs run under the authority of
this userid. PLT programs are run under the CICS internal transaction, CPLT. Before the
CPLT transaction is attached, CICS performs a surrogate user check against
the CICS region userid (the userid under which the CICS region is executing).
This is to ensure that the CICS region is authorized as a surrogate for the
userid specified on the PLTPIUSR parameter. This ensures that you cannot
arbitrarily specify any PLT userid in any CICS region—each PLT userid must
first be authorized to the appropriate CICS region.
If you do not specify the PLTPIUSR parameter, CICS runs PLTPI programs
under the authority of the CICS region userid, in which case CICS does not
perform a surrogate user check. However, the CICS region userid must be
authorized to all the resources referenced by the PLT programs.
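As an illustration, if you code:
   PLTPIUSR=PLTUSER
where PLTUSER is a hypothetical userid defined to RACF, the CICS region userid
must be authorized as a surrogate for PLTUSER, and PLTUSER must be authorized
to the resources referenced by the PLT programs.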
Restrictions You can specify the PLTPIUSR parameter in the SIT, PARM, or
SYSIN only.
PLTSD={NO|xx|YES}
specifies a program list table that contains a list of programs to be executed
during system termination (see page 227).
PRGDLAY={0|hhmm}
specifies the BMS purge delay time interval that is added to the specified
delivery time to determine when a message is to be considered undeliverable
and therefore purged. This time interval is specified in the form “hhmm” (where
“hh” represents hours from 00 to 99 and “mm” represents minutes from 00 to
59). If PRGDLAY is not coded, or is given a zero value, a message remains
eligible for delivery either until it is purged or until temporary storage is cold
started.
Note: If you specify PRGDLAY as a SIT override, you must still specify a
4-character value (for example 0000).
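For example (an illustrative value only), coding:
   PRGDLAY=0030
specifies a purge delay of 30 minutes; note that all four characters must be
coded when PRGDLAY is specified as a SIT override.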
The PRGDLAY facility requires the use of full function BMS. Note also that you
must code a PRGDLAY value if you want the ERRTERM|ERRTERM(name)
parameter on EXEC CICS ROUTE commands to be operative. For
programming information about notification of undelivered messages, see the
CICS Application Programming Reference manual.
The PRGDLAY value determines the interval between terminal page clean-up
operations. A very low value causes the CSPQ transaction to be initiated
continuously, and can have a detrimental effect on task-related resources. A
zero value stops CSPQ initiating terminal page clean-up. However, this can
cause messages to stay in the system forever, resulting in performance
problems with long AID queues or lack of temporary storage. The actual purge
delay time interval specified is dependent on individual system requirements.
PRINT={NO|YES|PA1|PA2|PA3}
specifies the method of requesting printout of the contents of a 3270 screen.
NO Screen copying is not required.
YES Screen copying can be requested by terminal control print requests
only.
When YES, PA1, PA2, or PA3 is specified, transaction CSPP is initiated which
invokes program DFHP3270. The transaction and programs are defined in the
CSD group DFHHARDC. In the case of 3270 and LUTYPE2 logical units, the
resources defined in CSD group DFHVTAMP are required.
The 3270 print-request facility allows either the application program or the
terminal operator to request a printout of data currently displayed on the 3270
display. This facility is not supported for TCAM devices.
For a VTAM 3270 display without the printer-adapter feature, the PRINT request
prints the contents of the display on the first available 3270 printer specified by
PRINTER and ALTPRINTER options of the RDO TERMINAL definition. For a
printer to be considered available, it must be in service and not currently
attached to a task. It is not necessary for the printer to be on the same control
unit.
In an MRO environment, the printer must be owned by the same system as the
VTAM 3270 display.
For the 3275 with the printer-adapter feature, the PRINT request prints the data
currently in the 3275 display buffer on the 3284 Model 3 printer attached to the
3275.
The format of the print operation depends on the size of the display buffer. For
a 40-character wide display, the print format is a 40-byte line, and for an
80-character wide display the format is an 80-byte line.
For the 3270 compatibility mode logical unit of the 3790 (if the logical unit has
the printer-adapter feature specified), the PRINT request prints the contents of
the display on the first printer available to the 3790. The allocation of the printer
to be used is under the control of the 3790.
For 3274, 3276, and LUTYPE2 logical units with the printer-adapter feature, the
PRINT request prints the contents of the display on the first printer available to
the 3270 control unit. The printer to be allocated depends on the printer
authorization matrix.
For the 3270 compatibility mode logical unit without the printer-adapter feature,
see the preceding paragraph on VTAM 3270 displays without the
printer-adapter feature.
The priority aging factor is used to increase the effective priority of a task
according to the amount of time it is held on a ready queue. The value
represents the number of milliseconds that must elapse before the priority of a
waiting task can be adjusted upwards by 1. For example, if you code
PRTYAGE=3000, a task has its priority raised by 1 for every 3000 milliseconds
it is held on the ready queue. Thus a high value for PRTYAGE results in a task
being promoted very slowly up the priority increment range, and a low value
enables a task to have its priority incremented quickly.
If you specify a value of 0, the priority aging algorithm is not used (task
priorities are not modified by age) and tasks on the ready queue are handled
according to the user assigned priority.
PRVMOD={name|(name,name...name)}
specifies the names of those modules that are not to be used from the LPA.
The operand is a list of 1- to 8-character module names. This enables you to
use a private version of a CICS nucleus module in the CICS address space,
and not a version that might be in the LPA. For information about PRVMOD,
see the CICS Transaction Server for OS/390 Installation Guide.
Note: If you require DL/I security checking, you must specify the XPSB system
initialization parameter as XPSB=YES or XPSB=name. For further
information about the XPSB system initialization parameter, see page 313.
PSDINT={0|hhmmss}
specifies the persistent session delay interval. This delay interval specifies if,
and for how long, VTAM is to hold sessions in a recovery-pending state if CICS
fails. The value for hours can be in the range 0 through 23; the minutes and
seconds in the range 00 through 59 inclusive.
This value can be overridden during CICS execution (and hence change the
action taken by VTAM if CICS fails).
0 If CICS fails, sessions are terminated. This is the default.
hhmmss
A persistent session delay interval from 1 second up to the maximum of
23 hours 59 minutes and 59 seconds. If CICS fails, VTAM holds
sessions in recovery pending state for up to the interval specified on the
PSDINT system initialization parameter.
Specify a 1-to-6 digit time in hours, minutes and seconds, up to the
maximum time. If you specify less than six digits, CICS pads the value
with leading zeros. Thus a value of 500 is taken as five minutes exactly.
The interval you specify must cover the time from when CICS fails to
when the VTAM ACB is opened by CICS during the subsequent
emergency restart.
VTAM holds all sessions in recovery pending state for up to the interval
specified (unless they are unbound through path failure or VTAM operator
action, or other-system action in the case of intelligent LUs). The PSDINT value
used must take account of the types and numbers of sessions involved.
You must exercise care when specifying large PSDINT values because of the
problems they may give in some environments, in particular:
v Dial-up sessions—real costs may be incurred
v LU6.2 sessions to other host systems—such systems may become stressed
Notes:
1. When specifying a PSDINT value, you must consider the number and, more
particularly, the nature of the sessions involved. If LU6.2 sessions to other
host systems are retained in recovery pending state, the other host systems
may experience excessive queuing delays. This point applies to LU6.1
sessions which are retained until restart (when they are unbound).
2. The PSDINT parameter is incompatible with the XRF=YES parameter. If
XRF=YES is specified, the PSDINT parameter is ignored.
PSTYPE={SNPS|MNPS}
specifies whether CICS is running with VTAM Single Node Persistent Sessions
(SNPS) or Multi Node Persistent Sessions (MNPS). Code this parameter if you
are using VTAM MNPS and you wish to recover sessions when the VTAM ACB
| is opened after a VTAM failure. You should read the VTAM Network
| Implementation Guide to see how VTAM should be set up to use MNPS and
| under what conditions sessions persist for MNPS.
For information about the use of PVDELAY, see the CICS Performance Guide.
QUIESTIM={240|number}
specifies a timeout value for data set quiesce requests.
In a busy CICSplex, it is possible for the default timeout to expire before the
quiesce request has been processed by all the CICS regions, even though
there is nothing wrong. If the quiesce operation is not completed when the
timeout period expires, SMS VSAM cancels the quiesce. If you find that timeout
is occurring too frequently, increase the timeout value.
Specify the timeout value as a number of seconds. The default value is 240
seconds (4 minutes).
RAMAX={256|value}
specifies the size in bytes of the I/O area allocated for each RECEIVE ANY
issued by CICS, in the range 0 through 32767 bytes.
Note: If you are using APPC, do not code a value less than 256; otherwise, the
results are unpredictable.
For information about coding this parameter, see the CICS Performance Guide.
If value1 = 1, value2 = 1
If value1 ≤ 5, value2 = (value1 minus 1)
If value1 ≥ 6 and ≤ 49, value2 = 5
If value1 ≥ 50, value2 is 10 per cent of value1
Note: You should code value1 equal to or greater than value2; if you code
value1 less than value2, CICS forces value2 equal to value1.
or in an HPO system:
| This typically happens only if a protocol error has occurred, and sessions are
| waiting for a response; for example, to a BID SHUTD request from CICS.
| Each session is unbound, the Receive_Any data is lost and the RA RPL is
| reissued thus allowing VTAM activity to continue: Message DFHZC4949 is
| issued for each session affected.
| Consider increasing the size of the RAPOOL before resorting to the use of
| FORCE.
The number of RECEIVE ANYs needed depends on the expected activity of the
system, the average transaction lifetime, and the MAXTASK value specified. For
information about coding this parameter, see the CICS Performance Guide.
RDSASZE={0K|number}
specifies the size of the RDSA. The default size is 0, indicating that the DSA
size can change dynamically. A non-zero value indicates that the DSA size is
fixed.
number
specify number as an amount of storage in the range 0 to 16777215
bytes in multiples of 262144 bytes (256KB). If the size specified is not a
multiple of 256KB, CICS rounds the value up to the next multiple.
You can specify number in bytes (for example, 4194304), or as a whole
number of kilobytes (for example, 4096K), or a whole number of
megabytes (for example, 4M).
RENTPGM={PROTECT|NOPROTECT}
specifies whether you want CICS to allocate the read-only DSAs, RDSA and
ERDSA, from read-only key-0 protected storage. The permitted values are
PROTECT (the default), or NOPROTECT:
PROTECT
CICS obtains the storage for the read-only DSAs from key-0 protected
storage.
NOPROTECT
CICS obtains the storage from CICS-key storage, effectively creating
Restrictions You can specify the RESSEC parameter in the SIT, PARM, or
SYSIN only.
RLS={NO|YES}
specifies whether CICS is to support VSAM record-level sharing (RLS).
NO RLS support is not required in this CICS region. Files whose definitions
specify RLSACCESS(YES) will fail to open, with an error indicating that
RLS access is not supported. You should not specify RLS=NO if you
have files that you want to open in RLS access mode (including the
CSD).
YES RLS support is required in this CICS region. During initialization, CICS
automatically registers with an SMSVSAM control ACB to enable RLS
access to files opened with RLSACCESS(YES).
RLSTOLSR={NO|YES}
specifies whether CICS is to include files that are to be opened in RLS mode
when calculating the number of buffers, strings, and other resources for an LSR
pool. CICS performs this calculation only when you have not explicitly defined
an LSRPOOL resource definition that corresponds to an LSRPOOLID in a file
definition. CICS calculates and builds a default LSR pool only when it is
opening the first file in LSR mode that references the default pool.
NO CICS is not to include files opened in RLS mode, and which also
specify an LSRPOOLID, when it is building default LSR pools.
The RLSTOLSR parameter is provided to support files that are normally opened
in RLS mode, but which may be closed and then switched to LSR mode.
If LSR pools are not defined explicitly using LSRPOOL resource definitions,
CICS calculates the resources needed for an LSR pool using default attributes.
CICS performs this calculation when opening the first file that specifies an LSR
pool that is not explicitly defined. To calculate a default LSR pool, CICS scans
all the file entries to count all the files that specify the same LSRPOOLID. The
size of an LSR pool built dynamically in this way remains fixed until all files that
reference the LSR pool are closed. After all files have been closed, another
request to open a file with the same LSRPOOLID causes CICS to recalculate
the size.
If you add files to the system after the LSR calculation has been performed
there may be insufficient storage available to enable CICS to open a file that
specifies a default pool. This situation could occur if files are opened initially in
RLS mode and later closed and reopened in LSR mode. There are two ways to
ensure that enough resources are built into the LSR pool to support subsequent
switches of files from RLS to LSR:
1. You can explicitly define LSRPOOL resource definitions that correspond to
the LSRPOOLIDs on file definitions, removing the need for CICS to
calculate default values.
2. You can specify RLSTOLSR=YES to force CICS to include RLS files when
calculating defaults.
RMTRAN=({CSGM|name1}[,{CSGM|name2}])
specifies the name of the transaction that you want an alternate CICS to initiate
when logged-on class 1 terminals, which are defined with the attribute
RECOVNOTIFY(TRANSACTION) specified, are switched following a takeover.
This parameter is applicable only on an alternate CICS region.
If you do not specify a name here, CICS uses the CSGM transaction, the
default CICS good morning transaction.
If you are running CICS with XRF=YES, and you are using DBCTL, you must
specify an RST if you want XRF support for DBCTL. For information about the
use of the RST in a CICS-DBCTL environment with XRF=YES, see the CICS
IMS Database Control Guide .
| RUWAPOOL={NO|YES}
| specifies the option for allocating a storage pool the first time an LE-conforming
| program runs in a task.
| NO CICS disables the option and provides no RUWA storage pool. Every
| EXEC CICS LINK to an LE-conforming application results in a
| GETMAIN for RUWA storage.
| YES CICS creates a pool of storage the first time an LE-conforming program
| runs in a task. This provides an available storage pool that reduces the
| need to GETMAIN and FREEMAIN run-unit work areas (RUWAs) for
| every EXEC CICS LINK request.
| SDSASZE={0K|number}
specifies the size of the SDSA. The default size is 0, indicating that the DSA
size can change dynamically. A non-zero value indicates that the DSA size is
fixed.
number
specify number as an amount of storage in the range 0 to 16777215
bytes in multiples of 262144 bytes (256KB). If the size specified is not a
multiple of 256KB, CICS rounds the value up to the next multiple.
You can specify number in bytes (for example, 4194304), or as a whole
number of kilobytes (for example, 4096K), or a whole number of
megabytes (for example, 4M).
SDTRAN={CESD|name_of_shutdown_tran|NO}
specifies the name of the shutdown transaction to be started at the beginning of
normal and immediate shutdown.
Note: You must also ensure that the default userid (CICSUSER or
another userid specified on the DFLTUSER system initialization
parameter) has been defined to RACF.
Note: With MRO bind-time security, even if you specify SEC=NO, the
CICS region userid is still sent to the secondary CICS region,
and bind-time checking is still carried out in the secondary CICS
region. For information about MRO bind-time security, see the
CICS RACF Security Guide.
Define whether to use RACF for resource level checking by using the XDCT,
XFCT, XJCT, XPCT, XPPT, XPSB, and XTST system initialization parameters.
Define whether to use RACF for transaction-attach security checking by using
the XTRAN system initialization parameter.
For programming information about the use of external security for CICS
system commands, see the CICS System Programming Reference manual.
Table 33. Results of RACF authorization requests (with SEC=YES)

Access permission defined        Access intent in application
to RACF for CICS user            READ             UPDATE

NONE                             Refused          Refused
READ                             Permitted        Refused
UPDATE                           Permitted        Permitted
Restrictions
You can specify the SEC parameter in the SIT, PARM, or SYSIN only.
SECPRFX={NO|YES}
specifies whether CICS is to prefix the resource names in any authorization
requests to RACF with a prefix corresponding to the RACF userid for the CICS
region. The prefix to be used (the userid for the CICS region) is obtained by the
DFHIRP module.
NO CICS does not prefix the resource names in any authorization requests
to RACF.
YES The RACF userid is used as the prefix for CICS resources defined to
RACF. CICS prefixes the resource name in any authorization requests
to RACF with a prefix corresponding to the RACF userid of the CICS
region, obtaining the userid as follows:
v If you start CICS as a job, the prefix corresponds to the USER
operand coded on the JOB statement of the CICS startup job stream.
v If you start CICS as a started task, the prefix corresponds to the
RACF userid associated with the name of the start procedure in the
RACF ICHRIN03 table.
v If you start a CICS job without an associated RACF userid, the prefix
defaults to CICS.
For information about using the PREFIX option, see the CICS RACF
Security Guide.
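As an illustration, if the CICS region runs under the hypothetical RACF userid
CICSHT01, SECPRFX=YES causes an authorization check on a resource named
PAYFILE to be passed to RACF as CICSHT01.PAYFILE.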
Restrictions You can specify the SECPRFX parameter in the SIT, PARM, or
SYSIN only.
The SECPRFX parameter is effective only if you specify YES for the SEC
system initialization parameter.
SIT=xx
specifies the suffix, if any, of the system initialization table that you want CICS
to load at the start of initialization. If you omit this parameter, CICS loads the
unsuffixed table, DFHSIT, which is pregenerated with all the default values.
Note: If full function BMS is used, all PA keys and PF keys are interpreted for
page retrieval commands, even if some of these keys are not defined.
SNSCOPE={NONE|CICS|MVSIMAGE|SYSPLEX}
specifies whether a userid can be signed on to CICS more than once, within the
scope of:
v A single CICS region
v A single MVS image
v A sysplex
The signon SCOPE is enforced with the MVS ENQ macro where there is a limit
on the number of outstanding MVS ENQs per address space. If this limit is
exceeded, the MVS ENQ is rejected and CICS is unable to detect if the user is
already signed on. When this happens, the signon request is rejected with
message DFHCE3587. See the OS/390 MVS Programming: Authorized
Assembler Services Guide for guidance on increasing the MVS ENQ limit.
NONE Each userid can be used to sign on for any number of sessions on any
CICS region. This is the compatibility option, providing the same signon
scope as in releases of CICS before CICS Transaction Server for
OS/390 Release 3.
CICS Each userid can be signed on once only in the same CICS region. A
signon request is rejected if the userid is already signed on to the same
CICS region. However, the userid can be used to signon to another
CICS region in the same, or another, MVS image.
MVSIMAGE
Each userid can be signed on once only, and to only one of the set of
CICS regions in the same MVS image that also specify
SNSCOPE=MVSIMAGE. A signon request is rejected if the user is
already signed on to another CICS region in the same MVS image.
SYSPLEX
Each userid can be signed on once only, and to only one of the set of
CICS regions within an MVS sysplex that also specify
SNSCOPE=SYSPLEX. A signon is rejected if the user is already signed
on to another CICS region in the same MVS sysplex.
Restrictions You can specify the SNSCOPE parameter in the SIT, PARM, or
SYSIN only.
SPCTR={(1,2)|(1[,2][,3])|ALL|OFF}
specifies the level of tracing for all CICS components used by a transaction,
terminal, or both, selected for special tracing. If you want to set different tracing
levels for an individual component of CICS, use the SPCTRxx system
initialization parameter. You can select up to three levels of tracing, but some
CICS components do not have trace points at all these levels. For a list of all
the available trace points and their level numbers, see the CICS User’s
Handbook. For information about the differences between special and standard
CICS tracing, see the CICS Problem Determination Guide.
number
The level numbers for the level of special tracing you want for all CICS
components. The options are: 1, (1,2), or (1,2,3). The default, (1,2),
specifies special tracing for levels 1 and 2 for all CICS components.
ALL Enables the special tracing facility for all available levels.
OFF Disables the special tracing facility.
SPCTRxx={(1,2)|(1[,2][,3])|ALL|OFF}
specifies the level of tracing for a particular CICS component used by a
transaction, terminal, or both, selected for special tracing. You identify the
component by coding a value for xx in the keyword. You code one SPCTRxx
keyword for each component you want to define selectively. For a CICS
component being specially traced that does not have its trace level set by
SPCTRxx, the trace level is that set by SPCTR (which, in turn, defaults to
(1,2)). You can select up to three levels of tracing, but some CICS components
do not have trace points at all these levels. The CICS component codes that
you can specify for xx on the SPCTRxx keyword are shown in Table 34:
Table 34. CICS component names and abbreviations

Code  Component name              Code  Component name
AP    Application domain          BM    Basic mapping support
BF    Built-in function           BR    3270 Bridge
Note: The component codes BF, BM, CP, DC, DI, EI, FC, IC, IS, KC, PC, SC,
SP, TC, TD, and UE are sub-components of the AP domain. As such, the
corresponding trace entries will be produced with a point ID of AP nnnn.
For details of using trace, see the CICS Problem Determination Guide.
number
The level numbers for the level of special tracing you want for the CICS
component indicated by xx. The options are: 1, (1,2), or (1,2,3).
ALL You want all the available levels of special CICS tracing switched on for
the specified component.
OFF Switches off all levels of special CICS tracing for the CICS component
indicated by xx.
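For example (illustrative values only), coding:
   SPCTRAP=(1,2)
requests special trace levels 1 and 2 for the application (AP) domain; any
component for which you do not code an SPCTRxx keyword takes the levels set by
the SPCTR parameter.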
Note: If you use the CICS spool interface, this makes use of the MVS exit
IEFDOIXT, which is provided in the SYS1.LINKLIB library. If you have a
For further information about the MVS exit IEFDOIXT, see the OS/390 MVS
Installation Exits, SC28-1753.
SRBSVC={215|number}
specifies the number that you have assigned to the CICS type 6 SVC. The
default number is 215.
For information on changing the SVC number, see Installing the CICS Type 3
SVC and Selecting the high-performance option in the CICS Transaction Server
for OS/390 Installation Guide. A CICS type 6 SVC with the specified (or default)
number must have been link-edited with the system nucleus.
| SRT={1$|YES|NO|xx}
specifies the system recovery table suffix (see page 227.) For information about
coding the macros for this table, see the CICS Resource Definition Guide
manual.
If SRT=NO is coded, the system recovery program (DFHSRP) does not attempt
to recover from a program check or from an operating system abend. However,
CICS issues ESPIE macros to intercept program checks to perform clean-up
operations before CICS terminates. Therefore, an SRT must be provided if
recovery from either program checks or abnormal terminations, or both, is
required.
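As an illustration, coding SRT=1$ (the default suffix) causes CICS to load the
system recovery table DFHSRT1$.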
| SSLDELAY={600|number}
| specifies the SSL time delay in seconds, in the range 0 through 86400.
START=({AUTO|INITIAL|COLD|STANDBY}[,ALL])
specifies the type of start for the system initialization program. The value
specified for START, or the default of AUTO, becomes the default value for
each resource.
AUTO CICS performs a warm, emergency, cold or initial start, according to the
status of two control records on the global catalog:
v The recovery manager (RM) control record written by the previous
execution of CICS
v The RM autostart override record written by a run of the recovery
manager utility program, DFHRMUTL
Note: If the global catalog does not contain the RM control record:
v If it contains an RM autostart override record with option
AUTOINIT, CICS performs an initial start.
v If it does not contain an RM autostart override record with
option AUTOINIT, CICS does not start.
You may choose to leave the START parameter set to AUTO for all
types of startup other than XRF standby, and use the DFHRMUTL
program to reset the startup mode to COLD or INITIAL when necessary,
using SET_AUTO_START=AUTOCOLD or
SET_AUTO_START=AUTOINIT, respectively. For information about the
DFHRMUTL utility program, see the CICS Operations and Utilities Guide.
INITIAL
The status of CICS resource definitions saved in the global catalog at
the previous shutdown is ignored, and all resource definitions are
reinstalled, either from the CSD or CICS control tables.
You should rarely need to specify START=INITIAL; if you simply want to
reinstall definitions of local resources from the CSD, use START=COLD
instead.
Examples of times when an initial start is necessary are:
v When bringing up a new CICS system for the first time.
v After a serious software failure, when the system log has been
corrupted.
v If the global catalog is cleared or initialized.
v When you want to run CICS with a dummy system log. (If the system
log is defined as a dummy, it is ignored.)
COLD
The status of CICS resource definitions saved in the global catalog at
the previous shutdown is ignored, and all resource definitions (except
those for the system log) are reinstalled, either from the CSD or CICS
control tables.
Resynchronization information in the global catalog relating to remote
systems or to RMI-connected resource managers is preserved. The
CICS system log is scanned during startup, and information regarding
unit of work obligations to remote systems, or to non-CICS resource
managers (such as DB2) connected through the RMI, is preserved.
(That is, any decisions about the outcome of local UOWs, needed to
allow remote systems or RMI resource managers to resynchronize their
resources, are preserved.)
Note that, on a cold start, the following are not preserved:
v Updates to local resources that were not fully committed or backed
out during the previous execution, even if the updates were part of a
distributed unit of work.
v Resynchronization information for remote systems connected by
LU6.1 links, or for earlier releases of CICS systems connected by
MRO.
For more information about the types of CICS startup, see “Classes of start and
restart” on page 325.
STARTER={NO|YES}
specifies whether the generation of starter system modules (with $ and #
suffixes) is permitted, and various MNOTES are to be suppressed. This
parameter should only be used when service is being performed on starter
system modules.
Restrictions You can specify the STARTER parameter in the SIT only.
STATRCD={OFF|ON}
specifies the interval statistics recording status at CICS initialization. This status
is recorded in the CICS global catalog for use during warm and emergency
restarts. Statistics collected are written to the SMF data set.
OFF Interval statistics are not collected (no action is taken at the end of an
interval).
End-of-day, Unsolicited and Requested statistics are written to SMF
regardless of the STATRCD setting.
ON Interval statistics are collected.
On a cold start of a CICS region, interval statistics are recorded by
default at three-hourly intervals. All intervals are timed using the
end-of-day time (midnight is the default) as a base starting time (not
CICS startup time). This means that the default settings give collections
at 00.00, 03.00, 06.00, 09.00, and so on, regardless of the time that
you start CICS.
On a warm or emergency restart the statistics recording status is
restored from the CICS global catalog.
You can change the statistics recording status at any time as follows:
v During a warm or emergency restart by coding the STATRCD system
initialization parameter.
v While CICS is running by using the CEMT or EXEC CICS SET STATISTICS
command.
Whatever the value of the STATRCD system initialization parameter, you can
ask for requested statistics and requested reset statistics to be collected, by
using the CEMT PERFORM STATISTICS command or the EXEC CICS PERFORM
STATISTICS RECORD command.
For information about using these CEMT commands, see the CICS Supplied
Transactions manual. For programming information about the EXEC CICS
PERFORM commands, see the CICS System Programming Reference manual.
| For information about the statistics utility program DFHSTUP, and about the
| sample statistics programs in hlq.SAMPLIB, see the CICS Operations and
| Utilities Guide.
STGPROT={NO|YES}
specifies whether you want storage protection in the CICS region. The
permitted values are NO (the default), or YES:
NO If you specify NO, or allow this parameter to default, CICS does not
operate any storage protection, and runs in a single storage key as in
earlier releases. See Table 37 on page 356 for a summary of how
STGPROT=NO affects the storage allocation for the dynamic storage
areas.
YES If you specify YES, and if you have the required hardware and
software, CICS operates with storage protection, and observes the
storage keys and execution keys that you specify in various system and
resource definitions. See Table 37 on page 356 for a summary of how
STGPROT=YES affects the storage allocation for the dynamic storage
areas.
If you do not have the required hardware and software support, CICS
issues an information message during initialization, and operates
without storage protection.
STGRCVY={NO|YES}
specifies whether CICS should try to recover from a storage violation.
NO CICS does not try to repair any storage violation that it detects.
YES CICS tries to repair any storage violation that it detects.
In both cases, CICS continues unless you have specified in the dump table that
CICS should terminate.
In normal operation, CICS sets up four task-lifetime storage subpools for each
task. Each element in the subpool starts and ends with a ‘check zone’ that
includes the subpool name. At each freemain, and at end-of-task, CICS checks
the check zones and abends the task if either has been overwritten.
Terminal input-output areas (TIOAs) have similar check zones, which are set up
with identical values. At each freemain of a TIOA, CICS checks the check
zones and abends the task if they are not identical.
STNTR={1|(1[,2][,3])|ALL|OFF}
specifies the level of standard tracing that you want for all CICS components.
Note: Before globally activating tracing levels 3 and ALL for the storage
manager (SM) component, read the warning given in the description for the
STNTRxx system initialization parameter.
number
Code the level number(s) for the level of standard tracing you want for
all CICS components. The options are: 1, (1,2), or (1,2,3). The default,
1, specifies standard tracing for level 1 for all CICS components.
ALL Enables standard tracing for all levels.
OFF Disables standard tracing.
For information about the differences between special and standard CICS
tracing, see the CICS Problem Determination Guide.
STNTRxx={1|(1[,2][,3])|ALL|OFF}
specifies the level of standard tracing you require for a particular CICS
component. You identify the component by coding a value for xx in the keyword.
You code one STNTRxx keyword for each component you want to define
selectively. For a CICS component that does not have its standard trace level
set by STNTRxx, the trace level is that set by STNTR (which, in turn,
defaults to 1). You can select up to three levels of tracing, but some CICS
components do not have trace points at all these levels.
The CICS component codes that you can specify for xx on this STNTRxx
keyword are shown in Table 34 on page 291.
number
The level number(s) for the level of standard tracing you want for the
CICS component indicated by xx. The options are: 1, (1,2), or (1,2,3).
ALL You want all the available levels of standard tracing switched on for the
specified component.
OFF Switches off all levels of standard CICS tracing for the CICS component
indicated by xx.
Note: If you select tracing levels 3 or ALL for the storage manager (SM)
component, or the temporary storage domain (TS), the performance of
your CICS system will be degraded. This is because options 3 and ALL
switch on levels of trace that are also used for field engineering
purposes. See the CICS Problem Determination Guide for information
about the effects of trace levels 3 and ALL.
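As an illustration only, coding:
   STNTRSM=1
limits standard tracing for the storage manager (SM) component to level 1,
while other components take the levels set by the STNTR parameter.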
The first 6 characters of the name of the SIT are fixed as DFHSIT. You can
specify the last two characters of the name, using the SUFFIX parameter.
Because the SIT does not have a TYPE=INITIAL macro statement like other
CICS resource control tables, you specify its SUFFIX on the TYPE=CSECT
macro statement.
The suffix allows you to have more than one version of the SIT. Any one or two
characters (other than NO and DY) are valid. You select the version of the table
to be loaded into the system during system initialization by coding SIT=xx,
either in the PARM parameter or the SYSIN data set. (You can, in some
circumstances, specify the SIT using the system console, but this is not
recommended.)
Restrictions You can specify the SUFFIX parameter in the SIT only.
SYDUMAX={999|number}
specifies the limit on the number of system dumps that may be taken per dump
table entry. If this number is exceeded, subsequent system dumps for that
particular entry will be suppressed.
number
A number in the range 0 through 999. The default, 999, enables an
unlimited number of dumps to be taken.
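For example (an illustrative value only), coding SYDUMAX=3 suppresses system
dumps for a given dump table entry after three dumps have been taken for that
entry.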
SYSIDNT={CICS|name}
specifies a 1-to 4-character name that is known only to your CICS region. If
your CICS region also communicates with other CICS regions, the name you
choose for this parameter to identify your local CICS region must not be the
same name as an installed CONNECTION resource definition for a remote
region.
The value for SYSIDNT, whether specified in the SIT or as an override, can
only be updated on a cold start. After a warm start or emergency restart, the
value of SYSIDNT is that specified in the last cold start.
For information about the SYSIDNT of a local CICS region, see the CICS
Intercommunication Guide.
SYSTR={ON|OFF}
specifies the setting of the master system trace flag.
ON The master trace flag is set, causing CICS to write trace entries of
system activity for the individual CICS components. Trace entries are
captured and written only for those components for which a standard
trace level has been set by the STNTR or STNTRxx system initialization
parameters.
OFF The master system trace flag is not set, and standard trace entries are
not written.
Note: Setting the master trace flag OFF affects only standard tracing
and has no effect on special tracing, which is controlled
separately by SPCTR or SPCTRxx trace levels and the CETR
transaction.
See the CICS Problem Determination Guide for more information about
controlling CICS trace.
TAKEOVR={MANUAL|AUTO|COMMAND} (alternate)
Use this parameter in the SIT for an alternate CICS region. It specifies the
action to be taken by the alternate CICS region, following the (apparent) loss of
the surveillance signal in the active CICS region. In doing this, it also specifies
the level of operator involvement.
If both active and alternate CICS regions are running under different MVS
images in the same sysplex, and an MVS failure occurs in the MVS image of
the active CICS region, the TAKEOVR option is overridden.
v If the MVS images are running in a PR/SM environment, CICS XRF takeover
to an alternate CICS region on a separate MVS image completes without the
need for any operator intervention.
v If the MVS images are not running in a PR/SM environment, the CICS
takeover is still initiated automatically, but needs operator intervention to
complete, because XCF outputs a WTOR (IXC402D). Sysplex partitioning
does not complete until the operator replies to this message, and CICS waits
for sysplex partitioning to complete before completing the XRF takeover.
MANUAL
The operator is asked to approve a takeover if the alternate CICS
region cannot detect the surveillance signal of the active CICS region.
The alternate CICS region does not ask the operator for approval if the
active CICS region signs off abnormally, or if there is an operator or
program command for takeover. In these cases, there is no doubt that
the alternate CICS region should take over, and manual involvement by
the operator would be an unnecessary overhead in the takeover
process.
You could use this option, for instance, to ensure manual takeover of a
master or coordinator region in MRO.
AUTO No operator approval, or intervention, is needed for a takeover.
COMMAND
Takeover occurs only when a CEBT PERFORM TAKEOVER command
is received by the alternate CICS region. It ensures, for instance, that a
dependent alternate CICS region, in MRO, is activated only if it receives
the command from the operator, or from a master or coordinator region.
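For example, to ensure that a dependent alternate CICS region in MRO is
activated only on receipt of a CEBT PERFORM TAKEOVER command, you could code:
TAKEOVR=COMMAND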
TBEXITS=([name1][,name2][,name3] [,name4][,name5][,name6])
specifies the names of your backout exit programs for use during emergency
restart.
The order in which you code the names is significant. If you do not want to use
all the exits, code commas in place of the names you omit. For example:
TBEXITS=(,,EXITF,EXITV)
The program names for name1 through name6 apply to global user exit points
as follows:
v name1 and name2 are the names of programs to be invoked at the XRCINIT
and XRCINPT global user exit points (but note that XRCINIT and XRCINPT
are invoked only for user log records).
v name3 is the name of the program to be invoked at the file control backout
failure global user exit point, XFCBFAIL.
v name4 is the name of the program to be invoked at the file control logical
delete global user exit point, XFCLDEL.
v name5 is the name of the program to be invoked at the file control backout
override global user exit point, XFCBOVER.
v name6 is the name of the program to be invoked at the file control backout
global user exit point, XFCBOUT.
This exit is invoked (if required) during backout of a unit of work, regardless of
whether the backout is taking place at emergency restart, or at any other time.
The XFCBFAIL, XFCLDEL, and XFCBOVER global user exit programs are
enabled on all types of CICS start if they are named on the TBEXITS system
initialization parameter.
If no backout exit programs are required, you can do one of the following:
v Omit the TBEXITS system initialization parameter altogether
v Code the parameter as TBEXITS=(,,,,,)
TCAM={NO|YES}
specifies whether TCAM support is to be included.
NO TCAM support is not to be included.
YES TCAM support is to be included.
TCP={YES|NO}
specifies whether the pregenerated non-VTAM terminal control program,
DFHTCP, is to be included.
You must code TCP=YES if you intend using card reader/line printer
(sequential) devices.
| TCPIP={NO|YES}
| specifies whether CICS TCPIP services are to be activated at CICS startup.
| The default is NO, meaning that these services cannot be enabled. If TCPIP is
| set to YES, the HTTP and IIOP services can process work.
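For example, to allow the HTTP and IIOP services to process work in this region,
you could code the following override:
TCPIP=YES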
TCSACTN={NONE|UNBIND|FORCE}
specifies the required action that CICS terminal control should take if the
terminal control shutdown wait threshold expires. For details of the wait
threshold, see the TCSWAIT system initialization parameter. TCSACTN only
takes effect when TCSWAIT is coded with a value in the range 1 through 99.
This parameter applies only to VTAM terminals (including LU Type 6.2
sessions).
TCT={NO|YES|xx}
specifies the suffix of the terminal control table (TCT) to be loaded, if any.
If you reassemble the TCT after starting CICS, any changes are applied when
you next start CICS, even if it is a warm or emergency startup.
If you have VTAM-connected terminals only, you can specify TCT=NO. If you do
this, note that a dummy TCT, called DFHTCTDY, is loaded during system
initialization. For more information about DFHTCTDY, see page 318. (If you
code TCT=NO, you must specify a CSD group list in the GRPLIST parameter.)
TCTUAKEY={USER|CICS}
specifies the storage key for the terminal control table user areas (TCTUAs) if
you are operating CICS with storage protection (STGPROT=YES). The
permitted values are USER (the default), or CICS:
USER CICS obtains the amount of storage for TCTUAs in user key. This
allows a user program executing in any key to modify the TCTUA.
CICS CICS obtains the amount of storage in CICS key. This means that only
programs executing in CICS key can modify the TCTUA, and user-key
programs have read-only access.
See “The terminal control table user areas” on page 354 for more information
about TCTUAs.
TCTUALOC={BELOW|ANY}
specifies where terminal user areas (TCTUA) are to be stored.
BELOW
The TCTUAs are stored below the 16MB line.
ANY The TCTUAs are stored anywhere in virtual storage. CICS stores
TCTUAs above the 16MB line if possible.
For more information about TCTUAs, see “The terminal control table user areas”
on page 354.
For details about defining terminals using RDO, see the CICS Resource
Definition Guide.
TD=({3|decimal-value-1}[,{ 3|decimal-value-2}])
specifies the number of VSAM buffers and strings to be used for intrapartition
transient data (TD).
decimal-value-1
The number of buffers to be allocated for the use of intrapartition
transient data. The value must be in the range 1 through 32 767. The
default value is 3.
CICS obtains, above the 16MB line, storage for the TD buffers in units
of the page size (4KB). Because CICS optimizes the use of the storage
obtained, TD may allocate more buffers than you specify, depending on
the control interval (CI) size you have defined for the intrapartition data
set.
For example, if the CI size is 1536, and you specify 3 buffers (the
default number), CICS actually allocates 5 buffers. This is because 2
pages (8192 bytes) are required to obtain sufficient storage for three
1536-byte buffers, a total of only 4608 bytes, which would leave 3584
bytes of spare storage in the second page. In this case, CICS allocates
another 2 buffers (3072 bytes) to minimize the amount of unused
storage. In this way CICS makes use of storage that would otherwise
be unavailable for any other purpose.
decimal-value-2
The number of VSAM strings to be allocated for the use of intrapartition
transient data. The value must be in the range 1 through 255, and must
not exceed the value specified in decimal-value-1. The default value is
3.
The operands of the TD parameter are positional. You must code commas to
indicate missing operands if others follow. For example, TD=(,2) specifies the
number of strings and allows the number of buffers to default.
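For example, to request eight buffers and five strings for intrapartition transient
data (both values purely illustrative), you could code:
TD=(8,5)
Remember that the number of strings must not exceed the number of buffers.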
TRTABSZ={16|number-of-kilobytes}
specifies the size in kilobytes of the internal trace table. (1KB = 1024 bytes.)
Trace entries are of variable lengths, but the average length is approximately
100 bytes.
Note: To switch on internal tracing, use the INTTR parameter; for a description
of INTTR, see page 264.
| TRTRANSZ={16|number-of-kilobytes}
specifies the size in kilobytes of the transaction dump trace table. (1KB = 1024
bytes.)
The operands of the TS parameter are positional. You must code commas to
indicate missing operands if others follow. For example, TS=(,8) specifies the
number of buffers and allows the other operands to default.
TST={NO|YES|xx}
specifies the temporary storage table suffix. (See page 227.)
For information about coding the macros for this table, see the CICS Resource
Definition Guide
UDSASZE={0K|number}
specifies the size of the UDSA. The default size is 0, indicating that the DSA
size can change dynamically. A non-zero value indicates that the DSA size is
fixed.
number
specify number as an amount of storage in the range 0 to 16777215
bytes in multiples of 262144 bytes (256KB). If the size specified is not a
multiple of 256KB (or 1MB if transaction isolation is active), CICS
rounds the value up to the next multiple.
You can specify number in bytes (for example, 4194304), or as a whole
number of kilobytes (for example, 4096K), or a whole number of
megabytes (for example, 4M).
The value you code can be from 1 to 8 characters long, and must consist of
uppercase letters (A through Z), or numbers in the range 0 through 9. The first
character must be a letter.
USERTR={ON|OFF}
specifies whether the master user trace flag is to be set on or off. If the user
trace flag is off, the user trace facility is disabled, and EXEC CICS ENTER
TRACENUM commands receive an INVREQ condition if EXCEPTION is not
specified. If the program does not handle this condition the transaction will
abend AEIP.
For programming information about the user trace facility using EXEC CICS
ENTER TRACENUM commands, see the CICS Application Programming
Reference manual.
USRDELAY={30|number}
specifies the maximum time, in the range 0 through 10080 minutes (up to 7
days), that an eligible userid and its associated attributes are to be retained in
the user table if the userid is unused. An entry in the user table for a userid that
is retained during the delay period can be reused.
The userids eligible for reuse within the USRDELAY period are any that are:
v Received from remote systems.
v Specified on SECURITYNAME in CONNECTION definitions.
v Specified on USERID in SESSIONS definitions.
v Specified on USERID in the definition of an intrapartition transient data
queue.
v Specified on USERID on START commands.
Within the USRDELAY period, a userid in any one of these categories can be
reused in one of the other categories, provided the request for reuse is qualified
| with the same qualifiers. If a userid is qualified by a different group id, APPLID,
| or terminal id, a retained entry is not reused (except when changing the
| terminal ID on LU6.2 when the retained entry is used).
If a userid is unused for more than the USRDELAY limit, it is removed from the
system, and the message DFHUS0200 is issued. You can suppress this
message in an XMEOUT global user exit program. If you specify
USRDELAY=0, all eligible userids are deleted immediately after use, and the
message DFHUS0200 is not issued. Do not code USRDELAY=0 if this CICS
region communicates with other CICS regions and:
v ATTACHSEC=IDENTIFY is specified on the CONNECTION definitions for the
connections used,
and
v The connections used carry high volumes of transaction routing or function
shipping activity.
You should specify a value that gives the optimum level of performance for your
CICS environment.
Note: If a value, other than 0, is specified for USRDELAY, the ability to change
the user’s attributes or revoke the userid becomes more difficult because
the userid and its attributes are retained in the region until the
USRDELAY value has expired. For example, if you have specified
USRDELAY=30 for a userid, but that userid continues to run transactions
every 25 minutes, the USRDELAY value will never expire and any
changes made to the userid will never come into effect.
For more information about the use of USRDELAY, see the CICS Performance
Guide .
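For example, in a region that receives frequent transaction routing or function
shipping requests, you could retain unused userids for an hour by coding the
following (the value is illustrative; tune it for your own workload):
USRDELAY=60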
VTAM={YES|NO}
specifies whether the VTAM access method is to be used. The default is
VTAM=YES.
VTPREFIX={\|character}
specifies the first character to be used for the terminal identifiers (termids) of
autoinstalled virtual terminals. Virtual terminals are used by the External
Presentation Interface (EPI) and terminal emulator functions of the CICS Client
products.
By specifying a prefix, you can ensure that the termids of Client terminals
autoinstalled on this system are unique in your transaction routing network. This
prevents the conflicts that could occur if two or more terminal-owning regions
(TORs) ship definitions of Client virtual terminals to the same application-owning
region (AOR).
For further information about Client virtual terminals, see the CICS
Intercommunication Guide manual.
WEBDELAY=({5|time_out},{60|keep_time})
Specifies two Web delay periods:
1. A time-out period. The maximum time, in minutes, in the range 1-60, that a
transaction started through the Web 3270 bridge interface, is allowed to
remain in terminal wait state before it is automatically purged by CICS.
2. The terminal keep time. The time, in minutes, in the range 1-6000, during
which state data is kept for a CICS Web 3270 bridge transaction, before
CICS performs clean-up.
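For example, to purge Web 3270 bridge transactions after ten minutes in terminal
wait state, and to keep their state data for two hours, you could code (values
purely illustrative):
WEBDELAY=(10,120)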
WRKAREA={512|number}
specifies the number of bytes to be allocated to the common work area (CWA).
This area, for use by your installation, is initially set to binary zeros, and is
available to all programs. It is not used by CICS. The maximum size for the
work area is 3584 bytes.
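For example, to reserve a 1024-byte common work area for your installation's
programs (the size is illustrative; any value up to 3584 bytes is allowed), you
could code:
WRKAREA=1024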
XAPPC={NO|YES}
specifies whether RACF session security can be used when establishing APPC
sessions.
NO RACF session security cannot be used. Only the BINDPASSWORD
(defined to CICS for an APPC connection) is checked.
YES RACF session security can be used.
If you specify BINDSECURITY=YES for a particular APPC connection,
a request to RACF is issued to extract the security profile. If the profile
exists, it is used to bind the session. If it does not exist, only the
BINDPASSWORD (defined to CICS for the connection) is checked.
Restrictions You can specify the XAPPC parameter in the SIT, PARM, or
SYSIN only.
XCMD={YES|name|NO}
specifies whether you want CICS to perform command security checking, and
optionally specifies the RACF resource class name in which you have defined
the command security profiles.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the CMDSEC(YES) option
on the transaction resource definition.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
YES CICS calls RACF, using the default class name of CICSCMD prefixed
by C or V, to check whether the userid associated with a transaction is
authorized to use a CICS command for the specified resource. The
resource class name is CCICSCMD and the grouping class name is
VCICSCMD.
name CICS calls RACF, using the specified resource class name prefixed by
C or V, to verify that the userid associated with a transaction is
authorized to use a CICS command for the specified resource. The
resource class name is Cname and the grouping class name is Vname.
The resource class name specified must be 1 through 7 characters.
NO CICS does not perform any command security checks, allowing any
user to use commands that would be subject to those checks.
Restrictions You can specify the XCMD parameter in the SIT, PARM, or SYSIN
only.
XDB2={NO|name}
specifies whether you want CICS to perform DB2ENTRY security checking.
NO CICS does not perform any DB2 resource security checks.
name CICS calls RACF, using the specified general resource class name, to
check whether the userid associated with the CICS DB2 transaction is
authorized to access the DB2ENTRY referenced by the transaction.
Unlike the other Xaaa system initialization parameters, this DB2 security
parameter does not provide a YES option that implies a default CICS
resource class name for DB2ENTRY resources. You have to specify
your own DB2 resource class name.
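For example, if you have defined your DB2ENTRY security profiles in a RACF
general resource class of your own named DB2ENT (an illustrative name), you
could code:
XDB2=DB2ENT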
XDCT={YES|name|NO}
specifies whether you want CICS to perform transient data resource security
checking. If you specify YES or a RACF resource class name, CICS calls
RACF to verify that the userid associated with a transaction is authorized to
access the transient data destination. Such checking is performed every time a
transaction tries to access a transient data destination.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the transaction resource definition.
Restrictions You can specify the XDCT parameter in the SIT, PARM, or SYSIN
only.
XFCT={YES|name|NO}
specifies whether you want CICS to perform file resource security checking, and
optionally specifies the RACF resource class name in which you have defined
the file resource security profiles. If you specify YES, or a RACF resource class
name, CICS calls RACF to verify that the userid associated with a transaction is
authorized to access File Control-managed files. Such checking is performed
every time a transaction tries to access a file managed by CICS File Control.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the resource definitions.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
YES CICS calls RACF, using the default CICS resource class name of
CICSFCT prefixed by F or H, to verify that the userid associated with a
transaction is authorized to access files referenced by the transaction.
The resource class name is FCICSFCT and the grouping class name is
HCICSFCT.
name CICS calls RACF, using the specified resource class name, to verify
that the userid associated with a transaction is authorized to access
files referenced by the transaction. The resource class name is Fname
and the grouping class name is Hname.
The resource class name specified must be 1 through 7 characters.
NO CICS does not perform any file resource security checks, allowing any
user to access any file.
Restrictions You can specify the XFCT parameter in the SIT, PARM, or SYSIN
only.
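For example, if your file security profiles are defined in installation-defined
resource classes FUSRFCT and HUSRFCT (illustrative names), you could code:
XFCT=USRFCT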
XJCT={YES|name|NO}
specifies whether you want CICS to perform journal resource security checking.
If you specify YES, or a RACF resource class name, CICS calls RACF to verify
that the userid associated with a transaction is authorized to access the
journals referenced by the transaction.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the resource definitions.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
YES CICS calls RACF using the default CICS resource class name of
CICSJCT prefixed by a J or K, to check whether the userid associated
with a transaction is authorized to access CICS journals referenced by
the transaction. The resource class name is JCICSJCT and the
grouping class name is KCICSJCT.
name CICS calls RACF, using the specified resource class name prefixed by
J or K, to verify that the userid associated with a transaction is
authorized to access CICS journals.
The resource class name specified must be 1 through 7 characters.
NO CICS does not perform any journal resource security checks, allowing
any user to access any CICS journal.
Restrictions You can specify the XJCT parameter in the SIT, PARM, or SYSIN
only.
XLT={NO|xx|YES}
specifies a suffix for the transaction list table. (See page 227.) The table
contains a list of transactions that can be attached during the first quiesce stage
of system termination.
YES The default transaction list table, DFHXLT, is used.
xx The transaction list table DFHXLTxx is used.
NO A transaction list table is not used.
For guidance information about coding the macros for this table, see the CICS
Resource Definition Guide
XPCT={YES|name|NO}
specifies whether you want CICS to perform started transaction resource
security checking, and optionally specifies the name of the RACF resource
class name in which you have defined the started task security profiles. If you
specify YES, or a RACF resource class name, CICS calls RACF to verify that
the userid associated with a transaction is authorized to use started
transactions and related EXEC CICS commands. Such checking is performed
every time a transaction tries to use a started transaction or one of the EXEC
CICS commands: COLLECT STATISTICS TRANSACTION, DISCARD
TRANSACTION, INQUIRE TRANSACTION, or SET TRANSACTION.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the resource definitions.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
Restrictions You can specify the XPCT parameter in the SIT, PARM, or SYSIN
only.
XPPT={YES|name|NO}
specifies that CICS is to perform application program resource security checks,
and optionally specifies the RACF resource class name in which you have
defined the program resource security profiles. Such checking is performed
every time a transaction tries to invoke another program by using one of the
CICS commands: LINK, LOAD, or XCTL.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the resource definitions.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
YES CICS calls RACF, using the default resource class name prefixed by M
or N, to verify that the userid associated with a transaction is authorized
to use LINK, LOAD, or XCTL commands to invoke other programs. The
resource class name is MCICSPPT and the grouping class name is
NCICSPPT.
name CICS calls RACF, with the specified resource class name prefixed by M
or N, to verify that the userid associated with a transaction is authorized
to use LINK, LOAD, or XCTL commands to invoke other programs. The
resource class name is Mname and the grouping class name is Nname.
The resource class name specified must be 1 through 7 characters.
NO CICS does not perform any application program authority checks,
allowing any user to use LINK, LOAD, or XCTL commands to invoke
other programs.
Restrictions You can specify the XPPT parameter in the SIT, PARM, or SYSIN
only.
XPSB={YES|name|NO}
specifies whether you want CICS to perform program specification block (PSB)
security checking, and optionally specifies the RACF resource class name in
which you have defined the PSB security profiles.
For information about preparing for and using security with CICS, see the CICS
RACF Security Guide.
YES CICS calls RACF, using the default resource class name CICSPSB
prefixed by P or Q, to verify that the userid associated with a
transaction is authorized to access PSBs. The resource class name is
PCICSPSB and the grouping class name is QCICSPSB.
name CICS calls RACF, using the specified resource class name prefixed by
P or Q, to verify that the userid associated with a transaction is
authorized to access PSBs. The resource class name is Pname and the
grouping class name is Qname.
The resource class name specified must be 1 through 7 characters.
NO CICS does not perform any PSB resource security checks, allowing any
user to access any PSB.
Restrictions You can specify the XPSB parameter in the SIT, PARM, or SYSIN
only.
XRF={NO|YES} (active and alternate)
specifies whether XRF support is to be included in the CICS region. If the CICS
region is started with the START=STANDBY system initialization parameter
specified, the CICS region is the alternate CICS region. If the CICS region is
started with the START=AUTO, START=INITIAL or START=COLD system
initialization parameter specified, the CICS region is the active CICS region.
The active CICS region signs on as such to the CICS availability manager. For
background information about XRF, see the CICS/ESA 3.3 XRF Guide .
XRFSOFF={NOFORCE|FORCE}
specifies whether all users signed-on to the active CICS region are to remain
signed-on following a takeover. This parameter is only applicable if you also
code XRF=YES as a system initialization parameter.
NOFORCE
Allow CICS to determine sign-off according to the option set in either of:
v The CICS segment of the RACF database
v The TYPETERM definition for the user’s terminal
FORCE
All users signed on to the active CICS region are signed off following a
takeover.
If you have specified NOFORCE in the RACF database, and in the terminals’
TYPETERM definitions, and the takeover takes longer than the time specified on
the XRFSTME parameter, all users who are still signed-on after takeover are
signed off.
XRFSTME={5|decimal-value}
specifies a time-out delay interval, in minutes, for users signed on to the
active CICS region when an XRF takeover occurs.
5 Five minutes is the default value in the DFHSIT macro.
decimal-value
A value in the range 0 through 60 for the number of minutes CICS
permits users to remain signed on during the takeover period. The
takeover period is the time from when the takeover is initiated to the
time at which CICS is ready to process user transactions. If the
takeover takes longer than the specified period, all users signed-on at
the time the takeover was initiated are signed-off.
A value of 0 specifies that there is no time-out delay, and terminals are
signed off as soon as takeover commences, which means that
XRFSTME=0 has the same effect as coding XRFSOFF=FORCE.
For non-XRF-capable terminals, take into account any AUTCONN delay period
when setting the value for XRFSTME. (See the description of the AUTCONN
parameter on page 233.) You may need to increase the XRFSTME value to
allow for the delay to the start of the CXRE transaction imposed by the
AUTCONN parameter; otherwise, terminals may be signed-off too early. For
example, both the AUTCONN delay period and the XRFSTME period begin when the
alternate CICS region initiates takeover, and control is given to the alternate
CICS region some time later; provided the AUTCONN delay period ends before the
XRFSTME period expires, the CXRE transaction can begin to reacquire terminals
before XRFSTME expires and users are signed off.
XTRAN={YES|name|NO}
specifies whether you want CICS to perform transaction-attach security
checking, and optionally specifies the RACF resource class name in which you
have defined the transaction security profiles. If you specify YES, or a RACF
resource class name, CICS calls RACF to verify that the userid associated with
the transaction is permitted to run the transaction.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter and specified the RESSEC(YES) option
on the resource definitions.
YES CICS calls RACF, using the default CICS resource class name of
CICSTRN prefixed by T or G, to verify that the userid associated
with the transaction is authorized to run the transaction. The
resource class name is TCICSTRN and the grouping class name
is GCICSTRN.
name CICS calls RACF, using the specified resource class name
prefixed by T or G, to verify that the userid associated with the
transaction is authorized to run the transaction. The resource
class name is Tname and the corresponding grouping class
name is Gname.
The name specified must be 1 through 7 characters.
NO CICS does not perform any transaction-attach security checks,
allowing any user to run any transaction.
Note: The checking is performed only if you have specified YES for the SEC
system initialization parameter.
Restrictions You can specify the XTRAN parameter in the SIT, PARM, or
SYSIN only.
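For example, to use your own transaction security classes TUSRTRN and GUSRTRN
(illustrative names) rather than the defaults, you could code the following,
together with SEC=YES:
XTRAN=USRTRN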
XTST={YES|name|NO}
specifies whether you want CICS to perform temporary storage security
checking, and optionally specifies the RACF resource class name in which you
have defined the temporary storage security profiles. If you specify YES, or a
RACF resource class name, CICS calls RACF to verify that the userid
associated with a temporary storage request is authorized to access the
referenced temporary storage queue.
Restrictions You can specify the XTST parameter in the SIT, PARM, or SYSIN
only.
XUSER={YES|NO}
specifies whether CICS is to perform surrogate user checks.
YES CICS is to perform surrogate user checking in all those situations that
permit such checks to be made (for example, on EXEC CICS START
commands without an associated terminal). Surrogate user security
checking is also performed by CICS against userids installing or
modifying DB2 resource definitions that specify AUTHID or
COMAUTHID.
NO CICS does not perform surrogate user checking.
Restrictions You can specify the XUSER parameter in the SIT, PARM, or
SYSIN only.
You can also specify that a program is not needed (see “Excluding unwanted
programs” for details).
You can use these methods only for the programs referred to in this section and in
“Excluding unwanted programs”, by coding system initialization parameters.
Specifying programname=NO
If you code programname=NO as a system initialization parameter, (for example,
DIP=NO), you exclude the named management program at CICS system
initialization.
Note: In the case of DIP, you get a dummy version of the management program,
which is supplied on the distribution tape with a suffix of DY.
The system recovery table (SRT) can be used in this way, and the associated
system recovery program (SRP) will be excluded.
When you specify TCT=NO, CICS loads a dummy TCT named DFHTCTDY. A
pregenerated dummy table of this name is supplied in
CICSTS13.CICS.SDFHLOAD, and the source statements of DFHTCTDY are
supplied in CICSTS13.CICS.SDFHSAMP. If you specify TCT=NO, a generated table
of this name must be available in a library of the DFHRPL concatenation when you
start CICS.
The dummy TCT provides only the CICS and VTAM control blocks that you need if
you are using VTAM terminals and using the CSD for storing terminal definitions.
You define your VTAM terminals using the RDO transaction, CEDA, or the DEFINE
command of the CSD batch utility program, DFHCSDUP.
Specifying function=NO
If you code function=NO as a system initialization parameter (for example,
XRF=NO), you exclude the management program associated with the named
function at CICS system initialization.
You can exclude intersystem communication (ISC), the 3270 print-request facility,
the system spooling interface, TCAM support, or the extended recovery facility
(XRF), in this way.
You can modify many of the system initialization parameters dynamically at the
beginning of CICS initialization by providing system initialization parameters in the
startup job stream, or through the system console. There are also some system
initialization parameters that you cannot code in the SIT, and can only supply at
startup time. You specify system initialization parameters at startup time in any of
three ways:
1. In the PARM parameter of the EXEC PGM=DFHSIP statement
2. In the SYSIN data set defined in the startup job stream
3. Through the system operator’s console
You can use just one of these methods, or two, or all three. However, parameter
manager domain processes these three sources of input in strict sequence, as
follows:
1. The PARM parameter
2. The SYSIN data set (but only if SYSIN is coded in the PARM parameter; see
page 321)
3. The console (but only if CONSOLE is coded in either the PARM parameter or in
the SYSIN data set; see page 321)
Note: If you supply duplicate system initialization parameters, either through the
same or a different medium, CICS takes the last one that it reads. For
example, if you specify MCT=1$ in the PARM parameter, MCT=2$ in the
SYSIN data set, and finally enter MCT=3$ through the console, CICS loads
DFHMCT3$.
For details of the parameters that are ignored when you specify NEWSIT=YES,
see the NEWSIT parameter description on page 275.
Note: The trace domain is an exception to the above rules in that it always cold
starts. Trace does not save its status at CICS shutdown like the other
domains, and regardless of the type of startup, it requests all of its system
initialization parameters from the parameter manager domain.
Where to code SYSIN: You can code SYSIN (or SI) only in the PARM
parameter of the EXEC PGM=DFHSIP statement. The keyword can appear
once only and must be at the end of the PARM parameter. CICS does not read
SYSIN until it has finished scanning all of the PARM parameter, or until it
reaches a .END before the end of the PARM parameter. (See .END on page
321.)
Examples:
//stepname EXEC PGM=DFHSIP,PARM='SIT=6$,SYSIN,.END'
//stepname EXEC PGM=DFHSIP,PARM='SIT=6$,DLI=YES,SYSIN,.END'
CONSOLE (CN)
This keyword tells CICS to read initialization parameters from the console. CICS
prompts you with message DFHPA1104 when it is ready to read parameters
from the console.
Where to code CONSOLE: You can code CONSOLE (or CN) in the PARM
parameter of the EXEC PGM=DFHSIP statement or in the SYSIN data set. This
keyword can appear either at the end of the PARM parameter or in the SYSIN
data set, but code it in one place only.
If you code CONSOLE (or CN) in the PARM parameter, and PARM also
contains the SYSIN keyword, CICS does not begin reading parameters from the
console until it has finished reading and processing the SYSIN data set.
Similarly, wherever you place the CONSOLE keyword in the SYSIN data set,
CICS does not begin reading parameters from the console until it has finished
reading and processing the SYSIN data set.
Examples:
//stepname EXEC PGM=DFHSIP,PARM='SIT=6$,CONSOLE,.END'
//stepname EXEC PGM=DFHSIP,PARM='CONSOLE,SYSIN,.END'
//stepname EXEC PGM=DFHSIP,PARM='SIT=6$,CN,SI,.END'
Note:
If both SYSIN (or SI) and CONSOLE (or CN) appear as keywords of the PARM
parameter, the order in which they are coded is irrelevant as long as no other
keywords, other than .END, follow them.
.END
The meaning of this keyword varies, depending on its context:
Context
Explanation
PARM The use of the .END keyword is optional in the PARM parameter. If you
omit it, CICS assumes it to be at the end of the PARM parameter. If you
code .END in the PARM parameter it can have one of two meanings:
1. If you also code one, or both, of the other control keywords
(CONSOLE and/or SYSIN), .END denotes the end of the PARM
parameter only.
2. If you do not code CONSOLE or SYSIN, .END denotes the end of all
CICS system initialization parameters, and CICS begins the
initialization process.
If .END is not the last entry in the PARM parameter, CICS truncates the
PARM parameter and the parameters following the .END keyword are
lost.
SYSIN The use of the .END keyword is optional in the SYSIN data set. If you
omit it, CICS assumes it to be at the end of SYSIN. If you code .END in
the SYSIN data set its meaning depends on your use of the CONSOLE
keyword, as follows:
v If you code the CONSOLE control keyword in the PARM parameter
or in the SYSIN data set, .END denotes the end of the SYSIN data
set only.
v If you do not code the CONSOLE control keyword in the PARM
parameter or in the SYSIN data set, .END denotes the end of all
CICS system initialization parameters, and CICS begins the
initialization process.
If you code .END, and it is not the last entry in the SYSIN data set, or
not at the end of a SYSIN record, CICS initialization parameters
following the .END are ignored. To avoid accidental loss of initialization
parameters, ensure that the .END keyword is on the last record in the
SYSIN data set, and that it is the only entry on that line. (However, if
you want to remove some system initialization parameters from a
particular run of CICS, you could position them after the .END
statement just for that run.)
The following example shows the use of .END in a SYSIN data set:
//SYSIN DD *
* CICS system initialization parameters
SIT=6$,START=COLD,
* XRF=NO,              (No XRF this run - SIT defines XRF=YES
PDIR=1$,               (SUFFIX of PSB directory
.
.
.
.END
/*
CONSOLE
The meaning of .END through the console depends on whether you are
entering new parameters or entering corrections. The two meanings are
as follows:
1. If you are keying new parameters in response to message
DFHPA1104, .END terminates parameter reading, and CICS starts
initialization according to the SIT it has loaded, but modified by any
system initialization parameters you have supplied. Until you enter
the .END control keyword, CICS continues to prompt you for system
initialization parameters.
2. If you have coded PARMERR=INTERACT, and CICS detects a
parameter error, either in the keyword or in the value that you have
assigned to it, CICS prompts you to correct the error with message
DFHPA1912 or DFHPA1915. If you enter the correct keyword or value,
CICS resumes processing the remaining system initialization parameters.
CICS scans the PARM string looking for a SIT= parameter, any of the special
control keywords, or any system initialization parameters, and proceeds as follows:
v If CICS finds a SIT= parameter but no SYSIN keyword, CICS tries to load the
SIT as soon as it has finished scanning the PARM parameter. Processing any
CICS system initialization parameters that are also present in the PARM
parameter takes place only after the SIT has been loaded.
v If CICS finds a SIT= parameter and also a SYSIN keyword, CICS does not try to
load the SIT until it has also finished scanning the SYSIN data set. In this case,
loading the SIT is deferred because there can be other SIT= parameters coded
in the SYSIN data set that override the one in the PARM parameter.
Processing any system initialization parameters that are also present in the
PARM parameter takes place only after the SIT has been loaded.
If CICS finds a SIT= parameter in SYSIN, it tries to load that SIT, overriding any
that was specified in the PARM parameter. If CICS does not find a SIT= parameter
in SYSIN, it tries to load any SIT specified in the PARM parameter.
However, if after scanning the PARM parameter and the SYSIN data set CICS has
not found a SIT= parameter, CICS does one of the following:
1. If you specified CONSOLE in the PARM parameter or in the SYSIN data set,
CICS prompts you with the following message to enter the SIT suffix as the first
parameter through the console:
Note: CICS does not process any system initialization parameters that are coded
in the PARM parameter and the SYSIN data set until after the SIT has been
loaded.
You can use apostrophes to punctuate message text, provided that you code two
successive apostrophes to represent a single apostrophe (as shown in the
example above). The apostrophes delimiting the text are mandatory.
v You must take care when coding parameters that use apostrophes, parentheses,
or commas as delimiters, because failure to include the correct delimiters is likely
to cause unpredictable results.
You can specify a SIT= parameter only as the first parameter through the console
when prompted by message DFHPA1921, at which point CICS tries to load the
specified SIT. If you try to specify a SIT= parameter after CICS has loaded the SIT
it is rejected as an error.
You can enter as many initialization parameters as you can get on one line of the
console, but you must use a comma to separate parameters. CICS continues to
prompt for system initialization parameters with displays of message DFHPA1105
until you terminate console input by entering the .END control keyword.
CICS prompts you to enter corrections to any errors it finds in the PARM parameter
or the SYSIN data set after it has loaded the SIT, and as each error is detected.
This means that if there is an APPLID parameter following the parameter that is in
error, either in the PARM parameter or in the SYSIN data set, it is the APPLID
coded in the SIT that CICS displays in messages DFHPA1912 and DFHPA1915.
If you run CICS with START=AUTO, and a warm or emergency restart results,
CICS restores all the installed resource definitions as they were at normal CICS
shutdown, or at the time of system failure. The general rule is that you cannot alter
installed resource definitions during a restart except by coding START=COLD or
START=INITIAL. For details of the results of the possible combinations of CICS
restart-type and global catalog state, see “The START system initialization
parameter”.
The CICS domains also use the global catalog to save their domain status between
runs. In some cases this information can be overridden during a restart by
supplying system initialization parameters. For example, CICS monitoring uses the
cataloged status at a restart, but modified by any monitoring system initialization
parameters you supply. In other cases the domain information saved in the catalog
is always used in a restart.
For example, CICS statistics interval time is always restored from the catalog in a
warm or emergency restart, because the statistics domain does not have this as a
system initialization parameter. To change this you must use CEMT or EXEC CICS
commands after control is given to CICS. Alternatively, you can enforce system
defaults by performing a cold start.
Note: If you need to reinitialize the global catalog for any reason, you must also
reinitialize the local catalog.
The local catalog: The CICS domains use the local catalog to save some of their
information between CICS runs. If you delete and redefine the local catalog, you
must:
v Initialize the local catalog with an initial set of domain records.
| v Use the CICS-supplied utility program, DFHSMUTL, to re-add records to enable
| the CICS self-tuning mechanism for storage manager domain subpools. For
| details of how to do this see the CICS Operations and Utilities Guide.
v Delete and reinitialize the global catalog.
For more information about initializing the local catalog, see “The local catalog” on
page 165. Some of the information that is saved in the local catalog can be
overridden at CICS system initialization by system initialization parameters, such as
CICS transaction dump data set status.
Note: If you need to reinitialize the local catalog for any reason, you must also
reinitialize the global catalog.
If you set CICS to perform an initial start, you should reinitialize the
local catalog before bringing up CICS.
2. Cold start
CICS performs a cold start in the following cases:
v The recovery manager control record specifies a cold start. (This can
happen if a previous cold start did not complete.)
v There is both a recovery manager control record (which specifies
anything other than an initial start) and an autostart override record
that specifies AUTOCOLD.
Log records for local resources are purged and resource definitions
rebuilt from the CSD or CICS control tables. Units of work on other
systems are resynchronized with this system, as described under
START=COLD.
3. Warm start
If the recovery manager control record indicates that the previous run of
CICS terminated normally with a successful warm keypoint, CICS
performs a warm restart—unless the autostart override record specifies
AUTOINIT or AUTOCOLD, in which case an initial or cold start is
performed.
For the warm restart to be successful, the local catalog must contain
the information saved by the CICS domains during the previous
execution.
A warm start restores CICS to the state it was in at the previous
shutdown.
You can modify a warm restart by coding the NEWSIT system
initialization parameter. This has the effect of enforcing the system
initialization parameters coded in the SIT, overriding any cataloged
status from the previous CICS shutdown.
The exceptions to this are the system initialization parameters DCT,
FCT, the CSDxxxxx group (for example CSDACC), and GRPLIST,
which are always ignored in a warm restart, even if you specify
NEWSIT=YES. Specifying NEWSIT=YES causes, in effect, a partial
cold start.
4. Emergency start
If the control record in the global catalog indicates that the previous run
of CICS terminated in an immediate or uncontrolled shutdown, CICS
performs an emergency restart.
START=AUTO should be the normal mode of operation, with the choice of start
being made by CICS automatically. Use the recovery manager utility program,
DFHRMUTL, to set overrides.
Note: The global catalog and system log are initialized, and all information in
them is lost. Because recovery information for remote systems is not
preserved, damage may be done to distributed units of work.
When it takes over, the alternate CICS region becomes the active CICS region.
For information about operating a CICS region with XRF, see the CICS
Operations and Utilities Guide.
Table 35 shows how the effect of the START parameter depends on the state of the
CICS global catalog and system log.
Table 35. Effect of the START= parameter in conjunction with the global catalog and system
log

START    State of global catalog        State of        Result at restart
parm.                                   system log
Any      Not defined to VSAM.           Any.            JCL error.
INITIAL  Defined.                       Any.            CICS performs an initial start.
                                                        The global catalog and system
                                                        log 7 are initialized.
COLD     Defined but contains no        Any.            After prompting for confirmation,
         recovery manager control                       CICS performs an initial start.
         record.                                        The global catalog and system
                                                        log 7 are initialized.
COLD     Contains recovery manager      Not defined or  Message DFHRM0401 is issued.
         records.                       dummy or        Startup fails.
                                        empty.
Notes:
1. It is important to keep the CICS global and local catalogs in step. If CICS tries
to perform a warm or emergency start and finds that the local catalog has been
initialized, startup fails. Therefore, only initialize the local catalog at the same
time as the global catalog.
2. It is recommended that you always run the DFHRMUTL and DFHCCUTL utilities
in the same job. Run DFHRMUTL first and check its return code before running
DFHCCUTL. If you do this, the global and local catalogs should never get out of
step. For information about running DFHRMUTL and DFHCCUTL, see the CICS
Operations and Utilities Guide.
Table 36 on page 331 shows the effect of different types of CICS startup on the
CICS trace, monitoring, statistics, and dump domains.
Although the MODIFY NET, USERVAR command is only significant when you are
running CICS with XRF, the USERVAR message occurs for both XRF=YES and
XRF=NO CICS systems. If you receive messages DFHSI1589D and DFHSI1572,
and if the CICS region is not initializing as an alternate CICS region, you can start
the CICS-VTAM session manually when VTAM is eventually started, by means of
the CEMT SET VTAM OPEN command from a supported MVS console or a
non-VTAM terminal.
This may be caused by an error in the value of the APPLID operand, in which case you
must correct the error and restart CICS. For information about other causes and
actions, see the CICS Messages and Codes manual.
Because VTAM and the alternate CICS region may be initialized concurrently, it is
possible that several tries may have to be made to open the VTAM ACB. If VTAM is
not active, the following message is written to the system console every 15
seconds:
DFHSI1589D 'applid' VTAM is not currently active.
If VTAM is active, but CICS cannot open the VTAM ACB, the following messages
are written to the system console:
+DFHSI1572 'applid' Unable to OPEN VTAM ACB - RC=xxxxxxxx, ACB CODE=yy.
DFHSI1590 'applid' XRF alternate cannot proceed without VTAM.
When the startup process is completed, users are able to enter transactions from
any terminals that are connected to CICS. For information about the CICS-supplied
transactions, see the CICS Supplied Transactions manual.
You can edit this file with TSO to change the default values. The user-replaceable
module DFHJVMAT can also be called at JVM initialization to examine and reset the
values. See the CICS Customization Guide for a description of DFHJVMAT.
Where:
CHECKSOURCE
Tells the JVM to check the source file and .class file. If the .class file is out of
date, the JVM recompiles the source.
CICS_HOME
specifies the HFS directory that is used by the CICS JVM interface when
creating stdin, stdout and stderr files. A period (.) is defined in SDFHENV,
which means that the current directory will be used. If CICS_HOME is not set at
all, then /tmp will be used as the directory.
Note: The initial process thread (IPT) stack is governed by the STACK run-time
option on the DFHCJVM C program that invokes the JVM. This program
sets the following value:
#pragma runopts(STACK(64K,16K,ANYWHERE,KEEP))
STDERR
specifies the name of the HFS file to be used for stderr. The default shipped in
SDFHENV is dfhjvmerr. The file will be created if it does not exist. If the file
already exists, output is appended at the end of the file. On completion of the
JVM program, if the stderr file is empty, it is deleted.
Chapter 24. CICS startup
This chapter describes how to start up a CICS region. Depending on your
system environment, you can start the CICS job from a procedure by using the
START command, or you can submit the CICS startup job stream through the
internal reader. This chapter gives an example of each of these methods. For an
example of a batch job that you can submit through the internal reader, see “A
sample CICS startup job” on page 339. “A sample CICS startup procedure” on
page 360 gives an example of a cataloged procedure suitable for starting CICS as
a started task.
When you run the startup job, you start a process called CICS system
initialization. This process must finish before you run any transactions. Completion
of CICS initialization is shown by the following message at the system console:
DFHSI1517 - applid: Control is being given to CICS.
Also, if you are operating CICS with CICS recovery options, backout procedures
may be used to restore recoverable resources to a logically consistent state. Briefly,
backout occurs if you start CICS in one of the following ways:
v With START=AUTO and CICS detects that the previous shutdown was
immediate or uncontrolled. If a shutdown transaction (SDTRAN) is used, an
immediate shutdown does not always leave in-flight units of work to be backed
out. Also, even if there were no
in-flight UOWs, it is possible (although rare) that there were backout-failed UOWs
for which backout will be retried.
v With START=STANDBY and XRF=YES, and a takeover occurs.
For background information about backout, and recovery and restart, see the CICS
Recovery and Restart Guide.
If you are running CICS with DB2, you can specify the resource control table suffix
and DB2 subsystem ID to be used at startup by the INITPARM system initialization
parameter, as follows:
INITPARM=(DFHD2INI='xx,yyyy')
where xx is the 2-character resource control table suffix and yyyy is the 4-character
DB2 subsystem ID. Both values must conform to MVS JCL rules about special
characters. If you specify a DB2 subsystem ID, it is used at PLT startup (with the
resource control table suffix specified).
For information about setting up the RCT, see the CICS DB2 Guide.
The sample startup job stream is based on the system initialization parameters
contained in the CICS-supplied sample table, DFHSIT6$.
For more information about the DD statements in this job stream that are needed
by CICS and IMS, see the appropriate chapter in “Part 2. Defining data sets” on
page 91.
/*
//* 1 The JOB statement
//CICSRUN JOB accounting info,name,CLASS=A,
// MSGCLASS=A,MSGLEVEL=(1,1)
//*
//* 2 The JOBPARM statement
/*JOBPARM SYSAFF=sysid
/*
//***********************************************************
//******************* EXECUTE CICS ************************
//***********************************************************
//*
//* 3 The EXEC PGM=DFHSIP statement
//CICS EXEC PGM=DFHSIP,REGION=240M,
//* 4 SIT parameters specified on PARM parameter
// PARM=('SIT=6$',
// 'DSALIM=6M,EDSALIM=120M',
// 'RENTPGM=PROTECT,STGPROT=YES',
// 'START=AUTO,SI')
//*
//* 5 SIT parameters specified on the SYSIN data set
//SYSIN DD *
GRPLIST=(DFHLIST,userlist1,userlist2),
LPA=YES,
APPLID=CICSHTH1,
*
DFLTUSER=CICSUSER, The default userid
MXT=30, Maximum number of user tasks is 30
INITPARM=(DFHDBCON='01',DFHD2INI=('01,MYDB')),
Pass DFSPZP01 suffix to DBCTL connect program
Use RCT DFHRCT01 with DB2 subsystem MYDB
ISC=YES, Include intersystem communication program
IRCSTRT=YES, Start interregion communication
.END
/*
//*
Notes:
The JOB statement specifies the accounting information that you want to use for
this run of CICS. For example:
//CICSRUN JOB 24116475,userid,MSGCLASS=A,MSGLEVEL=(1,1),
// CLASS=A,NOTIFY=userid
CICS does not support more than one EXEC PGM=DFHSIP job step in the same
MVS job.
The EXEC statement contains the REGION parameter to define the size of CICS
MVS region. In this example, the value is set to 240M, requesting MVS to allocate to
the job all 16MB of private storage below the 16MB line, and an extended region
size of 240MB.
You determine how much of the allocated private storage you want for the CICS
dynamic storage areas, and how much CICS is to leave for demands on operating
system storage, by setting values for the DSALIM and EDSALIM system
initialization parameters. After obtaining the amount of space required for the DSAs
from the total defined by the REGION parameter, the remaining storage is available
to meet demands for operating system storage.
In our sample job stream, these system initialization parameters are specified in the
PARM parameter (see the next topic).
For more details about the REGION parameter and CICS storage, see “Storage
requirements for a CICS region” on page 351.
If you are running CICS with RACF support, see the CICS RACF Security Guide for
information about RACF-related parameters.
You can use the PARM parameter of the EXEC statement to specify system
initialization parameters as shown.
The information passed by the PARM parameter is limited to 100 characters. This
limit includes all commas, but excludes the apostrophes delimiting the PARM
strings, and excludes the opening and closing parentheses delimiting the PARM
parameter. (Internal parentheses enclosing system initialization operands are
included.) If 100 characters are not sufficient for the system initialization parameters
you want to provide at startup, indicate continuation by ending the PARM field with
the “SYSIN” or “CONSOLE” control keywords (or “SI” or “CN” for short). If you
specify SYSIN, system initialization parameters are read from the SYSIN data set; if
you specify CONSOLE, CICS prompts you to enter parameters through the
console. However, if all of your run-time system initialization parameters are in the
PARM parameter, you can end the PARM field simply without any control keywords,
or by the .END control keyword.
In our example, DFHSIT6$ is the SIT selected, and CICS system initialization uses
the values in that table, modified by the system initialization parameters supplied in
the PARM field and the SYSIN data set. For this example, the following system
initialization parameters are provided in the PARM parameter: SIT=6$ (the suffix
of the SIT to be loaded), DSALIM=6M and EDSALIM=120M (the limits for the dynamic
storage areas below and above the 16MB line), RENTPGM=PROTECT and STGPROT=YES
(storage protection options), START=AUTO (the type of startup), and the SI control
keyword, which tells CICS to continue reading parameters from the SYSIN data set.
You can include the SYSIN data set inline as part of the job stream. System
initialization parameters entered in the SYSIN data set replace any, for the same
keyword, that were entered in the PARM parameter. If you include the same
parameter more than once, the last value read is the value used for initialization
| except for INITPARM. If you specify the INITPARM keyword and its parameters
| more than once, each one is accepted by CICS, for example:
| * The following INITPARM parameters are for DBCTL and a user program
| INITPARM=(DFHDBCON='XX,DBCON2',userprog='a,b,c')
| * The following INITPARM parameter is for DB2
| INITPARM=(DSN2STRT= 'DBA2')
Unless you explicitly code the system initialization control keyword CONSOLE,
CICS stops reading system initialization parameters when it reaches the end of
SYSIN or a .END control keyword.
In the sample job, CONSOLE is not coded in either PARM or SYSIN. The .END
control keyword is the last entry in SYSIN, so CICS does not prompt through the
console for further system initialization parameters. After reading the SYSIN data
set, CICS loads the specified SIT, applies any system initialization parameters
supplied in the PARM field and the SYSIN data set, and begins the initialization
process.
The SYSIN data set in our example includes several system initialization
parameters, as follows:
GRPLIST
The group list defined in DFHSIT6$ is DFHLIST, the IBM-defined list that
contains the groups of CICS-supplied resource definitions. The GRPLIST
override in the SYSIN data set names DFHLIST together with userlist1 and
userlist2, so that your own group lists are installed as well.
APPLID
CICSID is the generic applid of this CICS region, and CICSHTH1 is the
specific applid of the active CICS region. With XRF=YES, the active and
alternate CICS regions share the same generic applid, but have different
specific applids.
The specific applid can be useful for naming those data sets that are
unique (for example, dump data sets). Where necessary, it can be used as
the second-level qualifier to distinguish the data sets of the active and
alternate CICS regions.
CICSSVC
245 is the CICS type 3 SVC number installed in the LPA, and defined to
MVS in an SVCPARM statement. For more information about the CICSSVC
parameter, see page 236.
For guidance information about installing the CICS SVC in the LPA, and
defining it to MVS, see the CICS Transaction Server for OS/390 Installation
Guide.
DFLTUSER
CICSUSER is the default userid specified to RACF. During startup, CICS
tries to sign on the default userid. If it cannot be signed on (for example, if
not defined), CICS issues a message and terminates CICS initialization.
After the valid default userid is signed on, its security attributes are used for
all CICS terminal users who do not sign on with the CESN transaction. If
the default userid is defined to RACF with a CICS segment, the operator
attributes in that segment are also used for users who do not sign on.
MXT The maximum number of user tasks is limited to 30 for this run. For
information about what tasks are included in the MXT parameter, see page
272.
6 STEPLIB library
STEPLIB is the DDNAME of the library containing the modules loaded by the
operating system. DFHSIP, which is loaded from STEPLIB, must receive control in
an authorized state, so each partitioned data set (library) concatenated in STEPLIB
must be individually APF-authorized. In this sample job stream, the CICS authorized
library is CICSTS13.CICS.SDFHAUTH.
The pregenerated DFHSIP module, which has been link-edited with the authorized
attribute (SETCODE AC(1)), is supplied in CICSTS13.CICS.SDFHAUTH.
| You also need the Language Environment run-time library, CEE.SCEERUN, in the
| STEPLIB concatenation (or the MVS linklist) to run COBOL, PL/I, C and C++, and
| JVM programs under LE. Like SDFHAUTH, SCEERUN must be an APF-authorized
| library.
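A minimal STEPLIB concatenation for this sample could therefore look like the
following (SDFHAUTH and SCEERUN are the libraries named above; the high-level
qualifiers are illustrative):
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHAUTH,DISP=SHR
//         DD DSN=CEE.SCEERUN,DISP=SHR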
DFHRPL is the DD name of the library that contains modules loaded by CICS.
Protect individually the partitioned data sets constituting this library to prevent
unapproved or accidental modification of their contents. The DFHRPL concatenation
must include the library containing your CICS application programs, shown in our
example as “your.prog.library”, and your CICS control tables, shown in our example
as “your.table.library”.
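For example, a DFHRPL concatenation that includes the CICS load library, your
application library, and your table library could look like the following (library
names other than SDFHLOAD are illustrative placeholders):
//DFHRPL   DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//         DD DSN=your.prog.library,DISP=SHR
//         DD DSN=your.table.library,DISP=SHR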
Generally, you do not need to include any DB2 libraries in the DFHRPL DD
statement. If you do need DB2 libraries in the DFHRPL concatenation for an
application, they should be placed after the CICS libraries. For example, you need
SDSNLOAD in the DFHRPL to support those applications that issue dynamic calls
to the DB2 message handling module, DSNTIAR, or the later DSNTIA1, both of
which are shipped in SDSNLOAD. DSNTIA1 is loaded by application programs
that include the DB2 application stub DSNTIAC, which issues an EXEC CICS
LOAD command for program DSNTIA1.
Define the auxiliary temporary storage data set (DD name DFHTEMP) if you want to
save data for later use. The temporary storage queues used, identified by symbolic
names, exist until explicitly deleted. Even after the originating task has ended,
temporary data can be accessed by other tasks, through references to the symbolic
name under which it is stored.
For details of how to define these data sets, and for information about space
calculations, see “Chapter 10. Defining the temporary storage data set” on
page 107.
If you are using temporary storage data sharing, you should ensure that you start
the temporary storage server before it is required by the CICS regions.
For more information about the temporary storage server and temporary storage
data sharing, see “Chapter 26. Starting up temporary storage servers” on page 367.
The transient data intrapartition data set is used for queuing messages and data
within the CICS region.
For information about how to define these data sets, and about space calculations,
see “Defining the intrapartition data set” on page 115.
Define one or both of these sequential data sets, if you want to use auxiliary trace.
If you define automatic switching for your auxiliary trace data sets, define both data
sets. If you define only one data set, its DD name must be DFHAUXT.
For details of how to define these data sets, see “Chapter 15. Defining and using
auxiliary trace data sets” on page 171.
The auxiliary trace data sets in this job stream are unique to the active CICS
region, and as such are identified in our example by using the specific applid of the
active CICS region as a second-level qualifier.
If you allocate and catalog the auxiliary trace data sets on disk as shown in
Figure 38 on page 173, you can define them to CICS in the startup job stream
using the following DD statements:
//DFHAUXT DD DSN=CICSTS13.CICS.applid.DFHAUXT,DCB=BUFNO=n,DISP=SHR
//DFHBUXT DD DSN=CICSTS13.CICS.applid.DFHBUXT,DCB=BUFNO=n,DISP=SHR
If you specify BUFNO greater than 1, you can reduce the I/O overhead involved in
writing auxiliary trace records. A value between 4 and 10 can greatly reduce the I/O
overhead when running with auxiliary trace on.
LOGA, CSSL, CPLI, CCSO, and CESO are examples of extrapartition transient data queues.
v LOGA defines a user data set used by the CICS sample programs.
v CSSL defines the data set used by a number of CICS services.
v CPLI is used only when you are running PL/I application programs. Here, CPLI
defines the data set to which both statistics and messages, and PL/I dumps, are
directed.
v CCSO is used as an output queue only when you are running C/370 application
programs.
v CESO is used as an error queue only when you are running application
programs under Language Environment.
Sample definitions of the queues used by CICS are supplied in group DFHDCTG.
DFHDCTG is unlocked, so you can alter the definitions before installation.
The CICS local catalog is used by the CICS domains to save some of their
information between CICS runs, and to preserve this information across a cold start.
The local catalog is not shared by any other CICS system. If you are running CICS
with XRF, define a unique local catalog for the active CICS region, and another for
the alternate CICS region. For details of how to create and initialize a CICS local
catalog, see “Chapter 14. Defining and using catalog data sets” on page 159.
There is only one global catalog, which is passively shared by the active and
alternate CICS regions.
For details of how to create and initialize a CICS global catalog, see “Chapter 14.
Defining and using catalog data sets” on page 159.
This sample job illustrates the use of the AMP parameter on the DD statement.
Specifying this parameter, with its buffer subparameters, can help to improve restart
and shutdown time. This example is based on the recommended DEFINE
CLUSTER statements shown in Figure 35 and the associated notes given under
4 on page 160. The values given are the minimum values suitable for these
parameters and should not be reduced.
These data sets are required when you are running CICS with XRF. They are
actively shared by the active and the alternate CICS regions. For details of how to
create and initialize the CICS availability data sets, see “Chapter 17. Defining the
CICS availability manager data sets” on page 181.
This transient data destination is used by CICS as the target for messages sent to
any transient data destination before CICS has completed intrapartition transient
data initialization. It is particularly necessary in an XRF environment for use in an
alternate CICS region before takeover has occurred, during the period when
transient data initialization is suspended. For more information about the DFHCXRF
data set, see “The DFHCXRF data set” on page 118.
CICS records transaction dumps on a sequential data set, or a pair of sequential data
sets, on tape or disk. The data sets must be defined with the DD names DFHDMPA
and DFHDMPB, but if you define only one data set, its DD name must be
DFHDMPA. CICS always attempts to open at least one transaction dump data set
during initialization.
For details about how to define CICS transaction dump data sets and how they are
used, see “Chapter 16. Defining dump data sets” on page 175.
The transaction dump data sets in this job stream are unique to the active CICS
region, and as such are identified by using the specific applid of the active CICS
region (DBDCCIC1) as a second-level qualifier. The alternate CICS region needs its
own transaction dump data sets, and these could be identified by using the specific
applid of the alternate CICS region (DBDCCIC2) as a second-level qualifier.
In the sample job stream (Figure 55 on page 339), the SYSABEND DD statement
directs a formatted dump to a printer, and the SYSMDUMP DD statement saves an
unformatted dump to the SYS1.SYSMDP00 data set on disk.
To write more than one SYSMDUMP dump in the same data set on tape,
specify the following:
v DSNAME=SYS1.SYSMDPxx where xx is 00 through FF. SYSMDPxx is a
preallocated data set that you must initialize with an end-of-file (EOF)
mark on the first record.
v DISP=SHR.
You can ask MVS to write additional dumps only if you off-load any
previous dump and write an EOF mark at the beginning of the
SYS1.SYSMDPxx data set. To accomplish this, your MVS installation must
install an exit routine for message IEA993. For information on this
installation exit routine, see the OS/390 MVS Installation Exits manual.
SYSUDUMP DD statement
Produces a dump of user areas. The dump is formatted, so that it can be
printed directly.
The dump contents are as described only when you use the IBM-supplied defaults
for the dumps. The contents of these dumps can be set during MVS system
initialization and can be changed for an individual dump in the ABEND macro
instruction, in a CHNGDUMP command, and by a SLIP command. For details, see
the OS/390 MVS Initialization and Tuning Guide manual.
Dumps are optional; use a dump DD statement only when you want to produce a
dump.
For information about defining these MVS system dump data sets, and about
printing dumps from them, see the OS/390 MVS JCL Reference manual. For
information about how to interpret dumps, see the OS/390 MVS Diagnosis: Tools
and Service Aids manual.
The system definition file (CSD) is required by CICS to hold some resource
definitions.
You may want to provide job control DD statements for the CSD. If you do, the CSD
data set is allocated at the time of CICS job step initiation, and remains allocated
for the duration of the CICS job step.
On the other hand, you may prefer to use dynamic allocation of the CSD. For
dynamic allocation, do not specify a DD statement for the CSD. Specify the data
set name (DSNAME) and the data set disposition (DISP) either in a SET FILE
command or in the SIT (as parameters CSDDSN and CSDDISP). CICS uses the
DSNAME and DISP to allocate the file as part of OPEN processing.
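For example, to have CICS allocate the CSD dynamically, you might code system
initialization parameters such as the following (the data set name is only
illustrative):
CSDDSN=CICSTS13.CICS.DFHCSD,
CSDDISP=SHR,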
For information about creating and initializing the CSD, see “Chapter 13. Defining
the CICS system definition data set” on page 135.
| The JVM environment variables are specified in a PDS member referenced by the
| DFHJVM DD statement. This member contains the information needed by CICS to
| initialize a JVM to execute a JVM program. For information about DFHJVM and its
| contents, see “Chapter 23. Defining the CICS JVM execution environment variables”
| on page 333.
| CICS requires this dummy DD statement when creating a JVM to execute a JVM
| program.
DFHCMACD is a VSAM key-sequenced data set (KSDS) that is used by the CMAC
transaction to provide online descriptions of CICS messages and codes. Before its
first use it must be defined and loaded as a KSDS data set.
“Chapter 20. Defining the CMAC messages data set” on page 211 describes
DFHCMACD in greater detail.
DFHDBFK is a VSAM key-sequenced data set (KSDS) that is used by the CDBM
transaction to store Group commands. Before its first use it must be defined as a
KSDS data set. The DFHDBFK DD statement is only required if you intend to use
the command storage functions of the CDBM transaction.
24 Sample program file (FILEA) and other permanently allocated data sets
You may want to provide job control DD statements for those user files that are
defined in the CSD (if you are using RDO) or in a file control table (for BDAM files
only). If you do, the data sets are allocated at the time of CICS job step initiation,
and remain allocated for the duration of the CICS job step. FILEA, the data set
used by the CICS sample application programs, is an example of a file allocated in
this way in our sample job stream.
On the other hand, you may prefer to take advantage of the CICS dynamic
allocation of files. For dynamic allocation, do not specify a DD statement for the
file. CICS then uses the full data set name as specified in the DSNAME
parameter of the file resource definition (up to 44 characters), together with the
DISP parameter, to allocate the file as part of OPEN processing. This form of
dynamic allocation applies equally to files that are defined to be opened explicitly,
and those that are to be opened on first reference by an application. For more
information about file opening, see “Chapter 18. Defining user files” on page 189.
For information about the parameters that you can code on file resource definitions,
see the CICS Resource Definition Guide.
The card reader/line printer (CRLP) simulated terminals shown in our sample job
stream are defined in the sample TCT (not used in this startup job). See the copy
member DFH$TCTS, in CICSTS13.CICS.SDFHSAMP, for the source statements
you need for such devices. For information about defining these devices in a TCT,
see the CICS Resource Definition Guide.
For sequential devices, the last entry in the input stream can be CESF
GOODNIGHT\ to provide a logical close, and quiesce the device. However, if you
close a device in this way, the receive-only status is recorded in the warm keypoint
at CICS shutdown. This means that the terminal is still in RECEIVE status in a
subsequent warm start, and CICS does not then read the input file. For more
information about how to restart a device that has been closed in a previous run of
CICS by means of a CESF GOODNIGHT transaction, see page 67.
Note the end-of-data character (the “\” symbol) at the end of each line of the
sample.
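As an illustration, the end of such an input stream might look like this, with each
entry terminated by the end-of-data character and CESF GOODNIGHT as the last entry
(the preceding transaction is only an example):
CEOT\
CESF GOODNIGHT\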
When you submit your CICS job, an MVS region is allocated for the execution of
CICS. You determine the overall size of the region by coding the REGION
parameter, either on the JOB card or on the EXEC PGM=DFHSIP statement. If you
specify the REGION parameter on the JOB statement, each step of the job
executes in the requested amount of space. If you specify the REGION parameter
on the EXEC statements in a job, each step executes in its own amount of space.
Use the EXEC statement REGION parameters when different steps need greatly
different amounts of space; for example, when using extra job steps to print
auxiliary trace data sets after CICS has shut down (as in the DFHIVPOL installation
verification procedure).
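For example, you might give the CICS step a large region and a later trace-print
step a smaller one. This is only a sketch: the print utility name DFHTU530 is
assumed for illustration, and the DD statements for both steps are omitted.
//CICS     EXEC PGM=DFHSIP,REGION=0M     CICS step: all available storage
//* ...CICS DD statements...
//PRTAUXT  EXEC PGM=DFHTU530,REGION=2M   Trace print step: smaller region
//* ...trace print DD statements...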
The available address space allocated above and below the 16MB line is
determined by the value you code on the REGION parameter, but subject to any
limits imposed by your installation (for example, through the MVS IEFUSI
installation exit).
The transaction isolation facility increases the allocation of some virtual storage
above the 16MB boundary for CICS regions that are running with transaction
isolation active.
If you are running with transaction isolation active, CICS allocates storage for
task-lifetime storage in multiples of 1MB for user-key tasks that run above the 16MB
boundary. (1MB is the minimum unit of storage allocation above the line for the
EUDSA when transaction isolation is active.) However, although storage is allocated
in multiples of 1MB above the 16MB boundary, MVS paging activity affects only the
storage that is actually used (referenced), and unused parts of the 1MB allocation
are not paged.
If you are running without transaction isolation, CICS allocates user-key task-lifetime
storage above 16MB in multiples of 64KB.
The subspace group facility uses more real storage, as MVS creates for each
subspace a page and segment table from real storage. The CICS requirement for
real storage varies according to the transaction load at any one time. As a
guideline, each task in the system requires 9KB of real storage, and this should be
multiplied by the number of concurrent tasks that can be in the system at any one
time (governed by the MXT system initialization parameter).
However, automatic DSA sizing removes the need for accurate storage estimates,
with CICS dynamically changing the size of DSAs as demand requires.
For details of how MVS allocates the storage requested by your REGION
parameter, see the OS/390 MVS JCL Reference manual. For ease of reference,
examples of possible size ranges are given here.
REGION=0K or 0MB
MVS gives the job all the available private storage below and above the 16MB
line. The resulting size of the region below and above 16MB is unpredictable.
REGION=>0 and ≤16M
MVS establishes the specified value as the size of the private area below the
16MB line, and a default extended region size of 32MB. If the region size
specified is not available, the job step terminates abnormally.
The amount of private storage available below the line varies from installation to
installation, and possibly from IPL to IPL, because of the installation-dependent
parameters you use to generate and IPL your MVS system. Typically, the
amount of common storage required by MVS is 7MB or more, leaving you with
a potential private storage area of less than 9MB.
REGION=>16M and ≤32M
MVS gives the job all the storage available below 16MB, the size of which is
unpredictable, and a default extended region size of 32MB.
REGION=>32M and ≤2047M
MVS gives the job all the storage available below 16MB, the size of which is
unpredictable, and the extended region size as specified. If the region size
specified is not available above 16 megabytes, the job step terminates
abnormally.
Storage protection
CICS releases from CICS/ESA 3.3 onward use the extensions added to ESA/390
storage protection facilities, available under MVS/ESA Version 4 Release 2.2, to
prevent CICS code and control blocks from being overwritten accidentally by your
own user application programs. This is done by allocating separate storage areas
(with separate storage keys) for your user application programs, and for CICS code
and control blocks. Access to a storage area is not permitted unless the access key
matches the key for that storage area.
The storage allocated for CICS code and control blocks is known as CICS-key
storage, and the storage allocated for your user application programs is known as
user-key storage. In addition to CICS-key and user-key storage, CICS can also use
key-0 storage for separate dynamic storage areas below and above the 16MB
boundary called the read-only DSAs (RDSA and ERDSA). The ERDSA is used for
eligible re-entrant CICS and user application programs link-edited with the RENT
and RMODE(ANY) attributes. The RDSA is used for eligible re-entrant CICS and
user application programs link-edited with the RENT and RMODE(24) attributes.
The allocation of key-0 storage for the read-only DSAs is from the same storage
limit as the other DSAs, as specified by the DSALIM and EDSALIM system
initialization parameters.
Use of the storage protection facilities is optional. You can enable them by coding
options on the storage protection system initialization parameters (see the example
after this list). Between them, these parameters enable you to define or control:
v The storage key for the common work area (CWAKEY)
v The storage key for the terminal control table user areas (TCTUAKEY)
v A storage protection global option (STGPROT)
v A read-only program storage key option (RENTPGM)
v A transaction isolation option (TRANISO)
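For example, a set of system initialization overrides that enables storage
protection, read-only protection for re-entrant programs, and transaction isolation,
while keeping the CWA and the TCTUAs in user key, might look like this:
STGPROT=YES,
RENTPGM=PROTECT,
TRANISO=YES,
CWAKEY=USER,
TCTUAKEY=USER,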
To help you get started, CICS provides DFHSIT$$, a default system initialization
table. This default table is supplied in the CICSTS13.CICS.SDFHSAMP library in
source form, and you can modify this to suit your own requirements. When
assembled and link-edited, DFHSIT$$ becomes the unsuffixed DFHSIT, which is
supplied in pregenerated form in CICSTS13.CICS.SDFHAUTH.
Because this work area is available to all transactions in a CICS region, you should
ensure that the storage key is appropriate to the use of the CWA by all transactions.
If even one transaction that requires write access to the CWA runs in user key, you
must specify user-key storage for the CWA; otherwise, that transaction fails with a
storage protection exception (an ASRA abend). CICS obtains user-key storage for
the CWA by default, and you must review the use of this storage by all programs
before you decide to change it to CICS key.
It is possible that you might want to protect the CWA from being overwritten by
applications that should not have write access. In this case, provided all the
transactions that legitimately require write access to the CWA run in CICS key, you
can specify CICS-key storage for the CWA.
See page 245 for details of how to specify the CWAKEY parameter.
For VTAM terminals, you specify that you want a TCTUA by means of the
USERAREALEN parameter on the TYPETERM resource definition. The
USERAREALEN parameter on a typeterm definition determines the TCTUA sizes
for all terminals that reference the typeterm definition.
For TCAM and sequential terminals, definitions are added to the terminal control
table (TCT), and sizes are defined by means of the TCTUAL parameter on the
DFHTCT TYPE=TERMINAL and TYPE=LINE entries. For information about the
TCTUAL parameter, see the CICS Resource Definition Guide.
You specify the storage key for the TCTUAs globally for a CICS region by the
TCTUAKEY system initialization parameter. By default, CICS obtains user-key
storage for all TCTUAs.
You must review the use of TCTUAs in your CICS regions, and specify CICS key
only for TCTUAs when you are sure that this is justified. If you specify CICS-key
storage for TCTUAs, no user-key applications can write to any TCT user areas.
See page 302 for details of how to specify the TCTUAKEY parameter.
See page 296 for details of how to specify the STGPROT parameter.
Transaction isolation
CICS transaction isolation builds on CICS storage protection, enabling user
transactions to be protected from one another. You can specify transaction isolation
globally for a CICS region on the TRANISO (and STGPROT) system initialization
parameter.
In addition to being able to specify the storage and execution key individually for
each user transaction, you can specify that CICS is to isolate a transaction’s
user-key task-lifetime storage to provide transaction-to-transaction protection. You
do this by means of the ISOLATE option of the TRANSACTION resource
definition.
For an overview of transaction isolation, and CICS’ use of MVS subspaces, see the
CICS Performance Guide .
Table 37 shows the type of storage allocated according to the system initialization
parameters specified.
You specify the overall limits within which CICS can allocate the DSAs by the
DSALIM and EDSALIM system initialization parameters (for the DSAs below and
above the 16MB boundary respectively). Within these limits, CICS dynamically
controls the sizes of the individual DSAs and their associated cushions. Also, you
can vary these overall limits dynamically, by using either the CEMT SET SYSTEM
command or an EXEC CICS SET SYSTEM command.
Table 37. Controlling the storage key for the dynamic storage areas
Dynamic storage area   STGPROT=NO   STGPROT=YES   RENTPGM=PROTECT   RENTPGM=NOPROTECT
CDSA                   CICS key     CICS key      N/A (1)           N/A (1)
RDSA                   N/A (2)      N/A (2)       Read-only key-0   CICS key
SDSA                   CICS key     User key      N/A (1)           N/A (1)
UDSA                   CICS key     User key      N/A (1)           N/A (1)
ECDSA                  CICS key     CICS key      N/A (1)           N/A (1)
ESDSA                  CICS key     User key      N/A (1)           N/A (1)
CICS dynamically tunes the size of the DSA storage cushions as necessary, within
the limits set by the DSALIM and EDSALIM system initialization parameters.
However, if the amount of storage available for the storage cushions becomes too
small, an SOS condition can still occur.
Effects: In a storage stress condition, the cushion mechanism can avert a storage
deadlock. CICS also prevents itself from taking on additional work by stopping most
of the soliciting for new input messages. For information on the effects of stress
conditions, see the CICS Performance Guide.
When a storage stress situation exists, the loader domain attempts to alleviate it by
releasing the main storage for programs with no current user. If this fails, a
short-on-storage condition is indicated, and a message is issued at the console.
While the SOS condition is set, acquisition of new input message areas is
prevented, and all ATTACH requests from CICS system modules are deferred.
Recommendations
To help CICS optimize its use of the DSAs and their storage cushions, you are
recommended to:
v Avoid using large GETMAIN requests.
The storage cushion is a contiguous block of storage of fixed size, and therefore
may be able to satisfy a request for a large contiguous block of storage.
v Minimize the number of resident programs.
How implemented
CICS allocates the initial size of the storage cushions for the DSAs from the overall
storage limits defined by the DSALIM and EDSALIM system initialization
parameters. CICS dynamically tunes the sizes of the DSAs and their storage
cushions within these limits.
For descriptions of the DSALIM and EDSALIM system initialization parameters, see
page 248 and 253 respectively.
You can change the overall storage limits while CICS is running by means of a
CEMT SET SYSTEM command or an EXEC CICS SET SYSTEM command.
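For example, assuming the DSALIMIT and EDSALIMIT keywords of CEMT SET SYSTEM, the
following command (values shown in bytes) raises the limits to 6MB and 40MB while
CICS is running:
CEMT SET SYSTEM DSALIMIT(6291456) EDSALIMIT(41943040)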
How monitored
Storage stress conditions are notified in the storage statistics (“Times cushion
released” and “Times request suspended”). A storage stress condition may not
cause an SOS condition; CICS may be able to alleviate the condition. However,
storage stress conditions are costly, and should be avoided.
The SOS condition is notified in the dynamic storage area statistics (“times went
short on storage”), and is made apparent to the terminal user by external effects
such as ceasing of polling and transaction initiation, and prolonged response times.
In addition, a message is displayed on the operating system console when the
short-on-storage (SOS) indication is detected. The SOS message, DFHSM0131 or
DFHSM0133, indicates that:
v The amount of free space in a dynamic storage area is less than needed, and
the associated DSA cannot be enlarged further (because the DSA limit has been
reached).
v There are currently suspended GETMAIN requests waiting for large enough
areas of contiguous storage to become available.
If the value you specify is not a multiple of 256KB for DSALIM, or 1MB for
EDSALIM, CICS rounds up the value to the next multiple.
You cannot specify fractions of megabytes: you must code sizes in bytes or
kilobytes. Some examples are shown in Table 38:
Table 38. Examples of DSA limit values in bytes, kilobytes and megabytes
Coded as:
bytes       2097152    3145728    3670016    4194304    4718592
kilobytes   2048K      3072K      3584K      4096K      4608K
megabytes   2M         3M         -          4M         -
For information about estimating the size of the dynamic storage areas, see the
CICS Performance Guide.
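As a sketch of valid overrides, the first value below is a multiple of 256KB that
must be coded in kilobytes or bytes (3.5MB cannot be expressed in megabytes), and
the second is a multiple of 1MB:
DSALIM=3584K,
EDSALIM=30M,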
Note that if you intend to start CICS with the START command you must either:
v Give the MVS started task procedure a name different from the subsystem name
in IEFSSNaa (default ‘CICS’), or
v Issue the start command with the parameter SUB=JES2 or SUB=JES3 as
appropriate.
You can use the following form of the MVS START command to start a job from the
console:
S|START procname[.identifier][,SUB=subsystemname][,keyword=option
[,keyword=option] . . .]
procname
The name of the cataloged procedure that defines the job to be started.
identifier
The name you choose to identify the task.
For guidance information about the complete syntax of the START command, and
all the keywords and options you can use, see the OS/390 MVS System
Commands manual.
For example:
START DFHSTART.CICSA,SIP=T,REGNAM1=IDA,REGNAM2=IDA
If you are running CICS with RACF, you must associate the cataloged procedure
name with a suitably authorized RACF user through the RACF started procedures table, ICHRIN03.
| For information about this association, see the CICS RACF Security Guide.
| To define the AXM subsystem statically, the normal method, add an entry with the
| required parameters to the IEFSSNxx member of SYS1.PARMLIB, as follows:
| SUBSYS SUBNAME(AXM) INITRTN(AXMSI)
| Defining AXM in the IEFSSNxx member ensures that AXM system services
| automatically become available when you IPL MVS.
| To avoid the need to wait for an IPL when AXM modules are first installed, you can
| also initialize AXM system services by defining the subsystem dynamically, as
| follows:
| SETSSI ADD,SUBNAME=AXM,INITRTN=AXMSI
| If initialization of the AXM subsystem fails for any reason, (for example, because of
| an error in the command, or because AXMSI is not in a linklist library) MVS does
| not allow another attempt because the subsystem is then already defined. In this
| case, you should use a different subsystem name, such as AXM1, because AXM
| does not rely on a specific subsystem name. If you start AXM successfully the first
| time, further attempts are ignored.
| Note: See “Defining temporary storage pools for temporary storage data sharing”
| on page 110 for guidance on defining the sizes of temporary storage pools.
|
| Overview of the temporary storage data sharing server
| Access to a TS pool by CICS transactions running in an AOR is through a TS data
| sharing server that supports a named pool. In each MVS image in the sysplex, you
| need one TS server for each pool defined in a coupling facility which can be
| accessed from that MVS image. All TS pool access is performed by cross-memory
| calls to the TS server for the named pool.
| An AOR can access more than one TS server concurrently. This multiserver access
| is required if you create multiple pools, because each TS server provides access to
| only one pool of TS queues.
| The methods for specifying a TS pool make it easy to migrate queues from a QOR
| to a TS data sharing pool. You can use the TS global user exit, XTSEREQ, to
| modify the SYSID on a TS request so that it references a TS data sharing pool.
| Figure 58 on page 368 illustrates a parallel sysplex with three CICS AORs linked to
| the temporary storage server address space(s).
|
Figure 58. Conceptual view of a Parallel Sysplex with temporary storage servers and a
temporary storage pool in the coupling facility
|
| Defining TS server regions
| You must ensure that the TS server region is activated before the CICS region
| needs it. A shared TS pool consists of an XES list structure, which is accessed
| through a cross-memory queue server region. A shared TS pool is started in an
| MVS image by starting up a queue server region for that pool as either a batch job
| or a started task. This invokes the queue server region program, DFHXQMN, which
| resides in an APF-authorized library.
| You can specify the DFHXQMN initialization parameters either in a SYSIN data set
| defined in the JCL, or in the PARM parameter on the EXEC statement.
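For example, a queue server for the pool used later in this chapter might be started
with JCL along the following lines (the REGION value is only illustrative):
//TSSERVER JOB ...
//TSPOOL1  EXEC PGM=DFHXQMN,REGION=40M CICS TS queue server program
//STEPLIB  DD DSN=CICSxxx.SDFHAUTH,DISP=SHR Authorized library
//SYSPRINT DD SYSOUT=* Options, messages and statistics
//SYSIN    DD *
POOLNAME=PRODTSQ1 Pool name
/*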
| During server initialization, the server acquires all of the available storage above the
| 16M line, as determined by the REGION size, then releases 5% of it for use by
| operating system services. It also acquires 5% of the free storage below the line for
| use in routines which require 24-bit addressable storage, for example sequential file
| read and write routines.
| After server initialization, AXM page allocation services are used to manage server
| region storage. Server statistics indicate how much storage is actually allocated and
| used within the storage areas above and below the 16M line, which are called
| AXMPGANY and AXMPGLOW in the statistics.
| If a task in the server region or a cross-memory request runs out of storage, this is
| likely to result in AXM terminating that task or request using a simulated abend with
| system completion code 80A to indicate a GETMAIN failure. Although the server
| can usually continue processing other requests in this case, running out of storage
| in a critical routine can cause the server to terminate, so it is best to ensure that the
| REGION size is large enough to eliminate the risk.
| If you specify more than one parameter in the PARM field, or on the same SYSIN
| input line, the parameters must be separated by commas. Any text following one or
| more spaces is taken as a descriptive comment. Any parameter line starting with an
| asterisk or a space is assumed to be a whole line comment.
| The main parameters used are listed on the server print file during start-up.
| The following parameters are all valid as initialization parameters (in the SYSIN file,
| or the PARM field), and some can be modified by the server SET command.
| You can display any parameter with the server DISPLAY command. Display the
| values of all parameters using DISPLAY ALLPARMS.
| Primary parameters
| These parameters are usually specified for all servers:
| POOLNAME=pool_name
| specifies the name, of 1 to 8 characters, of the queue pool used to form the
| server name and the name of the coupling facility list structure
| DFHXQLS_poolname. This parameter is valid only at initialization, and must
| always be specified.
| A queue index buffer holds a queue index entry plus up to 32K of queue data
| (for a small queue). When a READ or WRITE request completes the queue
| index information is retained in the buffer. This can avoid the need to reread the
| queue index if the same queue is referenced from the same MVS image before
| the buffer has been reused. If no buffer is available at the time of a request, the
| request is made to wait until one becomes free.
| The number of buffers should preferably be at least ten for each CICS region
| that can connect to the server in this MVS image. This avoids the risk of buffer
| waits. Additional buffers may be used to reduce the number of coupling facility
| accesses by keeping recently used queue index entries in storage. In particular,
| if the current version of a queue index entry is in storage at the time a queue
| item is read, the request requires only one coupling facility access instead of
| two. If the current version of a queue index entry is in storage when a second
| or subsequent item is written to the same queue, the request requires only one
| coupling facility access instead of three.
| It is not worth defining extra buffers beyond the point where this might cause
| MVS paging, as it is more efficient to reread the index entry than to page in the
| buffer from auxiliary storage. This parameter is valid only at initialization.
| This takes effect when the list structure is being created with a specified value
| of less than that specified for the list structure in the CFRM policy.
| The default value 0 specifies that no maximum limit is to be applied other than
| that specified in the CFRM policy. A non-zero value is generally rounded up by
| MVS to the next multiple of 256K.
| For information about defining list structures, see “Defining temporary storage pools
| for temporary storage data sharing” on page 110.
| Note that using these options in a production environment may significantly impact
| performance and cause the print file to grow very rapidly, using up spool space.
| Trace messages from cross-memory requests may be lost if they are generated
| faster than the trace print subtask can print them. In such cases, the trace indicates
| only how many messages were lost.
| TRACECF={OFF|ON}
| specifies the coupling facility interface debug trace options, OFF or ON. This
| option produces trace messages on the print file indicating the main parameters
| to the coupling facility request interface and the result from the IXLLIST macro.
| Tuning parameters
| The following parameters are provided for tuning purposes. They are normally
| allowed to assume their default values:
| ELEMENTSIZE={256|number}
| specifies the element size for structure space, which must be a power of 2. For
| current coupling facility implementations there is no known reason to specify
| other than the default value of 256.
| This parameter is valid only at server initialization and is used only when the
| structure is first allocated. The valid range is 256 to 4096.
| The ideal value for this ratio is the average size of data for each entry divided by
| the element size. However, the server automatically adjusts
| the ratio according to the actual entry and element usage.
| This parameter is valid only at server initialization and is used only when the
| structure is first allocated.
| For small queues, the last used time is updated on every reference. For large
| queues, updating the last used time requires an extra coupling facility access,
| so that it is done only if the queue has not previously been accessed within this
| interval of the current time. This means that the last used time interval returned
| by INQUIRE can be greater than the true value by an amount up to the value
| specified on this parameter. As the main purpose of the last used time
| specification is to determine whether the queue is obsolete, an interval of a few
| minutes should be sufficient.
| This parameter can force queues to be converted to the large queue format at a
| smaller size than 32K. This is to prevent large amounts of data being written to
| the small queue format. Performance improvements can result on systems
| where asynchronous coupling facility processing causes contention for
| Warning parameters
| These parameters modify the threshold at which warning messages and automatic
| ALTER actions occur when the structure becomes nearly full:
| ELEMENTWARN={80|number}
| specifies the percentage of elements in use at which warnings and automatic
| ALTER actions should be first triggered.
|
| Queue server automatic ALTER processing
| The queue server monitors the total number of elements and entries in use in the
| structure, using information returned by the coupling facility on every request. When
| the numbers in use exceed the specified thresholds, a warning message,
| DFHXQ0411 or DFHXQ0412, is issued, and is repeated each time the number in
| use increases beyond further thresholds.
| Each time the warning is issued, the server tests whether an automatic ALTER for
| the entry to element ratio should be performed. The test is done by calculating how
| many excess elements or entries will be left when the other runs out completely.
| This is based on the ratio between the current numbers of elements and entries
| actually in use.
| An IXLALTER request is issued to alter the entry to element ratio to the actual
| current ratio between the number of entries and elements in use if:
| v The number of excess elements or entries exceeds the number specified in the
| ALTERELEMMIN or ALTERENTRYMIN parameter, and
| v The same number expressed as a percentage of the total exceeds the value
| specified in the ALTERELEMPC or ALTERENTRYPC parameter
| Only one ALTER request may be active at a time for a given structure. If the ALTER
| process has already been started by one server, an ALTER request from another
| server is rejected.
| However, the system automatically notifies all servers when the ALTER completes,
| giving the new numbers of elements and entries so that each server can update its
| own status information.
| The MVS STOP command is equivalent to issuing the server command STOP
| using the MVS MODIFY command.
| The server also responds to XES events such as an operator SETXCF command to
| alter the structure size. If the server can no longer access the coupling facility, it
| automatically issues a server CANCEL command to close itself down immediately.
| When UNLOAD or RELOAD is specified, the server program requires exclusive use
| of the list structure. If the structure is currently being used by a normal server, the
| attempt to unload or reload is rejected. Similarly, if a normal server attempts to start
| up while an unload or reload function is in progress, the attempt is rejected because
| shared access to the structure is not available.
| All normal server parameters can be specified on UNLOAD and RELOAD, but
| many of these, such as the number of queue buffers, are ignored because they do
| not apply to unload or reload processing.
| The UNLOAD function requires a DD statement for file name DFHXQUL describing
| the sequential data set to which the queue pool is to be unloaded. The format of
| the unloaded file is:
| RECFM=F,LRECL=4096,BLKSIZE=4096.
| An upper limit for the total size of the data set in bytes can be estimated from the
| pool usage statistics produced by the server. The total data size in bytes is obtained
| by multiplying the number of elements in use by the element size (usually 256), and
| for each queue there is also some control information which typically occupies
| fewer than 100 bytes per queue. The size is normally smaller than this because
| unused space in data elements is not included in the unloaded file. See Figure 60
| for an example of UNLOAD JCL.
|
//UNLDTSQ1 JOB ...
//TSUNLOAD EXEC PGM=DFHXQMN CICS TS queue server program
//STEPLIB DD DSN=CICSxxx.SDFHAUTH,DISP=SHR Authorized library
//SYSPRINT DD SYSOUT=* Options, messages and statistics
//DFHXQUL DD DSN=TSQ1.UNLOADED.QPOOL, Unloaded queue pool
// DISP=(NEW,CATLG),
// SPACE=(4096,(10000,1000)) Estimated size in 4K blocks
//SYSIN DD *
FUNCTION=UNLOAD Function to be performed is UNLOAD
POOLNAME=PRODTSQ1 Pool name
/*
| The RELOAD function requires a DD statement for file name DFHXQRL describing
| the sequential data set from which the queue pool is to be reloaded. The structure
| is allocated if necessary during reloading, in which case the same server
| parameters may be used to control structure attributes as for normal server
| execution. The RELOAD process bypasses any queues that are already found in
| the queue pool, because, for example, the structure was too small and the reload
| job had to be restarted after using ALTER to increase the size.
| Note that when a pool is nearly full (with less than about 5% free entries and
| elements) there is no guarantee that it can be unloaded and reloaded into a
| structure of exactly the same size. The amount of space available is affected by the
| current ratio of entries to elements, which can only be controlled approximately by
| the automatic ALTER process.
| If RELOAD fails because it runs out of space, the resulting messages include the
| numbers of queues reloaded and blocks read up to the time of the failure. Compare
| these values with those in the messages from the original UNLOAD to determine
| how many more queues and how much more data remained to be loaded. See
| Figure 61 for an example of RELOAD JCL.
|
//RELDTSQ1 JOB ...
//TSRELOAD EXEC PGM=DFHXQMN CICS TS queue server program
//STEPLIB DD DSN=CICSxxx.SDFHAUTH,DISP=SHR Authorized library
//SYSPRINT DD SYSOUT=* Options, messages and statistics
//DFHXQRL DD DSN=TSQ1.UNLOADED.QPOOL,DISP=OLD Unloaded queue pool
//SYSIN DD *
FUNCTION=RELOAD Function to be performed is RELOAD
POOLNAME=PRODTSQ1 Pool name
POOLSIZE=50M Increased pool size
MAXQUEUES=10000 Increased number of big queues
/*
| For full details about messages produced by the TS server, see the CICS
| Messages and Codes manual.
| Note: Before you can start a server for named coupling facility data table pool, first
| define the coupling facility structure to be used for the pool. See “Defining a
| coupling facility data table pool” on page 205 for information about defining a
| coupling facility list structure for a CFDT.
|
| Overview of a coupling facility data table server
| Support for CICS coupling facility data tables is designed to provide rapid sharing of working
| data within a sysplex, with update integrity. The data is held in a table that is similar
| in many ways to a shared user-maintained data table, and the API used to store
| and retrieve the data is based on the file control API used for user-maintained data
| tables.
| Within each MVS image, there must be one CFDT server for each CFDT pool
| accessed by CICS regions in the MVS image. Coupling facility data table pools are
| defined as a list structure in the coupling facility resource management (CFRM)
| policy. The pool name, which is used to form the server name with the prefix
| DFHCF., is specified in the start-up JCL for the server.
| Coupling facility data table pools can be used almost continuously and permanently.
| CICS provides utility commands that you can use to minimize the impact of
| maintenance.
| Figure 62 on page 382 illustrates a Parallel Sysplex with three CICS AORs linked to
| the coupling facility data table servers.
|
Figure 62. Conceptual view of a Parallel Sysplex with coupling facility data table servers
|
| Defining and starting a coupling facility data table server region
| You activate a coupling facility data table pool in an MVS image by starting up a
| coupling facility data table server region for that pool. You can start the server as a
| started task, started job, or as a batch job.
| The most important parameter is the pool name, which is mandatory. Among other
| things, the pool name is used to form, with the prefix DFHCF., the server name
| (giving DFHCF.poolname). Optional pool-related parameters include the maximum
| number of tables to be supported.
| The easiest way to ensure that all pool-related parameters are consistent across
| MVS images is to use the same SYSIN parameter data set (or an identical copy of
| it) for all servers accessing the same pool, and to specify in the PARM field any
| parameters that vary between servers.
| For details of all the parameters, see “Coupling facility data table server
| parameters” on page 384.
| Coupling facility data table server REGION parameter: Use the JCL REGION
| parameter to ensure that the coupling facility data table server region has enough
| storage to process the maximum number of data table requests that can be
| executing concurrently.
| The number of coupling facility data table requests that each connected CICS
| region can have active at a time is limited to about 10. Each request requires about
| 40KB, therefore the REGION size should specify at least 400KB for each connected
| CICS region, plus a margin of about 10% for other storage areas. Thus, for a server
| supporting up to 5 CICS regions, specify REGION=2200K.
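Putting this together, startup JCL for a coupling facility data table server might
look like the following sketch; the server program name DFHCFMN and the pool name
PRODCFT1 are assumptions used here only for illustration:
//CFSERVER JOB ...
//CFPOOL1  EXEC PGM=DFHCFMN,REGION=2200K CFDT server program (assumed name)
//STEPLIB  DD DSN=CICSxxx.SDFHAUTH,DISP=SHR Authorized library
//SYSPRINT DD SYSOUT=* Options, messages and statistics
//SYSIN    DD *
POOLNAME=PRODCFT1 Pool name
/*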
| During server initialization, the server acquires all the available storage above
| 16MB, as determined by the REGION parameter, then releases 5% of it for use by
| operating system services. It also acquires 5% of the free storage below 16MB for
| use in routines that require 24-bit addressable storage, for example sequential file
| read and write routines.
| After initialization, the server uses AXM page allocation services to manage its
| storage. Server statistics indicate how much storage is actually allocated and used
| within the storage areas above and below 16MB, which are called AXMPGANY and
| AXMPGLOW in the statistics.
| If a task in the server region or a cross-memory request runs out of storage, this is
| likely to result in AXM terminating that task or request using a simulated abend with
| system completion code 80A to indicate a GETMAIN failure. Although the server
| can usually continue processing other requests in this case, running out of storage
| in a critical routine can cause the server to terminate. Therefore, it is best to ensure
| that the REGION size is large enough to eliminate this risk.
| You can enter some parameter keywords in more than one form, such as in
| abbreviated or truncated form.
| The main parameters are listed on the server print file during start-up.
| The parameter descriptions that follow are divided into a number of categories:
| v Pool name parameter (on page 384)
| v Security parameters (on page 385)
| v Statistics parameters (on page 386)
| v List structure parameters (on page 386)
| v Debug trace parameters (on page 387)
| v Tuning parameters (on page 388)
| v Lock wait parameters (on page 389)
| v Warning parameters (on page 390)
| v Automatic structure alter parameters (on page 390)
| v Reserved space parameters (on page 391).
| The parameters in the above groups are all valid as initialization parameters (in the
| SYSIN file or PARM field), and some can also be modified by the SET command.
| Security parameters
| You can use these parameters to specify whether you want to use the optional
| security mechanism that the server provides, to check that CICS regions are
| authorized to open a coupling facility data table. They also allow you to override
| standard processing for this optional security.
| SECURITY={YES|NO}
| specifies whether individual coupling facility data table security checks are
| required.
| YES You want the server to perform a security check against each CICS
| region that attempts to open a coupling facility data table. Access is
| controlled through profiles defined in the general resource class named
| on the SECURITYCLASS parameter.
| This requires an external security manager, such as RACF, that
| supports the FASTAUTH function in cross-memory mode.
| NO You do not want the server to perform this extra security check when
| opening a coupling facility data table.
| This is the only security check performed by the server that is optional. The
| other file security checks are always performed by the server, as described in
| the CICS RACF Security Guide.
| Note: For this security check, the resource name used by the server is
| either the name specified on the TABLENAME attribute of the CICS file
| resource definition, or the FILE name if TABLENAME is not specified.
| YES The server prefixes the resource name with the server region user ID
| (the default) or an alternative prefix specified on the
| SECURITYPREFIXID parameter.
| NO The server passes to RACF only the 8-character resource name,
| without any prefix.
| Statistics parameters
| Use the following parameters to specify server statistics options:
| ENDOFDAY={00:00|hh:mm}
| specifies the time of day, in hours and minutes, when the server is to collect
| and reset end-of-day statistics.
| Note: If the STATSOPTIONS parameter specifies NONE, the server still writes
| end-of-day statistics to the print file.
| You cannot change this number without reallocating the structure, which means
| first deleting the existing structure (see “Deleting or emptying coupling facility
| data table pools” on page 399). If the structure is being allocated at less than its
| maximum size, specify a number for the maximum number of tables based on
| the maximum size of the structure, rather than its initial allocation size.
| This parameter is valid only at server initialization and is used only when the
| structure is first allocated. The valid range is from 1 to 999 999.
| Note: If the value is greater than the value specified on the CFRM
| SIZE parameter, the server POOLSIZE parameter is ignored and
| the initial allocation is based on the parameters specified in the
| CFRM policy.
| This parameter is valid only at server initialization and is only used when the
| structure is first allocated.
| Trace messages from cross-memory requests can be lost if they are generated
| faster than the trace print subtask can print them. In this event, the trace only
| indicates how many messages were lost.
| Tuning parameters
| These parameters are provided for tuning purposes, but normally you can omit
| these and let the server assume their default values.
| ELEMENTRATIO={1|number}
| specifies the element part of the entry-to-element ratio when the structure is first
| allocated. This determines what proportion of the structure space is initially set
| aside for data elements. (For information about list structures and
| entry-to-element ratios, see the OS/390 MVS Programming: Sysplex Services
| Guide, GC28-1771.)
| Divide the average size of data per entry by the element size to obtain the
| optimum value for this ratio. However, if the structure becomes short of space
| and altering the ratio could improve space utilization, the server automatically
| adjusts the ratio according to the actual entry and element usage.
| This parameter is valid only at server initialization and is used only when the
| structure is first allocated.
| This parameter is valid only at server initialization and is only used when the
| structure is first allocated.
| This parameter is valid only at server initialization and is used only when the
| structure is first allocated. The valid range is from 1 to 255.
| There are two times involved: a scan time interval and a wait time. The server
| starts its lock scan interval timing when the first request is made to wait.
| This mechanism has very little effect on normal processing and the default lock wait
| retry parameter values are designed to suit the majority of installations.
| LOCKSCANINTERVAL={5|number}
| specifies the time interval after which requests waiting for record locks are
| scanned to check for lock wait timeout.
| This affects the overall duration of the lock wait timeout, because a request that
| starts waiting for a lock during a given scan interval is timed as if from the start
| of the interval. The lock scan interval should be less than the lock wait interval,
| and ideally should be such that the lock wait interval is an exact multiple of the
| lock scan interval.
| You can specify this value as a number of seconds or in the time format
| hh:mm:ss.
| Warning parameters
| Use these parameters to modify the thresholds at which warning messages are
| issued, and an automatic structure alter occurs, when the structure becomes nearly
| full.
| ELEMENTWARN={80|number}
| specifies the percentage of list structure elements in use at which warning
| messages and an automatic structure alter should be first triggered.
| Using the reserved space parameters means that, even if the structure fills up very
| rapidly (for example, because a table is being loaded that is too large for the
| available space), enough space should remain to allow rewrites of existing records
| and allow internal communication between servers to continue normally.
| Note that this mechanism cannot prevent the structure from eventually becoming
| totally full, as recoverable rewrites are allowed to use the reserved space
| temporarily, and rewrites that increase the data length will gradually use up the
| reserved elements. If action is not taken to prevent the structure from becoming
| totally full, the following effects can occur:
| v An attempt to close a table or change the table status could encounter a
| temporary structure full condition. In this case, the attempt is retried indefinitely,
| because it must be completed in order to preserve table integrity (the only
| alternative being to terminate the server). The retry process normally succeeds.
| Each time the server issues a warning, it also tests whether an automatic structure
| alter for the entry-to-element ratio should be issued. If any form of alter has already
| been issued recently (by any server or through an operator SETXCF ALTER
| command) and the structure space usage has remained above warning levels since
| the previous attempt, any further structure alter attempt is suppressed until at least
| the minimum interval (specified through the ALTERMININTERVAL parameter) has
| elapsed.
| Only one alter request can be active at a time for a given structure. This means a
| server may well find that another server has already started the structure alter
| process, in which case its own alter is rejected. However, the system automatically
| notifies all servers when the structure alter is completed, giving the new numbers of
| elements and entries so that each server can update its own status information.
| See “Coupling facility data table server parameters” on page 384 for details of these
| keywords.
| The following SET keywords are used to modify the server’s recovery status of an
| inactive CICS region that had unresolved units of work when it last terminated:
| RESTARTED=applid
| Establish a temporary recoverable connection for the given APPLID. This
| resolves any units of work that were in commit or backout processing when the
| region last terminated, and indicates whether there are any remaining in-doubt
| units of work.
| This command should be used only when it is not possible to restart the
| original CICS region to resolve the work normally, because it can result in
| inconsistency between coupling facility data table resources and other CICS
| resources updated by the same unit of work.
| Use the following SET parameters to modify options relating to a specific table:
| TABLE=name
| specifies the table to which the following table-related parameters in the same
| command are to be applied. This parameter is required before any table-related
| parameters.
| If the maximum number is set to a value less than the current number of
| records in the table, no new records can be stored until records have been
| deleted to reduce the current number to within the new maximum limit. For a
| recoverable table, this also means that records cannot be updated, because the
| recoverable update process adds a new record on the rewrite operation then
| deletes the original record when the transaction completes.
| Examples of the SET command: The following example changes the statistics
| options:
| SET STATSOPT=BOTH,EOD=21:00,STATSINT=06:00
| The following example modifies the maximum number of records allowed in the
| specified table:
| SET TABLE=PAYECFT1,MAXRECS=200000
| Some of the parameters that provide additional information support generic names.
| You specify generic names using the following wildcard characters:
| v An * (asterisk symbol). Use this anywhere in the parameter value to represent
| from 0 to 8 characters of any value. For example, CICSH* to represent all the
| CICS APPLIDs in a CICSplex identified by the letter H.
| v A % (per cent symbol). Use this anywhere in the parameter value to represent
| only one character of any value. For example, CICS%T* to represent all the TOR
| APPLIDs in all CICSplexes.
| The parameters supported by the DISPLAY and PRINT commands are as follows:
| APPLIDS
| Display the APPLID and MVS system name for every CICS region that currently
| has a recoverable connection to the pool. This command returns information not
| only for the server to which the MODIFY command is issued, but for all other
| servers connected to the same pool.
| If applid or generic is not specified, the server treats this as equivalent to the
| command DISPLAY APPLIDS.
| If you specify applid.*, the server displays the UOW information for a
| specific APPLID, which should correspond to only one region in the
| sysplex.
| Note that only tables with a non-zero number of requests since the start of the
| current statistics interval are shown.
| SETXCF commands
| The server also responds to XES events such as an operator SETXCF command to
| alter the structure size. If the server can no longer access the coupling facility, it
| automatically issues a server CANCEL command to close itself down immediately.
|
| Deleting or emptying coupling facility data table pools
| You can delete a coupling facility data table pool using the MVS SETXCF command
| to delete its coupling facility list structure. For example:
| SETXCF FORCE,STRUCTURE,STRNAME=DFHCFLS_poolname
| When you attempt to start a server for a pool that has been deleted (or attempt to
| reload the pool), it is allocated as a new structure. The newly allocated structure
| uses size and location attributes specified by the currently active CFRM policy, and
| other values determined by the server initialization parameters (in particular,
| MAXTABLES).
|
| Unloading and reloading coupling facility data table pools
| You can unload, and reload, the complete contents of a coupling facility data table
| pool to and from a sequential data set by invoking the server program with the
| FUNCTION parameter, using the UNLOAD and RELOAD options. The unload and
| reload process preserves not only the table data, but also all recovery information
| such as unit of work status and record locks for recoverable updates.
| The sequential data set used for the unload and reload process has the following
| attributes:
| RECFM=F
| LRECL=4096
| BLKSIZE=4096
| You can obtain an estimate of the upper limit for the total size of the
| data set, in bytes, from the pool usage statistics produced by the
| server:
| v From the statistics, multiply the number of elements in use by the
| element size (usually 256) to get a total number of bytes for the data
| size, although the space actually needed to unload the data is
| normally much less, because unused space in a data element is not
| unloaded.
| v Add some space for the record keys, calculated using a two-byte
| prefix plus the keylength for each record, plus about 100 bytes per
| Note: If you omit the FUNCTION parameter, the server program initializes a
| coupling facility data table server address space.
| For the UNLOAD and RELOAD function, the server program requires exclusive use
| of the list structure. If the structure is currently being used by a normal server, the
| unload or reload attempt is rejected. Similarly, if a normal server attempts to start
| up while an unload or reload job is in progress, the attempt fails because shared
| access to the structure is not available.
| You can specify all normal server parameters when unloading or reloading, but
| some of these (for example, security-related parameters) are ignored because they
| do not apply to unload or reload processing.
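The following job is an illustrative sketch of unloading a pool, not a definitive example
from this book: the server program name DFHCFMN and the unload DDNAME DFHCFUL are
assumptions that you should verify against the server parameter reference, and the pool,
library, and data set names are placeholders.
//CFUNLOAD JOB ...
//*  Unload coupling facility data table pool PRODCFT1 to a sequential data set
//UNLOAD   EXEC PGM=DFHCFMN,REGION=32M,
//         PARM='FUNCTION=UNLOAD,POOLNAME=PRODCFT1'
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//*  Unload data set, using the attributes given above
//DFHCFUL  DD DSN=CICS.PRODCFT1.UNLOAD,DISP=(NEW,CATLG),
//         UNIT=SYSDA,SPACE=(4096,(5000,1000)),
//         DCB=(RECFM=F,LRECL=4096,BLKSIZE=4096)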
| Note that when a pool is nearly full (with less than about 5% free entries and
| elements) there is no guarantee that it can be unloaded and reloaded into a
| structure of exactly the same size. This is because the amount of space available is
| affected by the current ratio of entries to elements, which is controlled only
| approximately by the automatic ALTER process.
| If reloading fails because it runs out of space, the resulting messages include the
| numbers of tables reloaded and blocks read up to the time of the failure. You can
| compare these values with those in the messages from the original unload job, to
| determine how many more tables and how much more data remains to be loaded.
| If a table had been partially reloaded before running out of space, it is deleted so
| that the whole table is reloaded again if the reload is retried later.
| If reloading is interrupted for any reason other than running out of space (for
| example, by an MVS system failure), reloading can be restarted using the partially
| reloaded structure. In that case, however, the structure space occupied by any
| partially reloaded table is unavailable, so it is normally better to delete the
| structure (using the MVS SETXCF FORCE command) and start reloading again
| with a newly allocated structure.
| Note: Before you can start a server for a named counter pool, first define the
| coupling facility structure to be used for the pool. See “Chapter 8. Defining
| sequence numbering resources” on page 75 for information about defining a
| coupling facility list structure for a named counter server.
|
| Overview of a named counter server
| The CICS named counter facility generates unique sequence numbers for use by
| application programs in a Parallel Sysplex environment. Each named counter is
| held in a pool of named counters, which resides in a coupling facility list
| structure. An application retrieves the next number in sequence from a named
| counter through a callable programming interface.
| Figure 64 on page 404 illustrates a Parallel Sysplex with three CICS AORs linked to
| named counter servers.
|
Figure 64. Conceptual view of a Parallel Sysplex with named counter servers (the figure shows the pool held in coupling facilities CF1 and CF2)
|
| Defining and starting a named counter server region
| You activate a named counter pool in an MVS image by starting up a named
| counter server region for that pool. You can start the server as a started task,
| started job, or as a batch job.
| The most important parameter is the pool name, which is mandatory. Among other
| things, the pool name is used to form, with the prefix DFHNC, the server name
| (giving DFHNC.poolname).
| The easiest way to ensure that all pool-related parameters are consistent across
| MVS images is to use the same SYSIN parameter data set (or an identical copy of
| it) for all servers accessing the same pool, and to specify in the PARM field any
| parameters that vary between servers.
| For details of all the parameters, see “Named counter server parameters” on
| page 406.
| Named counter server REGION parameter: Use the JCL REGION parameter to
| ensure that the named counter server region has enough storage to process the
| maximum number of requests that can be executing concurrently.
| The named counter server typically uses less than one megabyte of storage above
| 16MB and less than 20KB below 16MB.
| During server initialization, the server acquires all the available storage above
| 16MB, as determined by the REGION parameter, then releases 5% of it for use by
| operating system services. It also acquires 5% of the free storage below 16MB for
| use in routines that require 24-bit addressable storage.
| After initialization, the server uses AXM page allocation services to manage its
| storage. Server statistics indicate how much storage is actually allocated and used
| within the storage areas above and below 16MB, which are called AXMPGANY and
| AXMPGLOW in the statistics.
| If a task in the server region or a cross-memory request runs out of storage, this is
| likely to result in AXM terminating that task or request using a simulated abend with
| system completion code 80A to indicate a GETMAIN failure. Although the server
| can usually continue processing other requests in this case, running out of storage
| in a critical routine can cause the server to terminate. Therefore, it is best to ensure
| that the REGION size is large enough to eliminate this risk.
Figure 65. Sample JCL to start a named counter server address space
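The sample JCL itself is not reproduced here; the following sketch shows the general
form, under the assumption that the server program is DFHNCMN and resides in the
CICSTS13.CICS.SDFHAUTH library (verify both names for your installation). POOLNAME is
the only mandatory parameter.
//NCPOOL1  JOB ...
//*  Start a named counter server for pool PRODNC1 (server name DFHNC.PRODNC1)
//NCSERVER EXEC PGM=DFHNCMN,REGION=32M
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=PRODNC1
/*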
| You can enter some parameter keywords in more than one form, such as in
| abbreviated or truncated form.
| The main parameters are listed on the server print file during start-up.
| Statistics parameters
| Use the following parameters to specify server statistics options:
| ENDOFDAY={00:00|hh:mm}
| specifies the time of day, in hours and minutes, when the server is to collect
| and reset end-of-day statistics.
| Note: If the STATSOPTIONS parameter specifies NONE, the server still writes
| end-of-day statistics to the print file.
| Note: If the value is greater than the value specified on the CFRM
| SIZE parameter, the server POOLSIZE parameter is ignored and
| the initial allocation is based on the parameters specified in the
| CFRM policy.
| This parameter is valid only at server initialization and is only used when the
| structure is first allocated.
| Trace messages from cross-memory requests can be lost if they are generated
| faster than the trace print subtask can print them. In this event, the trace only
| indicates how many messages were lost.
| CFTRACE={OFF|ON}
| specifies the coupling facility interface debug trace option.
| OFF Coupling facility interface debug trace is disabled.
| ON Coupling facility interface debug trace produces trace messages on the
| print file, indicating the main parameters to the coupling facility request
| interface, and the result from the IXLLIST macro.
| Warning parameters
| Use these parameters to modify the thresholds at which warning messages are
| issued when the structure becomes nearly full.
| ENTRYWARN={80|number}
| specifies the percentage of list structure entries in use at which warning
| messages should be first triggered.
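As an illustration, a SYSIN parameter set combining the statistics, trace, and warning
options described above might look like this; the values shown are arbitrary examples,
not recommendations:
POOLNAME=PRODNC1
ENDOFDAY=21:00
STATSINTERVAL=06:00
STATSOPTIONS=BOTH
CFTRACE=OFF
ENTRYWARN=85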
| Note: You can also use the MVS STOP command, which is equivalent to
| issuing the server STOP command through the MVS MODIFY command.
| The syntax of the STOP command is:
| STOP|P [jobname.]identifier[,A=asid]
| CANCEL
| Terminate the server immediately.
| See “Named counter server parameters” on page 406 for details of these keywords.
| Examples of the SET command: The following example changes the statistics
| options:
| SET STATSOPT=BOTH,EOD=21:00,STATSINT=06:00
| The parameters supported by the DISPLAY and PRINT commands are as follows:
| COUNTERS
| Display the names of all the named counters currently allocated in a pool.
| COUNTERS={name|generic_name}
| Display the details of a specific named counter, or set of named counters
| whose names match the generic name. Generic names are specified using the
| wildcard characters * (asterisk symbol) and % (per cent symbol).
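For example, assuming a pool named PRODNC1 (so that the server name is
DFHNC.PRODNC1) and counters whose names begin with PAY, the following console
commands display all the counters and then a generic subset:
F DFHNC.PRODNC1,DISPLAY COUNTERS
F DFHNC.PRODNC1,DISPLAY COUNTERS=PAY*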
| POOLNAME
| STATSOPT
| ENDOFDAY
| STATSINTERVAL
| POOLSIZE
| CFTRACE
| RQTRACE
| ENTRYWARN
| ENTRYWARNINC
| POOLSTATS
| STGSTATS
| You can delete a named counter pool by using the MVS SETXCF command to
| delete its coupling facility list structure. For example:
| SETXCF FORCE,STRUCTURE,STRNAME=DFHNCLS_poolname
| You can delete a structure only when there are no servers connected to the pool,
| otherwise MVS rejects the command.
| When you attempt to start a server for a pool that has been deleted (or attempt to
| reload the pool), it is allocated as a new structure. The newly allocated structure
| uses size and location attributes specified by the currently active CFRM policy.
|
| Unloading and reloading named counter pools
| You can unload, and reload, the complete contents of a named counter pool to and
| from a sequential data set by invoking the server program with the FUNCTION
| parameter, using the UNLOAD and RELOAD options.
| The sequential data set used for the unload and reload process has the following
| attributes:
| RECFM=F
| LRECL=4096
| BLKSIZE=4096
| RELOAD
| Reload, into the named counter pool named on the POOLNAME
| parameter, a previously unloaded named counter pool.
| The RELOAD function requires a DD statement for DDNAME
| DFHNCRL, describing the sequential data set from which the table pool
| is to be reloaded.
| The structure is allocated, if necessary, during reloading, in which case
| you can use the same server parameters to control structure attributes
| as for normal server startup. The reload process bypasses named
| counters that are already found in the pool (for example, because the
| structure was too small and the reload job had to be restarted after
| using ALTER to increase the structure size).
| Note: If you omit the FUNCTION parameter, the server program initializes a
| named counter server address space.
| For the UNLOAD and RELOAD function, the server program requires exclusive use
| of the list structure. If the structure is currently being used by a normal server, the
| unload or reload attempt is rejected. Similarly, if a normal server attempts to start
| up while an unload or reload job is in progress, the attempt fails because shared
| access to the structure is not available.
| You can specify all normal server parameters when unloading or reloading, but
| some of these (for example, statistics-related parameters) are ignored because they
| do not apply to unload or reload processing.
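A reload job might therefore take the following form. The DFHNCRL DD statement is as
described above; the server program name DFHNCMN and the pool and data set names
are assumptions used for illustration only.
//NCRELOAD JOB ...
//*  Reload named counter pool PRODNC1 from a previously unloaded data set
//RELOAD   EXEC PGM=DFHNCMN,REGION=32M,
//         PARM='FUNCTION=RELOAD,POOLNAME=PRODNC1'
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//DFHNCRL  DD DSN=CICS.PRODNC1.UNLOAD,DISP=SHR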
| If reloading fails because it runs out of space, the resulting messages include the
| numbers of named counters reloaded and blocks read up to the time of the failure.
Note: A check mark, shown as (U) after the functional group name, indicates that the
parameters are read from the SIT directly by the CICS component that uses them, and are
not obtained through the parameter manager domain interface. For more information
about the parameter manager domain, see “The CICS parameter manager domain” on
page 319.
Table 39. System initialization parameters grouped by functional area
Functional group: system initialization keywords
Application considerations (U): CMDPROT, CWAKEY, INITPARM, LGNMSG, OPERTIM, TCTUALOC, TCTUAKEY
Autoinstall for VTAM terminals and APPC connections (U): AIEXIT, AILDELAY, AIQMAX, AIRDELAY, AICONS
Autoinstall for programs: PGAICTLG, PGAIEXIT, PGAIPGM
Basic mapping support (U): BMS, PGCHAIN, PGCOPY, PGPURGE, PGRET, PRGDLAY, SKRxxxx
Data interchange (U): DIP
Dispatcher functions: ICV, ICVTSD, MXT, PRTYAGE, SUBTSKS
Dump functions: DUMP, DUMPDS, DUMPSW, DURETRY, SYDUMAX, TRDUMAX, TRTRANSZ, TRTRANTY
Exits (U): TBEXITS, TRAP
Extended recovery facility (U): ADI, AUTCONN, CLT, JESDI, PDI, RMTRAN, RST, TAKEOVR, XRF, XRFSOFF, XRFSTME
Files (user): FCT, FTIMEOUT, RLS
Front end programming interface: FEPI
Intersystem communication and multiregion operation (U): APPLID, DTRPGM, DTRTRAN, IRCSTRT, ISC, MROFSE, MROBTCH, MROLRM, SYSIDNT (For ISC, see also VTAM group.)
Journaling (U): AKPFREQ
Loading programs: LLACOPY, LPA, PGAICTLG, PGAIPGM, PGAIEXIT, PLTPI, PLTPISEC, PLTPIUSR, PRVMOD
Miscellaneous (U): DATFORM, DB2CONN, DBCTLCON, DOCCODEPAGE, FLDSEP, FLDSTRT, ISRDELAY, MSGCASE, MSGLVL, NATLANG, PRINT, SPOOL, STATRCD, WEBDELAY
Sending your comments to IBM
If you especially like or dislike anything about this book, please use one of the
methods listed below to send your comments to IBM.
Feel free to comment on what you regard as specific errors or omissions, and on
the accuracy, organization, subject matter, or completeness of this book.
Please limit your comments to the information in this book and the way in which the
information is presented.
When you send comments to IBM, you grant IBM a nonexclusive right to use or
distribute your comments in any way it believes appropriate, without incurring any
obligation to you.
You can send your comments to IBM in any of the following ways:
v By mail, to this address:
Information Development Department (MP095)
IBM United Kingdom Laboratories
Hursley Park
WINCHESTER,
Hampshire
United Kingdom
v By fax:
– From outside the U.K., after your international access code use
44–1962–870229
– From within the U.K., use 01962–870229
v Electronically, use the appropriate network ID:
– IBM Mail Exchange: GBIBM2Q9 at IBMMAIL
– IBMLink™: HURSLEY(IDRCF)
– Internet: idrcf@hursley.ibm.com