CICS Performance Guide
CICS® Transaction Server for OS/390®
SC33-1699-03
Note!
Before using this information and the product it supports, be sure to read the general information under “Notices” on
page xiii.
Contents
Isolating (fencing) real storage for CICS (PWSS and PPGRTR)  190
  Recommendations  191
  How implemented  191
  How monitored  191
Increasing the CICS region size  192
  How implemented  192
  How monitored  192
Giving CICS a high dispatching priority or performance group  192
  How implemented  193
  How monitored  193
Using job initiators  193
  Effects  194
  Limitations  194
  How implemented  194
  How monitored  194
Region exit interval (ICV)  194
  Main effect  195
  Secondary effects  195
  Where useful  196
  Limitations  196
  Recommendations  196
  How implemented  197
  How monitored  197
Use of LLA (MVS library lookaside)  197
  Effects of LLACOPY  198
  The SIT Parameter LLACOPY  198
DASD tuning  199
  Reducing the number of I/O operations  199
  Tuning the I/O operations  199
  Balancing I/O operations  200

Chapter 16. Networking and VTAM  201
Terminal input/output area (TYPETERM IOAREALEN or TCT TIOAL)  201
  Effects  201
  Limitations  202
  Recommendations  202
  How implemented  203
  How monitored  203
Receive-any input areas (RAMAX)  203
  Effects  203
  Where useful  204
  Limitations  204
  Recommendations  204
  How implemented  204
  How monitored  204
Receive-any pool (RAPOOL)  204
  Effects  205
  Where useful  205
  Limitations  205
  Recommendations  206
  How implemented  206
  How monitored  206
High performance option (HPO) with VTAM  207
  Effects  207
  Limitations  207
  Recommendations  207
  How implemented  207
  How monitored  207
SNA transaction flows (MSGINTEG, and ONEWTE)  208
  Effects  208
  Where useful  208
  Limitations  208
  How implemented  209
  How monitored  209
SNA chaining (TYPETERM RECEIVESIZE, BUILDCHAIN, and SENDSIZE)  209
  Effects  209
  Where useful  210
  Limitations  210
  Recommendations  210
  How implemented  210
  How monitored  210
Number of concurrent logon/logoff requests (OPNDLIM)  210
  Effects  211
  Where useful  211
  Limitations  211
  Recommendations  211
  How implemented  211
  How monitored  211
Terminal scan delay (ICVTSD)  211
  Effects  212
  Where useful  213
  Limitations  213
  Recommendations  213
  How implemented  214
  How monitored  214
Negative poll delay (NPDELAY)  214
  NPDELAY and unsolicited-input messages in TCAM  214
  Effects  214
  Where useful  215
Compression of output terminal data streams  215
  Limitations  215
  Recommendations  215
  How implemented  216
  How monitored  216
Automatic installation of terminals  216
  Maximum concurrent autoinstalls (AIQMAX)  216
  The restart delay parameter (AIRDELAY)  216
  The delete delay parameter (AILDELAY)  217
  Effects  218
  Recommendations  218
  How monitored  219

| Chapter 17. CICS Web support  221
| CICS Web performance in a sysplex  221
| CICS Web support performance in a single address space  222
| CICS Web use of DOCTEMPLATE resources  222
| CICS Web support use of temporary storage  223
| CICS Web support of HTTP 1.0 persistent connections  223
| CICS Web security  223
| CICS Web 3270 support  223
| Secure sockets layer support  224

Chapter 18. VSAM and file control  225
VSAM considerations: general objectives  225
  Local shared resources (LSR) or Nonshared resources (NSR)  225
  Number of strings  227
  Size of control intervals  229
  Number of buffers (NSR)  230
  Number of buffers (LSR)  230
  CICS calculation of LSR pool parameters  231
  Data set name sharing  232
  AIX considerations  233
  Situations that cause extra physical I/O  233
  Other VSAM definition parameters  234
VSAM resource usage (LSRPOOL)  234
  Effects  234
  Where useful  234
  Limitations  234
  Recommendations  234
  How implemented  234
VSAM buffer allocations for NSR (INDEXBUFFERS and DATABUFFERS)  235
  Effects  235
  Where useful  235
  Limitations  235
  Recommendations  235
  How implemented  235
  How monitored  236
VSAM buffer allocations for LSR  236
  Effects  236
  Where useful  236
  Recommendations  236
  How implemented  236
  How monitored  236
VSAM string settings for NSR (STRINGS)  237
  Effects  237
  Where useful  237
  Limitations  237
  Recommendations  237
  How implemented  237
  How monitored  237
VSAM string settings for LSR (STRINGS)  238
  Effects  238
  Where useful  238
  Limitations  238
  Recommendations  238
  How implemented  238
  How monitored  238
Maximum keylength for LSR (KEYLENGTH and MAXKEYLENGTH)  239
  Effects  239
  Where useful  239
  Recommendations  239
  How implemented  239
Resource percentile for LSR (SHARELIMIT)  239
  Effects  239
  Where useful  240
  Recommendations  240
  How implemented  240
VSAM local shared resources (LSR)  240
  Effects  240
  Where useful  240
  Recommendations  240
  How implemented  240
  How monitored  240
Hiperspace buffers  240
  Effects  241
  Limitations  241
  Recommendations  241
  How implemented  241
Subtasking: VSAM (SUBTSKS=1)  241
  Effects  242
  Where useful  243
  Limitations  243
  Recommendations  243
  How implemented  244
|   How monitored  244
Data tables  244
  Effects  244
  Recommendations  244
  How implemented  245
  How monitored  245
| Coupling facility data tables  245
|   Locking model  247
|   Contention model  247
|   Effects  248
|   Recommendations  248
|   How implemented  249
|   How monitored  249
|   CFDT statistics  250
|   RMF reports  251
| VSAM record-level sharing (RLS)  251
|   Effects  252
|   How implemented  253
|   How monitored  254

| Chapter 19. Java program objects  255
| Overview  255
| Performance considerations  255
|   DLL initialization  255
|   LE runtime options  256
|   API costs  257
|   CICS system storage  257
| Workload balancing of IIOP method call requests  258
|   CICS dynamic program routing  258
|   TCP/IP port sharing  258
|   Dynamic domain name server registration for TCP/IP  258

| Chapter 20. Java virtual machine (JVM) programs  259
| Overview  259
| Performance considerations  259
|   Storage usage  260
| How monitored  261

Chapter 21. Database management  263
DBCTL minimum threads (MINTHRD)  263
  Effects  263
  Where useful  263
  Limitations  263
  Implementation  263
  How monitored  264
DBCTL maximum threads (MAXTHRD)  264
  Effects  264
  Where useful  264
  Limitations  264
  Implementation  264
  How monitored  264
DBCTL DEDB parameters (CNBA, FPBUF, FPBOF)  264
  Where useful  265
  Recommendations  265
  How implemented  266
  How monitored  266
CICS DB2 attachment facility  266
  Effects  267
  Where useful  267
  How implemented  267
  How monitored  267
CICS DB2 attachment facility (TCBLIMIT, and THREADLIMIT)  268
  Effect  268
  Limitations  268
  Recommendations  268
  How monitored  269
CICS DB2 attachment facility (PRIORITY)  269
  Effects  269
  Where useful  269
  Limitations  269
  Recommendations  269
  How implemented  269
  How monitored  269

Chapter 22. Logging and journaling  271
Coupling facility or DASD-only logging?  271
  Integrated coupling migration facility  271
Monitoring the logger environment  271
Average blocksize  273
Number of log streams in the CF structure  274
  AVGBUFSIZE and MAXBUFSIZE parameters  274
  Recommendations  275
  Limitations  275
  How implemented  276
  How monitored  276
LOWOFFLOAD and HIGHOFFLOAD parameters on log stream definition  276
  Recommendations  277
  How implemented  278
  How monitored  278
Staging data sets  278
  Recommendations  279
Activity keypoint frequency (AKPFREQ)  279
  Limitations  280
  Recommendations  281
  How implemented  281
  How monitored  281
DASD-only logging  281

Chapter 23. Virtual and real storage  283
Tuning CICS virtual storage  283
Splitting online systems: virtual storage  284
  Where useful  285
  Limitations  285
  Recommendations  286
  How implemented  286
Maximum task specification (MXT)  287
  Effects  287
  Limitations  287
  Recommendations  287
  How implemented  288
  How monitored  288
Transaction class (MAXACTIVE)  288
  Effects  288
  Limitations  288
  Recommendations  288
  How implemented  289
  How monitored  289
Transaction class purge threshold (PURGETHRESH)  289
  Effects  290
  Where useful  290
  Recommendations  290
  How implemented  290
  How monitored  290
Task prioritization  291
  Effects  291
  Where useful  292
  Limitations  292
  Recommendations  292
  How implemented  293
  How monitored  293
Simplifying the definition of CICS dynamic storage areas  293
  Extended dynamic storage areas  294
  Dynamic storage areas (below the line)  295
Using modules in the link pack area (LPA/ELPA)  297
  Effects  297
  Limitations  297
  Recommendations  297
  How implemented  298
Map alignment  298
  Effects  298
  Limitations  298
  How implemented  299
  How monitored  299
Resident, nonresident, and transient programs  299
  Effects  299
  Recommendations  300
  How monitored  300
Putting application programs above the 16MB line  300
  Effects  300
  Where useful  301
  Limitations  301
  How implemented  301
Transaction isolation and real storage requirements  301
Limiting the expansion of subpool 229 using VTAM pacing  302
  Recommendations  302
  How implemented  303

Chapter 24. MRO and ISC  305
CICS intercommunication facilities  305
  Limitations  306
  How implemented  306
  How monitored  307
DBCTL session termination  364
Dispatcher domain  367
Dump domain  373
  System dumps  373
  Transaction dumps  376
Enqueue domain  378
Front end programming interface (FEPI)  381
File control  385
ISC/IRC system and mode entries  396
  System entry  397
  Mode entry  405
ISC/IRC attach time entries  410
Journalname  411
Log stream  413
LSRpool  416
Monitoring domain  428
Program autoinstall  430
Loader  431
Program  442
Recovery manager  445
Statistics domain  451
Storage manager  452
Table manager  464
TCP/IP Services - resource statistics  465
TCP/IP Services - request statistics  467
Temporary storage  468
Terminal control  474
Transaction class (TCLASS)  478
Transaction manager  482
Transient data  491
User domain statistics  499
VTAM statistics  500

Appendix B. Shared temporary storage queue server statistics  503
Shared TS queue server: coupling facility statistics  503
Shared TS queue server: buffer pool statistics  505
Shared TS queue server: storage statistics  506

| Appendix C. Coupling facility data tables server statistics  509
| Coupling facility data tables: list structure statistics  509
| Coupling facility data tables: table accesses statistics  511
| Coupling facility data tables: request statistics  512
| Coupling facility data tables: storage statistics  513

| Appendix D. Named counter sequence number server  515
| Named counter sequence number server statistics  515
| Named counter server: storage statistics  516

Appendix E. The sample statistics program, DFH0STAT  519
Analyzing DFH0STAT Reports  520
| System Status Report  521
Transaction Manager Report  526
Dispatcher Report  528
Dispatcher TCBs Report  530
Storage Reports  533
Loader and Program Storage Report  543
Storage Subpools Report  547
Transaction Classes Report  549
Transactions Report  551
Transaction Totals Report  552
Programs Report  554
Program Totals Report  556
DFHRPL Analysis Report  558
Programs by DSA and LPA Report  559
Temporary Storage Report  561
Temporary Storage Queues Report  566
Tsqueue Totals Report  567
Temporary Storage Queues by Shared TS Pool  567
Transient Data Report  569
Transient Data Queues Report  571
Transient Data Queue Totals Report  572
Journalnames Report  573
Logstreams Report  574
Autoinstall and VTAM Report  577
Connections and Modenames Report  580
| TCP/IP Services Report  584
LSR Pools Report  587
Files Report  592
File Requests Report  593
Data Tables Reports  595
Coupling Facility Data Table Pools Report  597
Exit Programs Report  598
Global User Exits Report  599
DB2 Connection Report  600
DB2 Entries Report  606
Enqueue Manager Report  609
Recovery Manager Report  612
Page Index Report  614

Appendix F. MVS and CICS virtual storage  615
MVS storage  616
  The MVS common area  616
  Private area and extended private area  619
The CICS private area  619
  High private area  621
MVS storage above region  623
The CICS region  623
  CICS virtual storage  623
  MVS storage  624
The dynamic storage areas  625
  CICS subpools  626
|   Short-on-storage conditions caused by subpool storage fragmentation  636
CICS kernel storage  639

Appendix G. Performance data  641
Variable costs  641
  Logging  642
  Syncpointing  643
Additional costs  644
Transaction initialization and termination  644
  Receive  644
  Attach/terminate  644
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply in the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore this statement may not apply
to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM United Kingdom
Laboratories, MP151, Hursley Park, Winchester, Hampshire, England, SO21 2JN.
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
Other company, product, and service names may be trademarks or service marks
of others.
Preface
What this book is about
This book is intended to help you to:
v Establish performance objectives and monitor them
v Identify performance constraints, and make adjustments to the operational CICS
system and its application programs.
This book does not discuss the performance aspects of the CICS Transaction Server
for OS/390 Release 3 Front End Programming Interface. For more information
about the Front End Programming Interface, see the CICS Front End Programming
Interface User’s Guide. This book does not contain Front End Programming Interface
dump statistics.
If you have a performance problem and want to correct it, read Parts 3 and 4. You
may need to refer to various sections in Part 2.
Notes on terminology
The following abbreviations are used throughout this book:
v “CICS” refers to the CICS element in the CICS Transaction Server for OS/390®
v “MVS” refers to the operating system, which can be either an element of
OS/390, or MVS/Enterprise System Architecture System Product (MVS/ESA SP).
v “VTAM®” refers to ACF/VTAM.
v “DL/I” refers to the database component of IMS/ESA.
If you have any questions about the CICS Transaction Server for OS/390 library,
see CICS Transaction Server for OS/390: Planning for Installation which discusses both
hardcopy and softcopy books and the ways that the books can be ordered.
ACF/VTAM
ACF/VTAM Installation and Migration Guide, GC31-6547-01
ACF/VTAM Network Implementation Guide, SC31-6548
DATABASE 2
DB2 for OS/390 Administration Guide, SC26-8957
DFSMS/MVS
DFSMS/MVS NaviQuest User’s Guide, SC26-7194
DFSMS/MVS DFSMSdfp Storage Administration Reference, SC26-4920
IMS/ESA
IMS/ESA Version 5 Admin Guide: DB, SC26-8012
IMS/ESA Version 5 Admin Guide: System, SC26-8013
IMS/ESA Version 5 Performance Analyzer’s User’s Guide, SC26-9088
IMS/ESA Version 6 Admin Guide: DB, SC26-8725
IMS/ESA Version 6 Admin Guide: System, SC26-8720
IMS Performance Analyzer User’s Guide, SC26-9088
MVS
OS/390 MVS Initialization and Tuning Guide, SC28-1751
OS/390 MVS Initialization and Tuning Reference, SC28-1752
OS/390 MVS JCL Reference, GC28-1757
OS/390 MVS System Management Facilities (SMF), GC28-1783
OS/390 MVS Planning: Global Resource Serialization, GC28-1759
OS/390 MVS Planning: Workload Management, GC28-1761
OS/390 MVS Setting Up a Sysplex, GC28-1779
OS/390 RMF
OS/390 RMF User’s Guide, GC28-1949-01
OS/390 Performance Management Guide, SC28-1951-00
OS/390 RMF Report Analysis, SC28-1950-01
OS/390 RMF Programmers Guide, SC28-1952-01
Tuning tools
Generalized Trace Facility Performance Analysis (GTFPARS) Program
Description/Operations Manual, SB21-2143
Network Performance Analysis and Reporting System Program Description/Operations,
SB21-2488
Network Program Products Planning, SC30-3351
Others
CICS Workload Management Using CICSPlex SM and the MVS/ESA Workload
Manager, GG24-4286
System/390 MVS Parallel Sysplex Performance, GG24-4356
System/390 MVS/ESA Version 5 Workload Manager Performance Studies, SG24-4352
IBM 3704 and 3705 Control Program Generation and Utilities Guide, GC30-3008
IMSASAP II Description/Operations, SB21-1793
Screen Definition Facility II Primer for CICS/BMS Programs, SH19-6118
Systems Network Architecture Management Services Reference, SC30-3346
Teleprocessing Network Simulator General Information, GH20-2487
Subsequent updates will probably be available in softcopy before they are available
in hardcopy. This means that at any time from the availability of a release, softcopy
versions should be regarded as the most up-to-date.
For CICS Transaction Server books, these softcopy updates appear regularly on the
Transaction Processing and Data Collection Kit CD-ROM, SK2T-0730-xx. Each reissue
of the collection kit is indicated by an updated order number suffix (the -xx part).
For example, collection kit SK2T-0730-06 is more up-to-date than SK2T-0730-05. The
collection kit is also clearly dated on the cover.
Updates to the softcopy are clearly marked by revision codes (usually a “#”
character) to the left of the changes.
| “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 replaces the
| chapter on Performance Reporter for MVS.
| A chapter has been added, “Chapter 19. Java program objects” on page 255, to
| introduce performance considerations when using Java language support.
| “Chapter 20. Java virtual machine (JVM) programs” on page 259 describes
| performance implications for programs run using the MVS Java Virtual Machine
| (JVM).
| “Chapter 8. Managing Workloads” on page 123 has been revised to discuss more
| fully the implications and benefits of using the MVS workload manager, and to
| introduce the CICSPlex SM dynamic routing program used by the WLM.
| Changes have also been made to several reports in the sample statistics program,
| DFH0STAT.
Good performance is the achievement of agreed service levels. This means that
system availability and response times meet users’ expectations using resources
available within the budget.
There are several basic steps in tuning a system, some of which may be just
iterative until performance is acceptable. These are:
1. Agree what good performance is.
2. Set up performance objectives (described in Chapter 1. Establishing
performance objectives).
3. Decide on measurement criteria (described in Chapter 3. Performance
monitoring and review).
4. Measure the performance of the production system.
5. Adjust the system as necessary.
6. Continue to monitor the performance of the system and anticipate future
constraints (see “Monitoring for the future” on page 15).
Parts 1 and 2 of this book describe how to monitor and assess performance.
Recommendations given in this book, based on current knowledge of CICS, are general in
nature, and cannot be guaranteed to improve the performance of any particular system.
Performance objectives often consist of a list of transactions and expected timings for
each. Ideally, through them, good performance can be easily recognized and you
know when to stop further tuning. They must, therefore, be:
v Practically measurable
v Based on a realistic workload
v Within the budget.
After you have defined the workload and estimated the resources required, you
must reconcile the desired response with what you consider attainable. These
objectives must then be agreed and regularly reviewed with users.
The word user here means the terminal operator. A user, so defined, sees CICS
performance as the response time, that is, the time between the last input action (for
example, a keystroke) and the expected response (for example, a message on the
screen). Several such responses might be required to complete a user function, and
the amount of work that a user perceives as a function can vary enormously. So,
the number of functions per period of time is not a good measure of performance,
unless, of course, there exists an agreed set of benchmark functions.
A more specific unit of measure is therefore needed. The words transaction and task
are used to describe units of work within CICS. Even these can lead to ambiguities,
because it would be possible to define transactions and tasks of varying size.
However, within a particular system, a series of transactions can be well defined
and understood so that it becomes possible to talk about relative performance in
terms of transactions per second (or minute, or hour).
In nonconversational mode, resources are allocated, used, and released
immediately on completion of the task. In this mode the words transaction and
task are more or less synonymous.
Conversational mode is potentially wasteful in a system that does not have
abundant resources. There are further questions and answers during which
resources are not released. Resources are, therefore, tied up unnecessarily waiting
for users to respond, and performance may suffer accordingly. Transaction and task
are, once again, more or less synonymous.

Conversational

├────────────────── Transaction ──────────────────┤
│                                                  │
├───────────────────── Task ──────────────────────┤
│         ┌────┐               ┌────┐              │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
          └────┘               └────┘
Pseudoconversational mode allows for slow response from the user. Transactions
are broken up into more than one task, yet the user need not know this. The
resources in demand are released at the end of each task, giving a potential for
improved performance.

Pseudoconversational

├────────────────── Transaction ──────────────────┤
│                                                  │
├───────── Task ─────────┼──────── Task ──────────┤
│         ┌────┐               ┌────┐              │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
          └────┘               └────┘
You should consider whether to define your criteria in terms of the average, the
90th percentile, or even the worst-case response time. Your choice may depend on
the audit controls of your installation and the nature of the transactions in
question.
Later, transactions with common profiles can be merged, for convenience, into
transaction categories.
Establish the priority of each transaction category, and note the periods during
which the priorities change.
See “Chapter 2. Gathering data for performance objectives” on page 7 for more
detailed recommendations on this step.
Any assumptions that you make about your installation must be used consistently
in future monitoring. These assumptions include computing-system factors and
business factors.
Business factors are concerned with work fluctuations. Allow for daily peaks (for
example, after receipt of mail), weekly peaks (for example, Monday peak after
weekend mail), and seasonal peaks as appropriate to the business. Also allow for
the peaks of work after planned interruptions, such as preventive maintenance and
public holidays.
Remember that, after the system has been brought into service, no amount of
tuning can compensate for poor initial design.
Post-development review
Review the performance of the complete system in detail. The main purposes are
to:
v Validate performance against objectives
v Identify resources whose use requires regular monitoring
v Feed the observed figures back into future estimates.
To achieve this, you should:
1. Identify discrepancies from the estimated resource use
2. Identify the categories of transactions that have caused these discrepancies
3. Assign priorities to remedial actions
4. Identify resources that are consistently heavily used
5. Provide utilities for graphic representation of these resources
6. Project the loadings against the planned future system growth to ensure that
adequate capacity is available
7. Update the design document with the observed performance figures
8. Modify the estimating procedures for future systems.
The data logged should include the date and time, location, duration, cause (if
known), and the action taken to resolve the problem.
Tasks (not to be confused with the task component of a CICS transaction) include:
v Running one or more of the tools described in “Chapter 4. An overview of
performance-measurement tools” on page 23
v Collating the output
v Examining it for trends.
You should allocate responsibility for these tasks between operations personnel,
programming personnel, and analysts. You must identify the resources that are to
be regarded as critical, and set up a procedure to highlight any trends in the use of
these resources.
Because the tools require resources, they may disturb the performance of a
production system.
Give emphasis to peak periods of activity, for both the new application and the
system as a whole. It may be necessary to run the tools more frequently at first to
confirm that the expected peaks correspond with the actual ones.
It is not normally practical to keep all the detailed output. Arrange for summarized
reports to be filed with the corresponding CICS statistics, and for the output from
the tools to be held for an agreed period, with customary safeguards for its
protection.
When to review?
You should plan for the following broad levels of monitoring activity:
v Dynamic (online) monitoring.
v Daily monitoring.
v Periodic (weekly and monthly) monitoring.
v Keeping sample reports as historical data. You can also keep historical data in a
database such as the Performance Reporter database.
Dynamic monitoring
Dynamic monitoring is “on-the-spot” monitoring that you can, and should, carry
out at all times. This type of monitoring generally includes the following:
v Observing the system’s operation continuously to discover any serious
short-term deviation from performance objectives.
Use the CEMT transaction (CEMT INQ|SET MONITOR), together with end-user
feedback; sample commands are shown after this list. You can also use the
Resource Measurement Facility (RMF) to collect information about processor,
channel, coupling facility, and I/O device usage.
v Obtaining status information. Together with status information obtained by
using the CEMT transaction, you can get status information on system
processing during online execution. This information could include the queue
levels, active regions, active terminals, and the number and type of
conversational transactions. You could get this information with the aid of an
automated program invoked by the master terminal operator. At prearranged
times in the production cycle (such as before scheduling a message, at shutdown
of part of the network, or at peak loading), the program could capture the
transaction processing status and measurements of system resource levels.
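For example, the CEMT monitoring commands mentioned in the first item might be
entered as follows (a sketch; the exact operand forms are given in the CICS
Supplied Transactions manual):

   CEMT INQUIRE MONITOR
   CEMT SET MONITOR ON PERF EXCEPT

The first command displays the current monitoring status; the second switches
monitoring on with both the performance class and the exception class active.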
Daily monitoring
The overall objective here is to measure and record key system parameters daily.
The daily monitoring data usually consists of counts of events and gross level
timings. In some cases, the timings are averaged for the entire CICS system.
v Record both the daily average and the peak period (usually one hour) average
of, for example, messages, tasks, processor usage, I/O events, and storage used.
Compare these against your major performance objectives and look for adverse
trends.
v List the CICS-provided statistics at the end of every CICS run. You should date
and time-stamp the data that is provided, and file it for later review. For
example, in an installation that has settled down, you might review daily data at
the end of the week. Generally, you can carry out reviews less frequently than
collection for any one type of monitoring data. If you know there is a problem,
you might increase the frequency: for example, by reviewing daily data as soon as
it becomes available.
You should be familiar with all the facilities in CICS for providing statistics at
times other than at shutdown. The main facilities, using the CEMT transaction,
are invocation from a terminal (with or without reset of the counters) and
automatic time-initiated requests.
v File an informal note of any incidents reported during the run. These may
include a shutdown of CICS that causes a gap in the statistics, a complaint from
your end users of poor response times, a terminal going out of service, or any
other item of significance. Such notes are useful when reconciling disparities in
detailed performance figures that may be discovered later.
v Print the system console log for the period when CICS was active, and file a
copy of the console log in case it becomes necessary to review the CICS system
performance in the light of the concurrent batch activity.
v Run one of the performance analysis tools described in “Chapter 4. An overview
of performance-measurement tools” on page 23 for at least part of the day if
there is any variation in load from day to day. File the summaries of the reports
produced by the tools you use.
v Transcribe onto a graph any items identified as being consistently heavily used
in the post-development review phase (described in “Chapter 2. Gathering data
for performance objectives” on page 7).
v Collect CICS statistics, monitoring data, and RMF™ data into the Performance
Reporter database.
Weekly monitoring
Here, the objective is to periodically collect detailed statistics on the operation of
your system for comparison with your system-oriented objectives and workload
profiles.
v Run the CICS monitoring facility with performance class active, and process it. It
may not be necessary to do this every day, but it is important to do it regularly
and to keep the sorted summary output as well as the detailed reports.
Monthly monitoring
v Run RMF.
v Review the RMF and performance analysis listings. If there is any indication of
excessive resource usage, follow any previously agreed procedures (for example,
notify your management), and do further monitoring.
v Date- and time-stamp the RMF output and keep it for use in case performance
problems start to arise. You can also use the output in making estimates, when
detailed knowledge of component usage may be important. These aids provide
detailed data on the usage of resources within the system, including processor
usage, use of DASD, and paging rates.
v Produce monthly Performance Reporter reports showing long-term trends.
In a complex production system there is usually too much performance data for it
to be comprehensively reviewed every day. Key components of performance
degradation can be identified with experience, and those components are the ones
to monitor most closely. You should identify trends of usage and other factors
(such as batch schedules) to aid in this process.
Generally, there should be a progressive review of data. You should review daily
data weekly, and weekly data monthly, unless any incident report or review raises
questions that require an immediate check of the next level of detail. This should
be enough to detect out-of-line situations with a minimum of effort.
The review procedure also ensures that additional data is available for problem
determination, should it be needed. The weekly review should require
approximately one hour, particularly after experience has been gained in the
process and after you are able to highlight the items that require special
consideration. The monthly review will probably take half a day at first. After the
procedure has been in force for a period, it will probably be completed more
quickly. However, when new applications are installed or when the transaction
volumes or numbers of terminals are increased, the process is likely to take longer.
Review the data from the RMF listings only if there is evidence of a problem from
the gross-level data, or if there is an end-user problem that cannot be solved by the
review process. Thus, the only time that needs to be allocated regularly to the
detailed data is the time required to ensure that the measurements were correctly
made and reported.
Do not discard all the data you collect after a certain period. Discard most, but
leave a representative sample. For example, do not throw away all weekly reports
after three months; it is better to save those dealing with the last week of each
month. At the end of the year, you can discard all except the last week of each
quarter. At the end of the following year, you can discard all the previous year’s
data except for the midsummer week. Similarly, you should keep a representative
selection of daily figures and monthly figures.
The intention is that you can compare any report for a current day, week, or month
with an equivalent sample, however far back you want to go. The samples become
more widely spaced but do not cease.
When you measure performance against objectives and report the results to users,
you have to identify any systematic differences between the measured data and the
estimates on which the objectives were based.
If the measurements differ greatly from the estimates, you must revise application
response-time objectives or plan a reduced application workload, or upgrade your
system. If the difference is not too large, however, you can embark on tuning the
total system. Parts 3 and 4 of this book tell you how to do this tuning activity.
Some of the questions are not strictly to do with performance. For instance, if the
transaction statistics show a high frequency of transaction abends with usage of the
abnormal condition program, this could perhaps indicate signon errors and,
therefore, a lack of terminal operator training. This, in itself, is not a performance
problem, but is an example of the additional information that can be provided by
monitoring.
1. How frequently is each available function used?
a. Has the usage of transaction identifiers altered?
b. Does the mix vary from one time of the day to another?
c. Should statistics be requested more frequently during the day to verify this?
In these cases, you have to identify the function by program or data set usage,
with appropriate reference to the CICS program statistics, file statistics, or other
statistics. In addition, you may be able to put user tags into the monitoring
data (for example, a user character field in the case of the CICS monitoring
facility), which can be used as a basis for analysis by products such as the
Tivoli Performance Reporter.
In addition to the above, you should regularly review certain items in the CICS
statistics, such as:
v Times the MAXTASK limit reached (transaction manager statistics)
v Peak tasks (transaction class statistics)
v Times cushion released (storage manager statistics)
v Storage violations (storage manager statistics)
v Maximum RPLs posted (VTAM statistics)
v Short-on-storage count (storage manager statistics)
v Wait on string total (file control statistics)
v Use of DFHSHUNT log streams.
| v Times aux. storage exhausted (temporary storage statistics)
| v Buffer waits (temporary storage statistics)
| v Times string wait occurred (temporary storage statistics)
| v Times NOSPACE occurred (transient data global statistics)
| v Intrapartition buffer waits (transient data global statistics)
| v Intrapartition string waits (transient data global statistics)
You should also satisfy yourself that large numbers of dumps are not being
produced.
Furthermore, you should review the effects of and reasons for system outages and
their duration. If there is a series of outages, you may be able to detect a common
cause of them.
When a major change to the system is planned, increase the monitoring frequency
before and after the change. A major change includes the addition of:
v A new application or new transactions
If the system performance has altered as a result of a major change to the system,
data for before-and-after comparison of the appropriate statistics provides the best
way of identifying the reasons for the alteration.
Consider having extra tools installed to make it easier to project and test future
usage of the system. Tools such as the Teleprocessing Network Simulator (TPNS)
program can be used to test new functions under volume conditions before they
actually encounter production volumes. Procedures such as these can provide you
with insight as to the likely performance of the production system when the
changes are implemented, and enable you to plan option changes, equipment
changes, scheduling changes, and other methods for stopping a performance
problem from arising.
You have to monitor all of these factors to determine when constraints in the
system may develop. A variety of programs could be written to monitor all these
resources. Many of these programs are currently supplied as part of IBM products
such as CICS or IMS/ESA, or are supplied as separate products. This chapter
describes some of the products that can give performance information on different
components of a production system.
The list of products in this chapter is far from being an exhaustive summary of
performance monitoring tools, yet the data provided from these sources comprises
a large amount of information. To monitor all this data is an extensive task.
Furthermore, only a small subset of the information provided is important for
identifying constraints and determining necessary tuning actions, and you have to
identify this specific subset for your particular CICS system.
You also have to bear in mind that there are two different types of tools:
1. Tools that directly measure whether you are meeting your objectives
2. Additional tools to look into internal reasons why you might not be meeting
objectives.
None of the tools can directly measure whether you are meeting end-user
response time objectives. The lifetime of a task within CICS is comparable to, and
usually related to, response time; bad response time is usually correlated with long
task lifetime within CICS, but this correlation is not exact because of other
contributors to response time.
Obviously, you want tools that help you to measure your objectives. In some cases,
you may choose a tool that looks at some internal function that contributes
towards your performance objectives, such as task lifetime, rather than directly
measuring the actual objective, because of the difficulty of measuring it.
When you have gained experience of the system, you should have a good idea of
the particular things that are most significant in that particular system and,
therefore, what things might be used as the basis for exception reporting. Then,
one way of simply monitoring the important data might be to set up
exception-reporting procedures that filter out the data that is not essential to the
tuning process. This involves setting standards for performance criteria that
identify constraints, so that the exceptions can be distinguished and reported while
data within the norms is filtered out.
You often have to gather a considerable amount of data before you can fully
understand the behavior of your own system and determine where a tuning effort
can provide the best overall performance improvement. Familiarity with the
analysis tools and the data they provide is basic to any successful tuning effort.
Remember, however, that all monitoring tools cost processing effort to use. Typical
costs are 5% additional processor cycles for the CICS monitoring facility
(performance class), and up to 1% for the exception class. The CICS trace facility
overhead is highly dependent on the workload used. The overhead can be in
excess of 25%.
In general, then, we recommend that you use the following tools in the sequence
of priorities shown below:
1. CICS statistics
2. CICS monitoring data
3. CICS internal and auxiliary trace.
In this chapter, the overview of the various tools for gathering or analyzing data is
arranged as follows:
v CICS performance data
v Operating system performance data
v Performance data for other products.
CICS statistics
CICS statistics are the simplest and the most important tool for permanently
monitoring a CICS system. They collect information on the CICS system as a
whole, without regard to tasks.
The CICS statistics domain writes five types of statistics to SMF data sets: interval,
end-of-day, requested, requested reset, and unsolicited statistics.
Each of these sets of data is described and a more general description of CICS
statistics is given in “Chapter 5. Using CICS statistics” on page 39 and “Appendix A.
CICS statistics tables” on page 345.
See “Appendix E. The sample statistics program, DFH0STAT” on page 519 for the
details and interpretation of the report.
The CICS trace facilities can also be useful for analyzing performance problems
such as excessive waiting on events in the system, or constraints resulting from
inefficient system setup or application program design.
Several types of tracing are provided by CICS, and are described in the CICS
Problem Determination Guide. Trace is controlled by:
v The system initialization parameters (see the CICS System Definition Guide).
v CETR (see the CICS Supplied Transactions manual). CETR also provides for trace
selectivity by, for instance, transaction type or terminal name.
v CEMT SET INTTRACE, CEMT SET AUXTRACE, or CEMT SET GTFTRACE (see
the CICS Supplied Transactions manual).
v EXEC CICS SET TRACEDEST, EXEC CICS SET TRACEFLAG, or EXEC CICS
SET TRACETYPE (see the CICS System Programming Reference for programming
information).
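For example, both internal and auxiliary trace could be switched on at startup
with system initialization parameters such as the following (an illustrative
combination; see the CICS System Definition Guide for the full syntax and
defaults):

   INTTR=ON,TRTABSZ=64,AUXTR=ON,AUXTRSW=ALL

This starts internal trace with a 64KB trace table, starts auxiliary trace, and
switches automatically to the alternate auxiliary trace data set each time the
current one fills.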
This data, used with the data produced by the measurement tools, provides the
basic information that you should have for evaluating your system’s performance.
RMF measures and reports system activity and, in most cases, uses a sampling
technique to collect data. Reporting can be done with one of three monitors:
1. Monitor I measures and reports the use of system resources (that is, the
processor, I/O devices, storage, and data sets on which a job can enqueue
during its execution). It runs in the background and measures data over a
period of time. Reports can be printed immediately after the end of the
measurement interval, or the data can be stored in SMF records and printed
RMF should be active in the system 24 hours a day, and you should run it at a
dispatching priority above other address spaces in the system so that:
v The reports are written at the interval requested
v Other work is not delayed because of locks held by RMF.
A report is generated at the time interval specified by the installation. The largest
system overhead of RMF occurs during the report generation: the shorter the
interval between reports, the larger the burden on the system. An interval of 60
minutes is recommended for normal operation. When you are addressing a specific
problem, reduce the time interval to 10 or 15 minutes. The RMF records can be
directed to the SMF data sets with the NOREPORT and RECORD options; the
report overhead is not incurred and the SMF records can be formatted later.
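For example, a Monitor I gatherer session might specify options like the
following (illustrative only; the option names and member format should be
checked in the OS/390 RMF User’s Guide):

   NOREPORT
   RECORD
   INTERVAL(60M)

With NOREPORT and RECORD, no printed report is produced, the data is
written to SMF, and reports can be formatted later for only the intervals of
interest.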
Note: There may be some discrepancy between the CICS initialization and
termination times when comparing RMF reports against output from the
CICS monitoring facility.
For further details of RMF, see the OS/390 Resource Measurement Facility (RMF)
User’s Guide, SC28-1949.
Guidance on how to use RMF with the CICS monitoring facility is given in “Using
CICS monitoring SYSEVENT information with RMF” on page 67. In terms of CPU
costs this is an inexpensive way to collect performance information. Shorter reports
throughout the day are needed for RMF because a report of a full day’s length
includes startup and shutdown and does not identify the peak period.
GTF should run at a dispatching priority (DPRTY) of 255 so that records are not
lost. If GTF records are lost and the DPRTY is specified at 255, specify the BUF
operand on the execute statement as greater than 10 buffers.
You can use these options to get the data normally needed for CICS performance
studies:
TRACE=SYS,RNIO,USR (VTAM)
TRACE=SYS (Non-VTAM)
If you need data on the units of work dispatched by the system and on the length
of time it takes to execute events such as SVCs, LOADs, and so on, the options are:
TRACE=SYS,SRM,DSP,TRC,PCI,USR,RNIO
The TRC option produces the GTF trace records that indicate GTF interrupts of
other tasks that it is tracing. This set of options uses a higher percentage of
processor resources, and you should use it only when you need a detailed analysis
or timing of events.
No data-reduction programs are provided with GTF. To extract and summarize the
data into a meaningful and manageable form, you can either write a
data-reduction program or use one of the program offerings that are available.
For further details, see the OS/390 MVS Diagnosis: Tools and Service Aids.
GTF reports
You can produce reports from GTF data using the interactive problem control
system (IPCS). The reports generated by IPCS are useful in evaluating both system
and individual job performance. It produces job and system summary reports as
well as an abbreviated detail trace report. The summary reports include
information on MVS dispatches, SVC usage, contents supervision, I/O counts and
timing, seek analysis, page faults, and other events traced by GTF. The detail trace
reports can be used to follow a transaction chronologically through the system.
Before GTF is run, you should plan the events to be traced. If specific events such
as start I/Os (SIOs) are not traced, and the SIO-I/O timings are required, the trace
must be re-created to get the data needed for the reports.
If there are any alternative paths to a control unit in the system being monitored,
you should include the PATHIO input statement in the report execution statement.
Without the PATHIO operand, there are multiple I/O lines on the report for the
device with an alternative path: one line for the primary device address and one
for the secondary device address. If this operand is not included, the I/Os for the
primary and alternate device addresses have to be combined manually to get the
totals for that device.
A large number of ready-made reports are available, and in addition you can
generate your own reports to meet specific needs.
In the reports the Tivoli Performance Reporter uses data from CICS monitoring
and statistics. Tivoli Performance Reporter also collects data from the MVS system
and from products such as RMF, TSO, IMS™ and NetView. This means that data
from CICS and other systems can be shown together, or can be presented in
separate reports.
Reports can be presented as plots, bar charts, pie charts, tower charts, histograms,
surface charts, and other graphic formats. The Tivoli Performance Reporter for
OS/390 simply passes the data and formatting details to Graphic Data Display
Manager (GDDM).
See “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 for more
information about the Tivoli Performance Reporter for OS/390 as a CICS
performance measurement tool.
This section gives an overview of the tools that can be used to monitor information
on various access methods and other programs used with CICS and the operating
system.
ACF/VTAM
ACF/VTAM® (program number 5735-RC2) provides information about buffer
usage either to GTF in SMF trace data or to the system console through DISPLAY
and BFRUSE commands. Other tuning statistics can also be recorded on the system
console through the MODIFY procname, TNSTAT command. (This command is
described in the ACF/VTAM Diagnostic Techniques manual.)
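From the MVS console, these might be entered as follows (a sketch; procname is
the name of the VTAM start procedure, and the operand details should be
checked in the VTAM operation documentation):

   D NET,BFRUSE
   F procname,TNSTAT,TIME=60

The DISPLAY command shows buffer usage; the MODIFY command records
tuning statistics, here at 60-minute intervals.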
LISTCAT (VSAM)
VSAM LISTCAT provides information that reflects the current state of VSAM
data sets. This information includes counts of the following:
v Whether and how often control interval (CI) or control area (CA) splits occur
(splits should occur very rarely, especially CA splits).
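A minimal sketch of invoking LISTCAT through IDCAMS in batch (the data set
name is hypothetical):
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LISTCAT ENTRIES(CICSTS.USER.KSDS) ALL
/*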
DB monitor (IMS)
The IMS DB monitor report print program (DFSUTR30) provides information on
batch activity (a single-thread environment) to IMS databases, and is activated
through the DLMON system initialization parameter. As in the case of CICS
auxiliary trace, this is for more in-depth investigation of performance problems by
single-thread studies of individual transactions.
The DB monitor cannot be started or stopped dynamically from a terminal. After
the DB monitor is started in a CICS environment, the only way to stop it is to
shut down CICS.
When the DB monitor runs out of space on the IMSMON data set, it stops
recording. The IMSMON data set is a sequential data set, for which you can
allocate space with IEFBR14. The DCB attributes are:
DCB=(RECFM=VB,LRECL=2044,BLKSIZE=2048)
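A minimal sketch of allocating the IMSMON data set with IEFBR14 (the data set
name, unit, and space values are illustrative):
//DEFMON   EXEC PGM=IEFBR14
//IMSMON   DD  DSN=IMS.IMSMON,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(5,1)),
//             DCB=(RECFM=VB,LRECL=2044,BLKSIZE=2048)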
If you are running the DB monitor in a multithread environment (more than one
concurrent thread), the only statistics that are valid are the VSAM buffer pool
statistics.
IMS System Utilities/Database Tools (DBT)
DBT can help you maintain data integrity by assisting the detection and repair of
errors before a problem disrupts operations. It speeds database reorganization by
providing a clear picture of how data is stored in the database, by allowing the
user to simulate various database designs before creating a new database, and by
providing various sort, unload, and reload facilities.
For further information, see the IMS System Utilities/Database Tools (DBT) General
Information manual.
IMSASAP:
v Produces a comprehensive set of reports, organized by level of detail and area of
analysis, to satisfy a wide range of IMS/ESA system analysis requirements
v Provides report selection and reporting options to satisfy individual
requirements and to assist in efficient analysis
v Produces alphanumerically collated report items in terms of ratios, rates, and
percentages to facilitate a comparison of results without additional computations
v Reports on schedules in progress including wait-for-input and batch message
processing programs
v Provides reports on IMS/ESA batch programs.
Statistics are collected during CICS online processing for later offline analysis. The
statistics domain writes statistics records to a System Management Facilities (SMF)
data set. The records are of SMF type 110, sub-type 002. Monitoring records and
some journaling records are also written to the SMF data set as type 110 records.
You might find it useful to process statistics and monitoring records together. For
programming information about SMF, and about other SMF data set
considerations, see the CICS Customization Guide.
End-of-day statistics are always written to the SMF data set, regardless of
the settings of any of the following:
v The system initialization parameter, STATRCD, or
v CEMT SET STATISTICS or
v The RECORDING option of EXEC CICS SET STATISTICS.
Requested statistics
are statistics that the user has asked for by using one of the following
commands:
v CEMT PERFORM STATISTICS RECORD
v EXEC CICS PERFORM STATISTICS RECORD
v EXEC CICS SET STATISTICS ON|OFF RECORDNOW.
These commands cause the statistics to be written to the SMF data set
immediately, instead of waiting for the current interval to expire. The
PERFORM STATISTICS command can be issued with any combination of
resource types or you can ask for all resource types with the ALL option.
For more details about CEMT commands see the CICS Supplied
Transactions; for programming information about the equivalent EXEC
CICS commands, see the CICS System Programming Reference.
Requested reset statistics
differ from requested statistics in that all statistics are collected and
statistics counters are reset. You can reset the statistics counters using the
following commands:
v CEMT PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS SET STATISTICS ON|OFF RESETNOW RECORDNOW
The PERFORM STATISTICS command must be issued with the ALL option
if RESETNOW is present.
You can also invoke requested reset statistics when changing the recording
status from ON to OFF, or vice versa, using the CEMT SET STATISTICS
command with the RECORDNOW and RESETNOW options; these options
take effect only when the recording status is actually changed.
The effect of each event depends on whether the recording status is ON or OFF.
When the recording status is ON:
v Expiry of INTERVAL: writes to the SMF data set; resets counters.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set; resets counters.
When the recording status is OFF:
v Expiry of INTERVAL: no action.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set; resets counters.
Figure 1. Resetting statistics counters (the events are shown on a timeline from
0800 to 2100, where I marks an interval expiry and E marks the end-of-day expiry)
Unsolicited statistics
are automatically gathered by CICS for dynamically allocated and
deallocated resources. CICS writes these statistics to SMF just before the
resource is deleted, regardless of the status of statistics recording.
Note: To ensure that accurate statistics are recorded, unsolicited statistics (USS)
must be collected. An unsolicited record resets the statistics fields it contains.
In particular, during a normal CICS shutdown, files are closed before the
end-of-day statistics are gathered. This means that file and LSRPOOL end-of-day
statistics will be zero, while the correct values will be recorded as
unsolicited statistics.
For detailed information about the reset characteristics, see “Appendix A. CICS
statistics tables” on page 345.
The arrival of the end-of-day time, as set by the ENDOFDAY parameters, always
causes the current interval to be ended (possibly prematurely) and a new interval
to be started. Only end-of-day statistics are collected at the end-of-day time, even if
it coincides exactly with the expiry of an interval.
Changing the end-of-day value immediately changes the times at which INTERVAL
statistics are recorded. In Figure 2, when the end-of-day is changed from midnight
to 1700 just after 1400, the effect is for the interval times to be calculated from the
new end-of-day time. Hence there is a new interval expiry at 1500, as well as at
the times after the new end-of-day time.
When you change any of the INTERVAL values (and also when CICS is
initialized), the length of the current (or first) interval is adjusted so that it expires
after an integral number of intervals from the end-of-day time.
Figure 2. Changing the end-of-day value (a timeline from 0800 to 2100 showing a
change to INTERVAL(020000) and a change to ENDOFDAY(170000); I marks an
interval expiry and E marks the end-of-day expiry)
Note: Interval statistics are taken precisely on a minute boundary. Thus users with
many CICS regions on a single MVS image could have every region writing
statistics at the same time, if the regions have both the same interval and the
same end-of-day time specified. This could cost several seconds of processor
time across the image. If the cost becomes too noticeable, in terms of user
response time around the interval expiry, you should consider staggering the
intervals. One way of doing this while still maintaining very close
correlation of intervals for all regions is to use a PLT program like the
supplied sample DFH$STED, which changes the end-of-day, and thus each
interval expiry boundary, by a few seconds. See the CICS Operations and
Utilities Guide for further information about DFH$STED.
For more information about the statistics domain statistics, see page 451.
For more information about transaction manager statistics, see page 482.
For more information, see the transaction class statistics on page 478.
The CICS DB2 global and resource statistics are described in the CICS statistics
tables on page 352. For more information about CICS DB2 performance, see the
CICS DB2 Guide.
Dispatcher statistics
TCB statistics
The “Accum CPU time/TCB” is the amount of CPU time consumed by each CICS
TCB since the last time statistics were reset. Totaling the values of “Accum time in
MVS wait” and “Accum time dispatched” gives you the approximate time since
the last time CICS statistics were reset. The ratio of the “Accum CPU time/TCB”
to this time shows the percentage usage of each CICS TCB. The “Accum CPU
time/TCB” does not include uncaptured time; thus even a totally busy CICS TCB
would appear noticeably less than 100% busy from this calculation. If a CICS
region is more than 70% busy by this method, you are approaching that region’s
capacity. The 70% calculation can only be very approximate, however, depending
on such factors as the workload in operation, the mix of activity within the
workload, and which release of CICS you are currently using. Alternatively, you
can determine whether your system is approaching capacity by using RMF to
obtain a definitive measurement, or you can use RMF with your monitoring
system. For more information, see OS/390 RMF V2R6 Performance Management
Guide, SC28-1951. (A worked example follows the note below.)
Note: “Accum time dispatched” is NOT a measurement of CPU time because MVS
can run higher priority work, for example, all I/O activity and higher
priority regions, without CICS being aware.
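For illustration, suppose the statistics for one CICS TCB show the following
(hypothetical) values:
Accum time in MVS wait      1200 seconds
Accum time dispatched        600 seconds
Accum CPU time/TCB           540 seconds
The approximate time since the last reset is 1200 + 600 = 1800 seconds, so this
TCB is roughly 540/1800 = 30% busy, comfortably below the 70% guideline.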
For more information, see the CICS statistics tables on page 452.
Loader statistics
“Average loading time” = “Total loading time” / “Number of library load
requests”. This indicates the response time overhead suffered by tasks when
accessing a program which has to be brought into storage. If “Average loading
time” has increased over a period, consider MVS library lookaside usage.
“Not-in-use” program storage is freed progressively so that the “Amount of the
dynamic storage area occupied by not in use programs” and the free storage in
the dynamic storage area are optimized for performance. The loader attempts to
keep not-in-use programs in storage long enough to reduce the performance
overhead of reloading the program. As the amount of free storage in the dynamic
storage area decreases, the not-in-use programs are freemained in order of those
least frequently used, to avoid a potential short-on-storage condition.
Note: The values reported are for the instant at which the statistics are gathered,
and may vary from one report to the next.
Note: This factor is meaningful only if there has been a substantial degree of
loader domain activity during the interval and may be distorted by startup
usage patterns.
This is an indication of the response time impact which may be suffered by a task
due to contention for loader domain resources.
Note: This calculation is not performed on requests that are currently waiting.
For more information, see the CICS statistics tables on page 431.
The “Writes more than control interval” is the number of writes of records whose
length was greater than the control interval (CI) size of the TS data set. If this
value is large, consider increasing the CI size.
The number of “times aux. storage exhausted” is the number of situations where
one or more transactions may have been suspended because of a NOSPACE
condition, or may have been forced to abend (if a HANDLE CONDITION
NOSPACE command is active, the RESP option is used on the WRITEQ TS
command, or the WRITEQ TS NOSUSPEND command is used). If this item
appears in the statistics, increase the size of
the temporary storage data set. “Buffer writes” is the number of WRITEs to the
temporary storage data set. This includes both WRITEs necessitated by recovery
requirements and WRITEs forced by the buffer being needed to accommodate
another CI. I/O activity caused by the latter reason can be minimized by
increasing buffer allocation using the system initialization parameter, TS=(b,s),
where b is the number of buffers and s is the number of strings.
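For example (the values are purely illustrative), the following system
initialization parameter allocates eight buffers and four strings:
TS=(8,4)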
The “Peak number of strings in use” item is the peak number of concurrent I/O
operations to the data set. If this is significantly less than the number of strings
specified in the TS system initialization parameter, consider reducing the system
initialization parameter to approach this number.
If the “Times string wait occurred” is not zero, consider increasing the number of
strings. For details about adjusting the size of the TS data set and the number of
strings and buffers, see the CICS System Definition Guide.
For more information, see the CICS statistics tables on page 468
You should aim to minimize the “Intrapartition buffer waits” and “string waits” by
increasing the number of buffers and the number of strings if you can afford any
associated increase in your use of real storage.
For more information, see the CICS statistics tables on pages 503 and 468.
User domain statistics
The user domain attempts to minimize the number of times it calls the security
domain to create user security blocks (such as the ACEE), because this operation is
very expensive in both processor time and input/output operations. If possible,
each unique representation of a user is shared between multiple transactions. A
user-domain representation of a user can be shared if the following attributes are
identical:
v The userid.
v The groupid.
The user domain keeps a count of the number of concurrent usages of a shared
instance of a user. The count includes the number of times the instance has been
associated with a CICS resource (such as a transient data queue) and the number
of active transactions that are using the instance.
Whenever CICS adds a new user instance to the user domain, the domain attempts
to locate that instance in its user directory. If the user instance already exists with
the parameters described above, that instance is reused. USGDRRC records how
many times this is done. However, if the user instance does not already exist, it
needs to be added. This requires an invocation of the security domain and the
external security manager. USGDRNFC records how many times this is necessary.
When the count associated with the instance is reduced to zero, the user instance is
not immediately deleted: instead it is placed in a timeout queue controlled by the
USRDELAY system initialization parameter. While it is in the timeout queue, the
user instance is still eligible to be reused. If it is reused, it is removed from the
timeout queue. USGTORC records how many times a user instance is reused while
it was being timed out, and USGTOMRT records the average time that user
instances remain on the timeout queue until they are removed.
However, if a user instance remains on the timeout queue for a full USRDELAY
interval without being reused, it is deleted. USGTOEC records how many times
this happens.
You should be aware that high values of USRDELAY may affect your security
administrator’s ability to change the authorities and attributes of CICS users,
because those changes are not reflected in CICS until the user instance is refreshed
in CICS by being flushed from the timeout queue after the USRDELAY interval.
Some security administrators may require you to specify USRDELAY=0. This still
allows some sharing of user instances if the usage count is never reduced to zero.
Generally, however, remote users are flushed out immediately after the transaction
they are executing has terminated, so that their user control blocks have to be
reconstructed frequently. This results in poor performance. For more information,
see “User domain statistics” on page 499.
VTAM statistics
The “peak RPLs posted” includes only the receive-any RPLs defined by the
RAPOOL system initialization parameter. In non-HPO systems, the value shown
can be larger than the value specified for RAPOOL, because CICS reissues each
receive-any request as soon as the input message associated with the posted RPL
has been disposed of. VTAM may well cause this reissued receive-any RPL to be
posted during the current dispatch of terminal control. While this does not
necessarily indicate a performance problem, a number much higher than the
RAPOOL value may indicate that RAPOOL should be increased.
In addition to indicating whether the value for the RAPOOL system initialization
parameter is large enough, you can also use the “maximum number of RPLs
posted” statistic (A03RPLX) to determine other information. This depends upon
whether your MVS system has HPO or not.
For HPO, RAPOOL(A,B) allows the user to tune the active count (B). The size of
the pool (A) should depend on the speed at which the posted RPLs are processed.
The active count (B) has to be able to satisfy VTAM at any given time, and is
dependent on the inbound message rate for receive-any requests.
Here is an example to illustrate the differences for an HPO and a non-HPO system.
Suppose two similar CICS executions use a RAPOOL value of 2 for both runs. The
number of RPLs posted in the MVS/HPO run is 2, while the MVS/non-HPO run
is 31. This difference is better understood when we look at the next item in the
statistics.
This item is not printed if the maximum number of RPLs posted is zero. In our
example, let us say that the MVS/HPO system reached the maximum 495 times.
The non-HPO MVS system reached the maximum of 31 only once. You might
deduce from this that the pool is probably too small (RAPOOL=2) for the HPO
system and it needs to be increased. An appreciable increase in the RAPOOL value,
from 2 to, say, 6 or more, should be tried. As you can see from the example given
below, the RAPOOL value was increased to 8 and the maximum was reached only
16 times:
MAXIMUM NUMBER OF RPLS POSTED 8
NUMBER OF TIMES REACHED MAXIMUM 16
In a non-HPO system, these two statistics are less useful, except that, if the
maximum number of RPLs posted is less than RAPOOL, RAPOOL can be reduced,
thereby saving virtual storage.
VTAM SOS simply means that a CICS request for service from VTAM was rejected
with a VTAM sense code indicating that VTAM was unable to acquire the storage
required to service the request. VTAM does not give any further information to
CICS, such as what storage it was unable to acquire.
This situation most commonly arises at network startup or shutdown, when CICS
is trying to schedule requests concurrently to a larger number of terminals than
during normal execution. If the count is not very high, it is probably not worth
tracking down. In any case, CICS automatically retries the failing requests later on.
If your network is growing, however, you should monitor this statistic and, if the
count is starting to increase, you should take action. Use D NET,BFRUSE to check
if VTAM is short on storage in its own region and increase VTAM allocations
accordingly if this is required.
The maximum value for this statistic is 99, at which time a message is sent to the
console and the counter is reset to zero. However, VTAM controls its own buffers
and gives you a facility to monitor buffer usage.
For more information, see the CICS statistics tables on page 500.
Dump statistics
Both transaction and system dumps are very expensive; their causes should be
thoroughly investigated and eliminated.
For more information, see the CICS statistics tables on page 373.
Enqueue statistics
The enqueue domain supports the CICS recovery manager. Enqueue statistics
contain the global data collected by the enqueue domain for enqueue requests.
Waiting for an enqueue on a resource can add significant delays in the execution of
a transaction. The enqueue statistics allow you to assess the impact of waiting for
enqueues in the system and the impact of retained enqueues on waiters. Both the
current activity and the activity since the last reset are available.
For more information, see the CICS statistics tables on page 378.
Transaction statistics
Use these statistics to find out which transactions (if any) had storage violations.
It is also possible to use these statistics for capacity planning purposes. But
remember that many systems experience both increasing cost per transaction and
increasing transaction rates.
For more information, see the CICS statistics tables on page 484.
Program statistics
“Average fetch time” is an indication of how long it actually takes MVS to perform
a load from the partitioned data set in the RPL concatenation into CICS managed
storage.
The average for each RPL offset of “Program size” / “Average fetch time” is an
indication of the byte transfer rate during loads from a particular partitioned data
set. A comparison of these values may assist you in detecting bad channel loading
or file layout problems.
For more information, see the CICS statistics tables on page 442.
For more information, see the CICS statistics tables on page 382.
File statistics
File statistics collect data about the number of application requests against your
data sets. They indicate the number of requests for each type of service that are
processed against each file. If the number of requests is totalled daily or for every
CICS execution, the activity for each file can be monitored for any changes that
occur. Note that these file statistics may have been reset during the day; to obtain a
figure of total activity against a particular file during the day, refer to the
DFHSTUP summary report. Other data pertaining to file statistics and special
processing conditions are also collected.
The wait-on-string number is only significant for files related to VSAM data sets.
For VSAM, STRNO=5 in the file definition means, for example, that CICS permits
five concurrent requests to this file. If a transaction issues a sixth request for the
same file, this request must wait until one of the other five requests has completed
(“wait-on-string”).
The strings statistic shows the number of strings associated with a file, as
specified through resource definition online.
String number setting is important for performance. Too low a value causes
excessive waiting for strings by tasks and long response times. Too high a value
increases VSAM virtual storage requirements and therefore real storage usage.
However, as both virtual storage and real storage are above the 16MB line, this
may not be a problem. In general, the number of strings should be chosen to give
a near-zero “wait on string” count (an RDO example follows the note below).
Note: Increasing the number of strings can increase the risk of deadlocks because
of greater transaction concurrency. To minimize the risk you should ensure
that applications follow the standards set in the CICS Application
Programming Guide.
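For example (the file and group names are hypothetical; STRINGS is the RDO
equivalent of the STRNO value mentioned above), you might define:
CEDA DEFINE FILE(ACCTFIL) GROUP(ACCTGRP) STRINGS(5)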
A file can also “wait-on-string” for an LSRpool string. This type of wait is reflected
in the local shared resource pool statistics section (see “LSRPOOL statistics” on
page 56) and not in the file wait-on-string statistics.
If you are using data tables, an extra line appears in the DFHSTUP report for those
files defined as data tables. “Read requests”, “Source reads”, and “Storage
alloc(K)” are usually the numbers of most significance. For a CICS-maintained
table a comparison of the difference between “read requests” and “source reads”
with the total request activity reported in the preceding line shows how the
request traffic divides between using the table and using VSAM and thus indicates
the effectiveness of converting the file to a CMT. “Storage alloc(K)” is the total
storage allocated for the table; it gives guidance on the cost of the table in
storage resource, bearing in mind the possibility of reducing LSRpool sizes in the
light of reduced VSAM accesses.
Journalname statistics
Journalname statistics contain data about the use of each journal, as follows:
v The journal type (MVS logger, SMF or dummy)
v The log stream name for MVS logger journal types only
v The number of API journal writes
v The number of bytes written
v The number of flushes of journal data to log streams or SMF.
Note that the CICS system journalname and log stream statistics for the last three
items on this list are always zero. These entries appear in journalname statistics to
inform you of the journal type and log stream name for the special CICS system
journals.
For more information on journalname statistics, see the CICS statistics tables on
page 411.
Log stream statistics
Log stream statistics contain data about the use of each log stream including the
following:
v The number of write requests to the log stream
v The number of bytes written to the log stream
v The number of log stream buffer waits
v The number of log stream browse and delete requests.
For more information on log stream statistics, see the CICS statistics tables on page
413.
For more information on logging and journaling, see “Chapter 22. Logging and
journaling” on page 271.
For information about the SMF Type 88 records produced by the MVS system
logger, see the OS/390 MVS System Management Facilities (SMF) manual.
You should usually aim to have no requests that waited for a string. If you do,
then the use of MXT may be more effective.
When the last open file in an LSRPOOL is closed, the pool is deleted. The
subsequent unsolicited statistics (USS) LSRPOOL record written to SMF can be
mapped by the DFHA08DS DSECT.
The fields relating to the size and characteristics of the pool (maximum key length,
number of strings, number and size of buffers) may be those which you have
specified for the pool, through resource definition online command DEFINE
LSRPOOL. Alternatively, if some, or all, of the fields were not specified, the values
of the unspecified fields are those calculated by CICS when the pool is built.
You should consider specifying separate data and index buffers if you have not
already done so. This is especially true if index CI sizes are the same as data CI
sizes.
You should also consider using Hiperspace™ buffers while retaining a reasonable
number of address space buffers. Hiperspace buffers give the CPU savings of
keeping data in memory, exploiting the relatively cheap expanded storage, while
allowing central storage to be used more effectively.
For more information, see the CICS statistics tables on page 416.
For more information, see the CICS statistics tables on page 445.
For more information, see the CICS statistics tables on page 474.
The following section attempts to identify the kind of questions you may have in
connection with system performance, and describes how answers to those
questions can be derived from the statistics report. It also describes what actions, if
any, you can take to resolve ISC/IRC performance problems.
Some of the questions you may be seeking an answer to when looking at these
statistics are these:
v Are there enough sessions defined?
v Is the balance of contention winners to contention losers correct?
v Is there conflicting usage of APPC modegroups?
v What can be done if there are unusually high numbers, compared with normal
or expected numbers, in the statistics report?
All the fields below are specific to the mode group of the mode name given.
Table 3. ISC/IRC mode entries
Mode entry                                 Field     IRC  LU6.1  APPC
Mode name                                  A20MODE                X
ATIs satisfied by contention losers        A20ES1                 X
ATIs satisfied by contention winners       A20ES2                 X
Peak contention losers                     A20E1HWM               X
Peak contention winners                    A20E2HWM               X
Peak outstanding allocates                 A20ESTAM               X
Total specific allocate requests           A20ESTAS               X
Total specific allocates satisfied         A20ESTAP               X
Total generic allocates satisfied          A20ESTAG               X
Queued allocates                           A20ESTAQ               X
Failed link allocates                      A20ESTAF               X
Failed allocates due to sessions in use    A20ESTAO               X
Total bids sent                            A20ESBID               X
Current bids in progress                   A20EBID                X
Peak bids in progress                      A20EBHWM               X
For more information about the usage of individual fields, see the CICS statistics
described under “ISC/IRC system and mode entries” on page 396.
Action: Consider making more sessions available with which to satisfy the allocate
requests. Enabling CICS to satisfy allocate requests without the need for queueing
may lead to improved performance.
However, be aware that increasing the number of sessions available on the front
end potentially increases the workload to the back end, and you should investigate
whether this is likely to cause a problem.
The following fields should give some guidance as to whether you need to
increase the number of contention winner sessions defined:
1. “Current bids in progress” (fields A14EBID and A20EBID) and “Peak bids in
progress” (fields A14EBHWM and A20EBHWM)
The value “Peak bids in progress” records the maximum number of bids in
progress at any one time during the statistics reporting period. “Current bids in
progress” is always less than or equal to the “Peak bids in progress”.
Ideally, these fields should be kept to zero. If either of these fields is high, it
indicates that CICS is having to perform a large number of bids for contention
loser sessions.
2. “Peak contention losers” (fields A14E1HWM and A20E1HWM).
If the number of “Peak contention losers” is equal to the number of contention
loser sessions available, the number of contention loser sessions defined may be
too low. Alternatively, for APPC/LU6.1, CICS could be using the contention
loser sessions to satisfy allocates due to a lack of contention winner sessions.
This should be tuned at the front-end in conjunction with winners at the
back-end. For details of how to specify the maximum number of sessions, and
the number of contention winners, see the information on defining SESSIONS
in the CICS Resource Definition Guide.
For APPC, consider making more contention winner sessions available, which
should reduce the need to use contention loser sessions to satisfy allocate requests
and, as a result, should also make more contention loser sessions available.
For LU6.1, consider making more SEND sessions available, which decreases the
need for LU6.1 to use primaries (RECEIVE sessions) to satisfy allocate requests.
For IRC, there is no bidding involved, as MRO can never use RECEIVE sessions to
satisfy allocate requests. If “Peak contention losers (RECEIVE)” is equal to the
number of contention loser (RECEIVE) sessions on an IRC link, the number of
allocates from the remote system is possibly higher than the receiving system can
cope with. In this situation, consider increasing the number of RECEIVE sessions
available.
Note: The usage of sessions depends on the direction of flow of work. Any tuning
which increases the number of winners available at the front-end should
also take into account whether this is appropriate for the direction of flow of
work over a whole period, such as a day, week, or month.
This could cause a problem for any specific allocate, because CICS initially tries to
satisfy a generic allocate from the first modegroup before trying other modegroups
in sequence.
(Figure: a terminal control mode entry (TCTME) for MODEGRPX is created in the
CICS region when the group ISCGROUP in the CSD is installed.)
These statistics show ISC persistent verification (PV) activity. If the number of
“entries reused” in the PV activity is low, and the “entries timed out” value is
high, the PVDELAY system initialization parameter should be increased. The
“average reuse time between entries” gives some indication of the time that could
be used for the PVDELAY system initialization parameter.
For more information, see the CICS statistics tables on page 410.
Coupling facility data tables server statistics
Coupling facility data tables server statistics are provided by the AXM page pool
management routines for the pools AXMPGANY and AXMPGLOW. For more
information, see “Appendix C. Coupling facility data tables server statistics” on
page 509.
Named counter sequence number server statistics
Named counter sequence number server statistics are provided by the AXM page
pool management routines for the pools AXMPGANY and AXMPGLOW. For more
information, see “Appendix D. Named counter sequence number server” on
page 515.
Note: Statistics records and some journaling records are also written to the SMF
data set as type 110 records. You might find it particularly useful to process
the statistics records and the monitoring records together, because statistics
provide resource and system information that is complementary to the
transaction data produced by CICS monitoring. The contents of the statistics
fields, and the procedure for processing them, are described in
“Appendix A. CICS statistics tables” on page 345.
Monitoring data is useful both for performance tuning and for charging your users
for the resources they use.
Performance class data provides detailed, resource-level data that can be used for
accounting, performance analysis, and capacity planning. This data contains
information relating to individual task resource usage, and is completed for each
task when the task terminates.
If the monitoring performance class is also being recorded, the performance class
record for the transaction includes the total elapsed time for which the transaction
was delayed by a CICS system resource shortage (as measured by the exception
class), and the number of exceptions encountered by the transaction. The exception
class records can be linked to the performance class records either by the
transaction sequence number or by the network unit-of-work id. For more
information on the exception class records, see “Exception class data” on page 107.
CICS invokes the MVS System Resource Manager (SRM) macro SYSEVENT at the
end of every transaction to record the elapsed time of the transaction.
You can enable SYSEVENT class monitoring by coding the MNEVE=ON (together
with MN=ON) system initialization parameters. Alternatively, you can use either
the CEMT command (CEMT SET MONITOR ON EVENT) or EXEC CICS SET
MONITOR STATUS(ON) EVENTCLASS(EVENT).
If the SYSEVENT option is used, at the end of each transaction CICS issues a Type
55 (X'37') SYSEVENT macro. This records each transaction ID, the associated
terminal ID, and the elapsed time duration of each transaction. This information is
collected by the SRM and, depending on the Resource Measurement Facility
(RMF) options set, the output can be written to SMF data sets.
If you are running CICS with the MVS workload manager environment in goal
mode, the MVS workload manager provides transaction activity reporting, which
replaces the SYSEVENT class of monitoring.
The objective of using the CICS monitoring facility with RMF is to enable
transaction rates and internal response times to be monitored without incurring the
overhead of running the full CICS monitoring facility and associated reporting.
This approach may be useful when only transaction statistics are required, rather
than the very detailed information that CICS monitoring facility produces. An
example of this is the monitoring of a production system where the minimum
overhead is required.
For more information about how to use RMF, refer to the MVS Resource
Measurement Facility (RMF), Version 4.1.1 - Monitor I & II Reference and Users Guide.
If records are directed to SMF, refer to the OS/390 MVS System Management
Facilities (SMF) manual. The following example shows the additional parameters
that you need to add to your IEAICS member for two MRO CICS systems:
SUBSYS=ACIC,RPGN=100 /* CICS SYSTEM ACIC HAS REPORTING */
TRXNAME=CEMT,RPGN=101 /* GROUP OF 100 AND THERE ARE */
TRXNAME=USER,RPGN=102 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=103 /* SEPARATE TRANSACTIONS */
SUBSYS=BCIC,RPGN=200 /* CICS SYSTEM BCIC HAS REPORTING */
TRXNAME=CEMT,RPGN=201 /* GROUP OF 200 AND THERE ARE */
TRXNAME=USER,RPGN=202 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=203 /* SEPARATE TRANSACTIONS */
Notes:
1. The reporting group (number 100) assigned to the ACIC subsystem reports on
all transactions in that system.
2. RMF reports on an individual transaction by name only if it is assigned a
unique reporting group. If multiple transactions are defined with one reporting
group, the name field is left blank in the RMF reports.
RMF operations
An RMF job has to be started, and this includes the Monitor I session. The RMF
job should be started before initializing CICS. The RMF Monitor II session is
started by the command F RMF,S aa,MEMBER(xx), where ‘aa’ indicates alphabetic
characters and ‘xx’ indicates alphanumeric characters.
Using the CICS monitoring facility with Tivoli Performance Reporter for OS/390
Tivoli Performance Reporter for OS/390 assists you in performance management
and service-level management of a number of IBM products. The CICS
Performance feature used by the Tivoli Performance Reporter provides reports for
your use in analyzing the performance of CICS. See “Chapter 7. Tivoli Performance
Reporter for OS/390” on page 113 for more information.
If you want to gather more performance class data than is provided at the
system-defined event monitoring points, you can code additional EMPs in your
application programs. At these points, you can add or change up to 16384 bytes of
user data in each performance record. Up to this maximum of 16384 bytes you can
have, for each ENTRYNAME qualifier, any combination of the following:
v Between 0 and 256 counters
v Between 0 and 256 clocks
v A single 8192-byte character string.
You could use these additional EMPs to count the number of times a certain event
occurs, or to time the interval between two events. If the performance class was
active when a transaction was started, but was not active when a user EMP was
issued, the operations defined in that user EMP would still execute on that
transaction’s performance class record.
User EMPs can use the EXEC CICS MONITOR command. For programming
information about this command, refer to the CICS Application Programming
Reference.
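For illustration only (the EMP number 50 and the field EMPNAME, an
8-character area containing the entry name, are hypothetical; see the CICS
Application Programming Reference for the full syntax), an application might
drive a user EMP as follows:
EXEC CICS MONITOR POINT(50) ENTRYNAME(EMPNAME)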
Additional EMPs are provided in some IBM program products, such as DL/I.
From CICS’s point of view, these are like any other user-defined EMP. EMPs in
user applications and in IBM program products are identified by a decimal
number. The numbers 1 through 199 are available for EMPs in user applications,
and the numbers from 200 through 255 are for use in IBM program products. The
numbers can be qualified with an ‘entryname’, so that you can use each number
more than once. For example, PROGA.1, PROGB.1, and PROGC.1 identify three
different EMPs because they have different entrynames.
For each user-defined EMP there must be a corresponding monitoring control table
(MCT) entry, which has the same identification number and entryname as the EMP
that it describes.
You do not have to assign entrynames and numbers to system-defined EMPs, and
you do not have to code MCT entries for them.
Here are some ideas about how you might make use of the CICS and user fields
provided with the CICS monitoring facility:
v If you want to time how long it takes to do a table lookup routine within an
application, code an EMP with, say, ID=50 just before the table lookup routine
and an EMP with ID=51 just after the routine. The system programmer codes a
TYPE=EMP operand in the MCT for ID=50 to start user clock 1. You also code a
TYPE=EMP operand for ID=51 to stop user clock 1. The application executes.
When EMP 50 is processed, user clock 1 is started. When EMP 51 is processed,
the clock is stopped.
v One user field could be used to accumulate an installation accounting unit. For
example, you might count different amounts for different types of transaction.
Or, in a browsing application, you might count 1 unit for each record scanned
and not selected, and 3 for each record selected.
You can also treat the fullword count fields as 32-bit flag fields to indicate
special situations, for example, out-of-line situations in the applications, operator
errors, and so on. CICS includes facilities to turn individual bits or groups of
bits on or off in these counts.
v The performance clocks can be used for accumulating the time taken for I/O,
DL/I scheduling, and so on. It usually includes any waiting for the transaction
to regain control after the requested operation has completed. Because the
periods are counted as well as added, you can get the average time waiting for
I/O as well as the total. If you want to highlight an unusually long individual
case, set a flag on in a user count as explained above.
v One use of the performance character string is for systems in which one
transaction ID is used for widely differing functions. The application can enter a
subsidiary ID into the string to indicate which particular variant of the
transaction applies in each case.
Some users have a single transaction ID so that all user input is routed through
a common prologue program for security checking, for example. In this case, it is
particularly useful to enter a subsidiary ID to show which function is actually
being performed.
DFHMCT TYPE=EMP
There must be a DFHMCT TYPE=EMP macro definition for every user-coded EMP.
This macro has an ID operand, whose value must be made up of the
ENTRYNAME and POINT values specified on the EXEC CICS MONITOR
command. The PERFORM operand of the DFHMCT TYPE=EMP macro tells CICS
which user count fields, user clocks, and character values to expect at the
identified user EMP, and what operations to perform on them.
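A sketch of the MCT entries for the table-lookup timing example earlier (the
entry name PROGA, the IDs 50 and 51, and clock 1 are hypothetical; verify the
PERFORM operand values against the CICS Resource Definition Guide):
DFHMCT TYPE=EMP,ID=(PROGA.50),PERFORM=SCLOCK(1)    Start user clock 1
DFHMCT TYPE=EMP,ID=(PROGA.51),PERFORM=PCLOCK(1)    Stop user clock 1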
DFHMCT TYPE=RECORD
The DFHMCT TYPE=RECORD macro allows you to exclude specific system-defined
performance data from a CICS run. (Each performance monitoring record is
approximately 1288 bytes long, without taking into account any user data that may
be added, or any excluded fields.)
Each field of the performance data that is gathered at the system-defined EMPs
belongs to a group of fields that has a group identifier. Each performance data
field also has its own numeric identifier that is unique within the group identifier.
For example, the transaction sequence number field in a performance record
belongs to the group DFHTASK, and has the numeric identifier ‘031’. Using these
identifiers, you can exclude specific fields or groups of fields, and reduce the size
of the performance records.
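As a sketch (DFHFILE and DFHTERM are real group identifiers, but check the
CICS Resource Definition Guide for the exact macro syntax before use), a
performance record could be trimmed by excluding whole groups:
DFHMCT TYPE=RECORD,CLASS=PERFORMANCE,EXCLUDE=(DFHFILE,DFHTERM)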
Full details of the MCT are provided in the CICS Resource Definition Guide, and
examples of MCT coding are included with the programming information in the
CICS Customization Guide.
These samples show how to use the EXCLUDE and INCLUDE operands to reduce
the size of the performance class record, in order to reduce the volume of data
written to the SMF data set.
When CICS is initialized, you switch the monitoring facility on by specifying the
system initialization parameter MN=ON. MN=OFF is the default setting. You can
select the classes of monitoring data you want to be collected using the MNPER,
MNEXC, and MNEVE system initialization parameters. You can request the
collection of any combination of performance class data, exception class data, and
SYSEVENT data. The class settings can be changed whether monitoring itself is
ON or OFF. For guidance about system initialization parameters, refer to the CICS
System Definition Guide.
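For example (an illustrative combination of the parameters just described), the
following system initialization overrides switch monitoring on with performance
and exception class data but no SYSEVENT data:
MN=ON,MNPER=ON,MNEXC=ON,MNEVE=OFF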
When CICS is running, you can control the monitoring facility dynamically. Just as
at CICS initialization, you can switch monitoring on or off, and you can change the
classes of monitoring data that are being collected. There are two ways of doing
this:
1. You can use the master terminal CEMT INQ|SET MONITOR command, which
is described in the CICS Supplied Transactions manual.
2. You can use the EXEC CICS INQUIRE and SET MONITOR commands;
programming information about these is in the CICS System Programming
Reference.
If you activate a class of monitoring data in the middle of a run, the data for that
class becomes available only for transactions that are started thereafter. You cannot
change the classes of monitoring data collected for a transaction after it has started.
It is often preferable, particularly for long-running transactions, to start all classes
of monitoring data at CICS initialization.
See “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 for more
information.
Or, instead, you may want to write your own application program to process
output from the CICS monitoring facility. The CICS Customization Guide gives
programming information about the format of this output.
CICS provides a sample program, DFH$MOLS, which reads, formats, and prints
monitoring data. It is intended as a sample program that you can use as a skeleton
if you need to write your own program to analyze the data set. Comments within
the program may help you if you want to do your own processing of CICS
monitoring facility output. See the CICS Operations and Utilities Guide for further
information on the DFH$MOLS program.
All of the exception class data and all of the system-defined performance class data
that can be produced by CICS monitoring is listed below. Each of the data fields is
presented as a field description, followed by an explanation of the contents. The
field description has the format shown in Figure 4, which is taken from the
performance data group DFHTASK.
Note: References in Figure 4 to the associated dictionary entries apply only to the
performance class data descriptions. Exception class data is not defined in
the dictionary record.
Neither the 32-bit timer component of a clock nor its 24-bit period count is
protected against wraparound. The timer capacity is about 18 hours, and the
period count runs modulo 16 777 216.
Note: All times produced in the offline reports are in GMT (Greenwich Mean
Time) not local time. Times produced by online reporting can be expressed
in either GMT or local time.
The CMF performance class record also provides a more detailed breakdown of the
transaction suspend (wait) time into separate data fields. These include:
v Terminal I/O wait time
v File I/O wait time
v RLS File I/O wait time
v Journal I/O wait time
v Temporary Storage I/O wait time
v Shared Temporary Storage I/O wait time
v Inter-Region I/O wait time
v Transient Data I/O wait time
v LU 6.1 I/O wait time
v LU 6.2 I/O wait time
v FEPI suspend time
v Local ENQ delay time
v Global ENQ delay time
v RRMS/MVS Indoubt wait time
v Socket I/O wait time
v RMI suspend time
v Lock Manager delay time
v EXEC CICS WAIT EXTERNAL wait time
v EXEC CICS WAITCICS and WAIT EVENT wait time
v Interval Control delay time
v ″Dispatchable Wait″ wait time
v IMS(DBCTL) wait time
v DB2 ready queue wait time
v DB2 connection wait time
v DB2 wait time
v CFDT server syncpoint wait time
v Syncpoint delay time
v CICS BTS run process/activity synchronous wait time
v CICS MAXOPENTCBS delay time
v JVM suspend time
Figure 5 on page 76 shows the relationship of dispatch time, suspend time, and
CPU time with the response time.
Figure 5. The transaction response time, from task start to task stop, divided into
suspend time and dispatch time; the dispatch time includes the CPU time, and
the suspend time includes the first dispatch delay and subsequent dispatch waits.
Improvements to the CMF suspend time and wait time measurements allow you to
perform various calculations on the suspend time accurately. For example, the
"Total I/O Wait Time" can be calculated as follows:
The "other wait time" (that is, uncaptured wait (suspend) time) can be calculated as
follows:
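Again as a hedged reconstruction:
Other wait time = suspend time
                - total I/O wait time
                - (all other explicitly measured wait and delay times)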
Note: The First Dispatch Delay performance class data field includes the MXT and
TRANCLASS First Dispatch Delay fields.
(Figure: the response time, from task start to task stop, again divided into
suspend time and dispatch time, here showing the program load (PCload) time
and the first dispatch wait as components of the suspend time, and the CPU time
within the dispatch time.)
Figure 8 shows the relationship between the RMI elapsed time and the suspend
time (fields 170 and 171).
Note: In CICS Transaction Server for OS/390 Release 3, or later, the DB2 wait, the
DB2 connection wait, and the DB2 readyq wait time fields as well as the
IMS wait time field are included in the RMI suspend time.
Care must be taken when using the JVM elapsed time (group name DFHTASK,
field id: 253) and JVM suspend time (group name DFHTASK, field id: 254) fields in
any calculation with other CMF timing fields. This is because of the likelihood of
double accounting other CMF timing fields in the performance class record within
the JVM time fields. For example, if a Java application program invoked by a
transaction issues a read file (non-RLS) request using the Java API for CICS (JCICS)
classes, the file I/O wait time will be included in both the file I/O wait time
field (group name DFHFILE, field id: 063) and the transaction suspend time field
(group name DFHTASK, field id: 014), as well as in the JVM suspend time field.
The JVM elapsed and suspend time fields are best evaluated from the overall
transaction performance view and their relationship with the transaction response
time, transaction dispatch time, and transaction suspend time. The performance
class data also includes the amount of processor (CPU) time that a transaction used
whilst in a JVM on a CICS J8 mode TCB in the J8CPUT field (group name:
DFHTASK, field id: 260).
Note: The number of Java API for CICS (JCICS) requests issued by the user task is
included in the CICS OO foundation class request count field (group name:
DFHCICS, field id: 025).
(Figure: within the transaction, the RMI elapsed time spans alternating periods of
dispatch-and-CPU time, suspend time, and dispatch waits.)
Figure 9 shows the relationship between the syncpoint elapsed time (field 173) and
the suspend time (field 14).
Note: All references to “Start time” and “Stop time” in the calculations below refer
to the middle 4 bytes of each 8 byte start/stop time field. Bit 51 of Start time
or Stop time represents a unit of 16 microseconds.
During the life of a user task, CICS measures, calculates, and accumulates the
storage occupancy at the following points:
v Before GETMAIN increases current user-storage values
v Before FREEMAIN reduces current user-storage values
v Just before the performance record is moved to the buffer.
(Figure: user-storage occupancy over the life of a task, from start to stop; G marks
a GETMAIN, F marks a FREEMAIN, and the dotted line shows the average
storage occupancy of the task.)
Note: On an XCTL event, the program storage currently in use is also decremented
by the size of the program issuing the XCTL, because the program is no
longer required.
Figure 11 on page 83 shows the relationships between the “high-water mark” data
fields that contain the maximum amounts of program storage in use by the user
task. Field PCSTGHWM (field ID 087) contains the maximum amount of program
storage in use by the task both above and below the 16MB line. Fields PC31AHWM
(139) and PC24BHWM (108) are subsets of PCSTGHWM, containing the maximum
amounts in use above and below the 16MB line, respectively. Further subset-fields
contain the maximum amounts of storage in use by the task in each of the CICS
dynamic storage areas (DSAs).
Note: The totaled values of all the subsets in a superset may not necessarily equate
to the value of the superset; for example, the value of PC31AHWM plus the
value of PC24BHWM may not equal the value of PCSTGHWM. This is
because the peaks in the different types of program storage acquired by the
user task do not necessarily occur simultaneously.
The “high-water mark” fields are described in detail in “User storage fields in
group DFHSTOR:” on page 92. For information about the program storage fields,
see “Program storage fields in group DFHSTOR:” on page 94.
Figure 11. Relationships between the “high-water mark” program storage data
fields, above and below the 16MB line
Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
006 (TYPE-T, ‘STOP’, 8 BYTES)
Finish time of measurement interval. This is either the time at which the user
task was detached, or the time at which data recording was completed in
support of the MCT user event monitoring point DELIVER option or the
monitoring options MNCONV, MNSYNC or FREQUENCY. For more
information, see “Clocks and time stamps” on page 73.
Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
025 (TYPE-A, ‘CFCAPICT’, 4 BYTES)
Number of CICS OO foundation class requests, including the Java API for
CICS (JCICS) classes, issued by the user task.
089 (TYPE-C, ‘USERID’, 8 BYTES)
User identification at task creation. This can also be the remote user identifier
for a task created as the result of receiving an ATTACH request across an MRO
or APPC link with attach-time security enabled.
103 (TYPE-S, ‘EXWTTIME’, 8 BYTES)
Accumulated data for exception conditions. The 32-bit clock contains the total
elapsed time for which the user waited on exception conditions. The 24-bit
period count equals the number of exception conditions that have occurred for
this task. For more information, see “Exception class data” on page 107.
Note: The performance class data field ‘exception wait time’ will be updated
when exception conditions are encountered even when the exception
class is inactive.
112 (TYPE-C, ‘RTYPE’, 4 BYTES)
Performance record type (low-order byte-3):
C Record output for a terminal converse
D Record output for a user EMP DELIVER request
F Record output for a long-running transaction
S Record output for a syncpoint
T Record output for a task termination.
130 (TYPE-C, ‘RSYSID’, 4 bytes)
The name (sysid) of the remote system to which this transaction was routed
either statically or dynamically.
This field also includes the connection name (sysid) of the remote system to
which this transaction was routed when using the CRTE routing transaction.
The field will be null for those CRTE transactions which establish or cancel the
transaction routing session.
Note: If the transaction was not routed or was routed locally, this field is set to
null. Also see the program name (field 71).
187 (TYPE-S, ‘DB2RDYQW’, 8 bytes)
The elapsed time in which the user task waited for a DB2 thread to become
available.
For more information, see “Clocks and time stamps” on page 73, and “A note
about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014)
field.
188 (TYPE-S, ‘DB2CONWT’, 8 bytes)
The elapsed time in which the user task waited for a CICS DB2 subtask to
become available.
For more information, see “Clocks and time stamps” on page 73, and “A note
about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014)
field.
189 (TYPE-S, ‘DB2WAIT’, 8 bytes)
The elapsed time in which the user task waited for DB2 to service the DB2
EXEC SQL and IFI requests issued by the user task.
For more information, see “Clocks and time stamps” on page 73, and “A note
about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014)
field.
157 (TYPE-A, ‘SZALLCTO’, 4 bytes)
Number of times the user task timed out while waiting to allocate a
conversation.
158 (TYPE-A, ‘SZRCVTO’, 4 bytes)
Number of times the user task timed out while waiting to receive data.
159 (TYPE-A, ‘SZTOTCT’, 4 bytes)
Total number of all FEPI API and SPI requests made by the user task.
070 (TYPE-A, ‘FCAMCT’, 4 BYTES)
Number of times the user task invoked file access-method interfaces. This
number excludes requests for OPEN and CLOSE.
How EXEC CICS file commands correspond to file control monitoring fields is
shown in Table 6.
Table 6. EXEC CICS file commands related to file control monitoring fields
EXEC CICS command Monitoring fields
READ FCGETCT and FCTOTCT
READ UPDATE FCGETCT and FCTOTCT
DELETE (after READ UPDATE) FCDELCT and FCTOTCT
DELETE (with RIDFLD) FCDELCT and FCTOTCT
REWRITE FCPUTCT and FCTOTCT
WRITE FCADDCT and FCTOTCT
STARTBR FCTOTCT
READNEXT FCBRWCT and FCTOTCT
READNEXT UPDATE FCBRWCT and FCTOTCT
READPREV FCBRWCT and FCTOTCT
READPREV UPDATE FCBRWCT and FCTOTCT
ENDBR FCTOTCT
RESETBR FCTOTCT
UNLOCK FCTOTCT
Note: The number of STARTBR, ENDBR, RESETBR, and UNLOCK file control
requests can be calculated by subtracting the file request counts,
FCGETCT, FCPUTCT, FCBRWCT, FCADDCT, and FCDELCT from the
total file request count, FCTOTCT.
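Expressed as a calculation, the relationship in the note above is:
(STARTBR + ENDBR + RESETBR + UNLOCK requests)
  = FCTOTCT - (FCGETCT + FCPUTCT + FCBRWCT + FCADDCT + FCDELCT)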
174 (TYPE-S, ‘RLSWAIT’, 8 BYTES)
Elapsed time in which the user task waited for RLS file I/O. For more
information, see “Clocks and time stamps” on page 73, and “A note about wait
(suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014)
field.
175 (TYPE-S, ‘RLSCPUT’, 8 BYTES)
The RLS File Request CPU (SRB) time field (RLSCPUT) is the SRB CPU time
this transaction spent processing RLS file requests. This field should be added
to the transaction CPU time field (USRCPUT) when considering the
measurement of the total CPU time consumed by a transaction. Also, this field
cannot be considered a subset of any other single CMF field (including
RLSWAIT). This is because the RLS file requests execute asynchronously
under an MVS SRB which can be running in parallel with the requesting
transaction. It is also possible for the SRB to complete its processing before the
requesting transaction waits for the RLS file request to complete.
Note: This clock field could contain a CPU time of zero with a count of greater
than zero. This is because the CMF timing granularity is measured in 16
microsecond units and the RLS file request(s) may complete in less than
that time unit.
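As a sketch of the addition described above, assuming both clocks have
already been extracted from the record and converted to microseconds (the
variable names are illustrative):

#include <stdio.h>

int main(void)
{
    /* Illustrative values in microseconds; real CMF clocks would be
       extracted from the performance record and converted first.      */
    unsigned long long usrcput_us = 5200; /* transaction CPU time      */
    unsigned long long rlscput_us = 480;  /* RLS file request SRB time */

    /* RLSCPUT is not a subset of USRCPUT (the SRB runs in parallel),
       so the total CPU consumed by the transaction is the sum.         */
    printf("total CPU = %llu microseconds\n", usrcput_us + rlscput_us);
    return 0;
}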
| 058 (TYPE-A, ‘JNLWRTCT’, 4 BYTES)
Number of journal write requests issued by the user task.
172 (TYPE-A, ‘LOGWRTCT’, 4 BYTES)
Number of CICS log stream write requests issued by the user task.
071 (TYPE-C, ‘PGMNAME’, 8 BYTES)
The name of the first program invoked at attach-time.
For a dynamic program link (DPL) mirror transaction, this field contains the
initial program name specified in the dynamic program LINK request. DPL
mirror transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.
For an ONC RPC or WEB alias transaction, this field contains the initial
application program name invoked by the alias transaction. ONC RPC or WEB
alias transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.
072 (TYPE-A, ‘PCLURMCT’, 4 BYTES)
Number of program LINK URM (user-replaceable module) requests issued by,
or on behalf of, the user task.
| 242 (TYPE-A, ‘SOBYENCT’, 4 BYTES)
| The number of bytes encrypted by the secure sockets layer for the user task.
| 243 (TYPE-A, ‘SOBYDECT’, 4 BYTES)
| The number of bytes decrypted by the secure sockets layer for the user task.
| 244 (TYPE-C, ‘CLIPADDR’, 16 BYTES)
| The client IP address (nnn.nnn.nnn.nnn).
| 196 (TYPE-S, ’SYNCDLY’, 8 BYTES)
| The elapsed time in which the user task waited for a syncpoint request to be
| issued by its parent transaction. The user task was executing as a result of the
| parent task issuing a CICS BTS run-process or run-activity request to execute a
| process or activity synchronously. For more information, see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
If the originating terminal is VTAM across an ISC APPC or IRC link, the
NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is networkid.generic_applid. If the task originates from an
external CICS interface (EXCI) client, the NETNAME is derived from the
originating system. That is, the name is a 17-byte LU name consisting of:
v An 8-byte eye-catcher set to ‘DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.
v A 4-byte field containing the address space id (ASID) in which the client
program is running. This field contains the 4-character EBCDIC
representation of the 2-byte hex address space id.
098 (TYPE-C, ‘NETUOWSX’, 8 BYTES)
Name by which the network unit of work id is known within the originating
system. This name is assigned at attach time using either an STCK-derived
token (when the task is attached to a local terminal), or the network unit of
work id passed as part of an ISC APPC or IRC attach header.
| The first six bytes of this field are a binary value derived from the system
| clock of the originating system; this value can wrap round at intervals of
| several months.
The last two bytes of this field are for the period count. These may change
during the life of the task as a result of syncpoint activity.
Note: When using MRO or ISC, the NETUOWSX field must be combined with
the NETUOWPX field (097) to uniquely identify a task, because
NETUOWSX is unique only to the originating CICS system.
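For illustration only, the layout just described can be pictured as a small
C structure; in the record itself the two parts are simply adjacent bytes
with no padding.

#include <stdint.h>

/* Sketch of the 8-byte NETUOWSX field layout described above. */
struct netuowsx {
    uint8_t  clock_token[6];  /* STCK-derived value from the originating
                                 system; wraps after several months      */
    uint8_t  period_count[2]; /* may change during the life of the task
                                 as a result of syncpoint activity       */
};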
102 (TYPE-S, ‘DISPWTT’, 8 BYTES)
Elapsed time for which the user task waited for redispatch. This is the
aggregate of the wait times between each event completion and user-task
redispatch.
Note: This field does not include the elapsed time spent waiting for first
dispatch. This field is a component of the task suspend time, SUSPTIME
(014), field.
109 (TYPE-C, ‘TRANPRI’, 4 BYTES)
Transaction priority when monitoring of the task was initialized (low-order
byte 3).
| 124 (TYPE-C, ‘BRDGTRAN’, 4 BYTES)
Bridge listener transaction identifier.
125 (TYPE-S, ‘DSPDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field. For more information, see “Clocks and time stamps” on page 73.
126 (TYPE-S, ‘TCLDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set for this transaction’s transaction class, TCLSNAME (166), being
reached. For more information, see “Clocks and time stamps” on page 73.
Note: This field is a subset of the first dispatch delay, DSPDELAY (125), field.
127 (TYPE-S, ‘MXTDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set by the system parameter, MXT, being reached.
Note: The field is a subset of the first dispatch delay, DSPDELAY (125), field.
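Because both TCLDELAY (126) and MXTDELAY (127) are subsets of DSPDELAY
(125), a reporting program can estimate the remaining first-dispatch delay by
subtraction. A minimal C sketch, assuming hypothetical values already
converted to microseconds; treating the remainder as ordinary
wait-for-dispatch time is an assumption of this sketch:

#include <stdio.h>

int main(void)
{
    /* Illustrative values in microseconds (not a real CMF API).      */
    unsigned long long dspdelay = 9000; /* total first dispatch delay */
    unsigned long long tcldelay = 2500; /* delayed by TCLASS limit    */
    unsigned long long mxtdelay = 4000; /* delayed by MXT limit       */

    /* Whatever is not attributed to TCLASS or MXT limits is taken here
       as ordinary wait time before first dispatch.                    */
    printf("other first-dispatch delay = %llu\n",
           dspdelay - (tcldelay + mxtdelay));
    return 0;
}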
128 (TYPE-S, ‘LMDELAY’, 8 BYTES)
The elapsed time that the user task waited to acquire a lock on a resource. A
user task cannot explicitly acquire a lock on a resource, but many CICS
modules lock resources on behalf of user tasks using the CICS lock manager
(LM) domain.
For more information about the CICS lock manager, see the CICS Problem
Determination Guide.
For information about times, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
129 (TYPE-S, ‘ENQDELAY’, 8 BYTES)
The elapsed time waiting for a CICS task control local enqueue. For more
information, see “Clocks and time stamps” on page 73.
Note: This field is a subset of the task suspend time, SUSPTIME (014), field.
132 (TYPE-C, ‘RMUOWID’, 8 BYTES)
The identifier of the unit of work (unit of recovery) for this task. Unit of
recovery values are used to synchronize recovery operations among CICS and
other resource managers, such as IMS and DB2.
163 (TYPE-C, ‘FCTYNAME’, 4 BYTES)
Transaction facility name. This field is null if the transaction is not associated
with a facility. The transaction facility type (if any) can be identified using byte
0 of the transaction flags, TRANFLAG, (164) field.
181 (TYPE-S, ‘WTEXWAIT’, 8 BYTES)
The elapsed time that the user task waited for one or more ECBs, passed to
CICS by the user task using the EXEC CICS WAIT EXTERNAL ECBLIST
command, to be MVS POSTed. The user task can wait on one or more ECBs. If
it waits on more than one, it is dispatchable as soon as one of the ECBs is
posted. For more information, see “Clocks and time stamps” on page 73, and
“A note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
182 (TYPE-S, ‘WTCEWAIT’, 8 BYTES)
The elapsed time the user task waited for:
v One or more ECBs, passed to CICS by the user task using the EXEC CICS
WAITCICS ECBLIST command, to be MVS POSTed. The user task can wait
on one or more ECBs. If it waits on more than one, it is dispatchable as soon
as one of the ECBs is posted.
v Completion of an event initiated by the same or by another user task. The
event would normally be the posting, at the expiration time, of a timer-event
control area provided in response to an EXEC CICS POST command.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
183 (TYPE-S, ‘ICDELAY’, 8 BYTES)
The elapsed time the user task waited as a result of issuing either:
v An interval control EXEC CICS DELAY command, for a specified time
interval or for a specified time of day to expire, or
v An interval control EXEC CICS RETRIEVE command with the WAIT option
specified.
For more information, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
184 (TYPE-S, ‘GVUPWAIT’, 8 BYTES)
The elapsed time the user task waited as a result of giving up control to
another task. A user task can give up control in many ways. Some
examples are application programs that use one or more of the following
EXEC CICS API or SPI commands:
v Using the EXEC CICS SUSPEND command. This command causes the
issuing task to relinquish control to another task of higher or equal
dispatching priority. Control is returned to this task as soon as no other
task of a higher or equal priority is ready to be dispatched.
v Using the EXEC CICS CHANGE TASK PRIORITY command. This
command immediately changes the priority of the issuing task and
causes the task to give up control in order for it to be dispatched at its
new priority. The task is not redispatched until tasks of higher or equal
priority, and that are also dispatchable, have been dispatched.
v Using the EXEC CICS DELAY command with INTERVAL (0). This
command causes the issuing task to relinquish control to another task of
higher or equal dispatching priority. Control is returned to this task as
soon as no other task of a higher or equal priority is ready to be
dispatched.
v Using the EXEC CICS POST command requesting notification that a
specified time has expired. This command causes the issuing task to
relinquish control to give CICS the opportunity to post the time-event
control area.
v Using the EXEC CICS PERFORM RESETTIME command to synchronize
the CICS date and time with the MVS system date and time of day.
v Using the EXEC CICS START TRANSID command with the ATTACH
option.
For more information, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
| 195 (TYPE-S, ‘RUNTRWTT’, 8 BYTES)
| The elapsed time in which the user task waited for completion of a
| transaction that executed as a result of the user task issuing a CICS BTS
| run-process or run-activity request to execute a process or activity
| synchronously.
| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 248 (TYPE-A, ‘CHMODECT’, 4 BYTES)
| The number of CICS change-TCB modes issued by the user task.
| 249 (TYPE-S, ‘QRMODDLY’, 8 BYTES)
| The elapsed time for which the user task waited for redispatch on the
| CICS QR TCB. This is the aggregate of the wait times between each event
| completion and user-task redispatch.
| Note: This field does not include the elapsed time spent waiting for the
| first dispatch. The QRMODDLY field is a component of the task
| suspend time, SUSPTIME (014), field.
| 250 (TYPE-S, ‘MXTOTDLY’, 8 BYTES)
| The elapsed time in which the user task waited to obtain a CICS open
| TCB, because the region had reached the limit set by the system parameter,
| MAXOPENTCBS.
| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.
| Note: This field is a subset of the task suspend time, SUSPTIME (014),
| field.
| 044 (TYPE-A, ‘TSGETCT’, 4 BYTES)
Number of temporary-storage GET requests issued by the user task.
046 (TYPE-A, ‘TSPUTACT’, 4 BYTES)
Number of PUT requests to auxiliary temporary storage issued by the user
task.
047 (TYPE-A, ‘TSPUTMCT’, 4 BYTES)
Number of PUT requests to main temporary storage issued by the user task.
092 (TYPE-A, ‘TSTOTCT’, 4 BYTES)
| Total number of temporary storage requests issued by the user task. This field
| is the sum of the temporary storage READQ (TSGETCT), WRITEQ AUX
| (TSPUTACT), WRITEQ MAIN (TSPUTMCT), and DELETEQ requests issued by
| the user task.
| 034 (TYPE-A, ‘TCMSGIN1’, 4 BYTES)
Number of messages received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
035 (TYPE-A, ‘TCMSGOU1’, 4 BYTES)
Number of messages sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
067 (TYPE-A, ‘TCMSGIN2’, 4 BYTES)
Number of messages received from the LUTYPE6.1 alternate terminal facilities
by the user task.
068 (TYPE-A, ‘TCMSGOU2’, 4 BYTES)
Number of messages sent to the LUTYPE6.1 alternate terminal facilities by the
user task.
069 (TYPE-A, ‘TCALLOCT’, 4 BYTES)
Number of TCTTE ALLOCATE requests issued by the user task for LUTYPE6.2
(APPC), LUTYPE6.1, and IRC sessions.
083 (TYPE-A, ‘TCCHRIN1’, 4 BYTES)
Number of characters received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
084 (TYPE-A, ‘TCCHROU1’, 4 BYTES)
Number of characters sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
085 (TYPE-A, ‘TCCHRIN2’, 4 BYTES)
Number of characters received from the LUTYPE6.1 alternate terminal facilities
by the user task. (Not applicable to ISC APPC.)
086 (TYPE-A, ‘TCCHROU2’, 4 BYTES)
Number of characters sent to the LUTYPE6.1 alternate terminal facilities by the
user task. (Not applicable to ISC APPC.)
| 111 (TYPE-C, ‘LUNAME’, 8 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. If the task is executing in an application-owning or file-owning
region, the LUNAME is the generic applid of the originating connection for
MRO, LUTYPE6.1, and LUTYPE6.2 (APPC). The LUNAME is blank if the
originating connection is an external CICS interface (EXCI).
133 (TYPE-S, ‘LU61WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.1
connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.1 connections, but not the waits incurred due to
LUTYPE6.1 syncpoint flows. For more information, see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 134 (TYPE-S, ‘LU62WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.2
(APPC) connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.2 (APPC) connections, but not the waits
incurred due to LUTYPE6.2 (APPC) syncpoint flows. For more information, see
“Clocks and time stamps” on page 73, and “A note about wait (suspend)
| times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 135 (TYPE-A, ‘TCM62IN2’, 4 BYTES)
Number of messages received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
136 (TYPE-A, ‘TCM62OU2’, 4 BYTES)
Number of messages sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
137 (TYPE-A, ‘TCC62IN2’, 4 BYTES)
Number of characters received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
138 (TYPE-A, ‘TCC62OU2’, 4 BYTES)
Number of characters sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
165 (TYPE-A, ‘TERMINFO’, 4 BYTES)
Terminal or session information for this task’s principal facility as identified in
the ‘TERM’ field id 002. This field is null if the task is not associated with a
terminal or session facility.
Byte 0 Identifies whether this task is associated with a terminal or session.
This field can be set to one of the following values:
X'00' None
For a list of the typeterm definitions, see the CICS Resource Definition
Guide.
169 (TYPE-C, ‘TERMCNNM’, 4 BYTES)
Terminal session connection name. If the terminal facility associated with this
transaction is a session, this field is the name of the owning connection (sysid).
|
End of Product-sensitive programming interface
Exception records are produced after each of the following conditions encountered
by a transaction has been resolved:
v Wait for storage in the CDSA
v Wait for storage in the UDSA
v Wait for storage in the SDSA
v Wait for storage in the RDSA
v Wait for storage in the ECDSA
v Wait for storage in the EUDSA
v Wait for storage in the ESDSA
v Wait for storage in the ERDSA
v Wait for auxiliary temporary storage
v Wait for auxiliary temporary storage string
v Wait for auxiliary temporary storage buffer
| v Wait for coupling facility data tables locking (request) slot
| v Wait for coupling facility data tables non-locking (request) slot (With coupling
| facility data tables each CICS has a number of slots available for requests in the
| CF data table. When all available slots are in use, any further request must wait.)
v Wait for file buffer
v Wait for file string
| v Wait for LSRPOOL buffer
v Wait for LSRPOOL string
These records have a fixed format, as follows:
MNEXCDS DSECT
EXCMNTRN DS CL4 TRANSACTION IDENTIFICATION
EXCMNTER DS XL4 TERMINAL IDENTIFICATION
EXCMNUSR DS CL8 USER IDENTIFICATION
EXCMNTST DS CL4 TRANSACTION START TYPE
EXCMNSTA DS XL8 EXCEPTION START TIME
EXCMNSTO DS XL8 EXCEPTION STOP TIME
EXCMNTNO DS PL4 TRANSACTION NUMBER
EXCMNTPR DS XL4 TRANSACTION PRIORITY
DS CL4 RESERVED
EXCMNLUN DS CL8 LUNAME
DS CL4 RESERVED
EXCMNEXN DS XL4 EXCEPTION NUMBER
EXCMNRTY DS CL8 EXCEPTION RESOURCE TYPE
EXCMNRID DS CL8 EXCEPTION RESOURCE ID
EXCMNTYP DS XL2 EXCEPTION TYPE
Note: The performance class exception wait time field, EXWTTIME (103), is a
calculation based on subtracting the start time of the exception
(EXCMNSTA) from the finish time of the exception (EXCMNSTO).
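For illustration, the DSECT above can be mirrored as a C structure, and the
EXWTTIME calculation in the note is then a subtraction of the two
store-clock values. This sketch assumes the 8-byte times are standard
TOD-clock values, in which bit 51 represents one microsecond, so shifting
right by 12 bits yields microseconds; a production mapping would also need
to suppress structure padding, and the character fields are EBCDIC in the
real record.

#include <stdint.h>
#include <stdio.h>

/* C sketch of the MNEXCDS exception record layout shown above.
   In the real record the fields are contiguous (no padding).     */
struct mnexcds {
    char     excmntrn[4];   /* transaction identification  */
    uint8_t  excmnter[4];   /* terminal identification     */
    char     excmnusr[8];   /* user identification         */
    char     excmntst[4];   /* transaction start type      */
    uint64_t excmnsta;      /* exception start time (STCK) */
    uint64_t excmnsto;      /* exception stop time (STCK)  */
    uint8_t  excmntno[4];   /* transaction number (packed) */
    uint8_t  excmntpr[4];   /* transaction priority        */
    char     reserved1[4];
    char     excmnlun[8];   /* LU name                     */
    char     reserved2[4];
    uint32_t excmnexn;      /* exception number            */
    char     excmnrty[8];   /* exception resource type     */
    char     excmnrid[8];   /* exception resource id       */
    uint16_t excmntyp;      /* exception type              */
};

/* Exception wait time: stop minus start, converted from TOD-clock
   units to microseconds (bit 51 of the TOD clock = 1 microsecond). */
static uint64_t exception_wait_us(const struct mnexcds *r)
{
    return (r->excmnsto - r->excmnsta) >> 12;
}

int main(void)
{
    struct mnexcds r = { .excmnsta = 0x1000ULL << 12,
                         .excmnsto = 0x1500ULL << 12 };
    printf("wait = %llu microseconds\n",
           (unsigned long long)exception_wait_us(&r)); /* 1280 */
    return 0;
}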
EXCMNTNO (TYPE-P, 4 BYTES)
Transaction identification number.
EXCMNTPR (TYPE-C, 4 BYTES)
Transaction priority when monitoring was initialized for the task (low-order
byte).
EXCMNLUN (TYPE-C, 4 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. This field is nulls if the task is not associated with a terminal.
If the originating terminal is a VTAM device across an ISC APPC or IRC link,
the NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is networkid.generic_applid. If the task originates from an
external CICS interface (EXCI) client, the NETNAME is derived from the
originating system. That is, the name is a 17-byte LU name consisting of:
v An 8-byte eye-catcher set to ’DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.
v A 4-byte field containing the address space id (ASID) in which the client
program is running.
EXCMNNSX (TYPE-C, 8 BYTES)
Name by which the network unit of work id is known within the originating
system.
The first 6 bytes of this field are a binary value derived from the clock of the
originating system and wrapping round at intervals of several months. The last
two bytes of this field are for the period count. These may change during the
| life of the task as a result of syncpoint activity.
| Note: When using MRO or ISC, the EXCMNNSX field must be combined with
| the EXCMNNPX field to uniquely identify a task, because the
| EXCMNNSX field is unique only to the originating CICS system.
| EXCMNTRF (TYPE-C, 8 BYTES)
Transaction flags—a string of 64 bits used for signaling transaction definition
and status information:
Byte 0 Transaction facility identification
Bit 0 Transaction facility name = none
Bit 1 Transaction facility name = terminal
Bit 2 Transaction facility name = surrogate
Bit 3 Transaction facility name = destination
Bit 4 Transaction facility name = 3270 bridge
Bits 5–7
Reserved
Byte 1 Transaction identification information
Bit 0 System transaction
Bit 1 Mirror transaction
Bit 2 DPL mirror transaction
Bit 3 ONC RPC alias transaction
Bit 4 WEB alias transaction
Bit 5 3270 bridge transaction
| Bit 6 Reserved
| Bit 7 CICS BTS Run transaction
Byte 2 MVS Workload Manager information
Bit 0 Workload Manager report
Bit 1 Workload Manager notify, completion = yes
Bit 2 Workload Manager notify
Bits 3–7
Reserved
Byte 3 Transaction definition information
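Because the flags are a bit string with IBM bit numbering (bit 0 is the most
significant bit of each byte), a reporting program tests a flag by byte
index and mask. A minimal C sketch using the byte 1 values listed above;
the helper function is illustrative, not a CICS API:

#include <stdint.h>
#include <stdio.h>

/* IBM bit numbering: bit 0 is the most significant bit of the byte. */
static int test_bit(const uint8_t flags[8], int byte, int bit)
{
    return (flags[byte] >> (7 - bit)) & 1;
}

int main(void)
{
    uint8_t tranflag[8] = {0};
    tranflag[1] = 0x20;     /* byte 1, bit 2 on: DPL mirror transaction */

    if (test_bit(tranflag, 1, 2))
        printf("DPL mirror transaction\n");
    if (test_bit(tranflag, 1, 4))
        printf("WEB alias transaction\n");
    return 0;
}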
The following table shows the value and relationships of the fields EXCMNTYP,
EXCMNRTY, and EXCMNRID.
Overview.
Tivoli Performance Reporter for OS/390 is a reporting system that uses DB2. It
processes utilization and throughput statistics written to log data sets by
computer systems, analyzes and stores the data in DB2, and presents it in a
variety of forms. Tivoli Performance Reporter consists of a base product with
several optional features that are used in systems management, as shown in
Table 9. Tivoli Performance Reporter for OS/390 uses Reporting Dialog/2 as the
OS/2® reporting feature.
Table 9. Tivoli Performance Reporter for OS/390 and optional features. The base
product can be extended with the CICS Performance, IMS Performance, Network
Performance, System Performance, Workstation Performance, AS/400®
Performance, Reporting Dialog/2, and Accounting features.
The Tivoli Performance Reporter for OS/390 database can contain data from many
sources. For example, data from System Management Facilities (SMF), Resource
Measurement Facility (RMF), CICS, and Information Management System (IMS)
can be consolidated into a single report. In fact, you can define any non-standard
log data to Tivoli Performance Reporter for OS/390 and report on that data
together with data coming from the standard sources.
The Tivoli Performance Reporter for OS/390 CICS performance feature provides
reports for your use when analyzing the performance of CICS Transaction Server
for OS/390, and CICS/ESA, based on data from the CICS monitoring facility
(CMF) and, for CICS Transaction Server for OS/390, CICS statistics. These are
some of the areas that Tivoli Performance Reporter can report on:
The Tivoli Performance Reporter for OS/390 CICS performance feature collects
only the data required to meet CICS users’ needs. You can combine that data with
more data (called environment data), and present it in a variety of reports. Tivoli
Performance Reporter for OS/390 provides an administration dialog for
maintaining environment data. Figure 12 illustrates how data is organized for
presentation in Tivoli Performance Reporter for OS/390 reports.
Figure 12. How data is organized for presentation in Tivoli Performance Reporter
for OS/390 reports. System data is written by the operating system to various
logs; the Performance Reporter CICS performance feature collects only the
relevant records into Performance Reporter tables; user-supplied environment
data is maintained in the Performance Reporter database; and the required data
is presented in report format.
The Tivoli Performance Reporter for OS/390 CICS performance feature processes
these records:
The following sections describe certain issues and concerns associated with
systems management and how you can use the Tivoli Performance Reporter for
OS/390 CICS performance feature.
Transaction response time runs from transaction start to finish, and is made up
of suspend time plus dispatch time; service time forms part of the dispatch
time.
If both the Tivoli Performance Reporter for OS/390 CICS performance feature’s
statistics component and the Performance Reporter System Performance feature’s
MVS component are installed and active, these reports are available for analyzing
transaction rates and processor use by CICS region:
v The CICS Transaction Processor Utilization, Monthly report shows monthly
averages for the dates you specify.
v The CICS Transaction Processor Utilization, Daily report shows daily averages
for the dates you specify.
Tivoli Performance Reporter for OS/390 produces several reports that can help
analyze storage usage. For example, the CICS Dynamic Storage (DSA) Usage
report shows pagepool usage.
Use this report to start verifying that you are meeting service-level objectives. First,
verify that the values for average response time are acceptable. Then check that the
transaction rates do not exceed agreed-to limits. If a transaction is not receiving the
appropriate level of service, you must determine the cause of the delay.
Figure 16. Correlating a CICS performance-monitoring record with a DB2
accounting record by means of the DB2 correlation token, QWHCTOKN.
If you match the NETNAME and UOWID fields in a CICS record to the DB2
token, you can create reports that show the DB2 activity caused by a CICS
transaction.
The Tivoli Performance Reporter for OS/390 CICS performance feature creates
exception records for these incidents and exceptions:
v Wait for storage
v Wait for main temporary storage
v Wait for a file string
v Wait for a file buffer
v Wait for an auxiliary temporary storage string
v Wait for an auxiliary temporary storage buffer
v Transaction ABEND
v System ABEND
v Storage violations
v Short-of-storage conditions
v VTAM request rejections
v I/O errors on auxiliary temporary storage
v I/O errors on the intrapartition transient data set
v Autoinstall errors
v MXT reached
v DTB overflow
v Link errors for IRC and ISC
v Log stream buffer-full conditions
v CREAD and CWRITE fails (data space problems)
v Local shared resource (LSR) pool string waits (from A08BKTSW)
v Waits for a buffer in the LSR pool (from A09TBW)
v Errors writing to SMF
v No space on transient-data data set (from A11ANOSP)
v Waits for a transient-data string (from A11STNWT)
v Waits for a transient-data buffer (from A11ATNWT)
v Transaction restarts (from A02ATRCT)
v Maximum number of tasks in a class reached (CMXT) (from A15MXTM)
v Transmission errors (from A06TETE or AUSTETE).
CICS Incidents
DATE: '1998-09-20' to '1998-09-21'
Terminal
operator User Exception Exception
Sev Date Time ID ID ID description
--- ---------- -------- -------- -------- ------------------ ---------------------------
03 1995-09-20 15.42.03 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND AZTS
03 1995-09-21 00.00.00 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND APCT
03 1995-09-21 17.37.28 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL
03 1995-09-21 17.12.03 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL
The CICS UOW Response Times report in Figure 18 shows an example of how
Tivoli Performance Reporter for OS/390 presents CICS unit-of-work response
times.
Adjusted
UOW UOW Response
start Tran CICS Program tran time
time ID ID name count (sec)
-------- ---- -------- -------- ----- --------
09.59.25 OP22 CICSPROD DFHAPRT 2 0.436
OP22 CICSPRDC OEPCPI22
...
Tivoli Performance Reporter report: CICS902
Figure 18. Tivoli Performance Reporter for OS/390 CICS UOW response times report
Monitoring availability
Users of CICS applications depend on the availability of several types of resources:
v Central site hardware and the operating system environment in which the CICS
region runs
v Network hardware, such as communication controllers, teleprocessing lines, and
terminals through which users access the CICS region
v CICS region
v Application programs and data. Application programs can be distributed among
several CICS regions.
In some cases, an application depends on the availability of many resources of the
same and of different types, so reporting on availability requires a complex
analysis of data from different sources. Tivoli Performance Reporter for OS/390
can help you, because all the data is in one database.
When running under goal mode in MVS 5.1.0 and later, CICS performance can be
reported in workload groups, service classes, and periods. These are a few
examples of Tivoli Performance Reporter reports for CICS in this environment.
Figure 20 shows how service classes were served by other service classes. This
report is available only when the MVS system is running in goal mode.
Figure 19. Example of an MVSPM response time breakdown, hourly trend report.
The chart plots response time (seconds) for each hour from 8.00 to 18.00, broken
down into Active, Ready, Idle, Lock wait, I/O wait, Conv wait, Distr wait, Syspl
wait, Timer wait, and Other wait states.
Service MVS Total Activ Ready Idle Lock I/O Conv Distr Local Netw Syspl Timer Other Misc
Workload class sysstate state state state wait wait wait wait wait wait wait wait wait wait
group /Period Ph ID (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%)
-------- ---------- --- --------- ----- ----- ----- ----- ----- -- --- ----- ----- ----- ----- ----- ----- -----
CICS CICS-1 /1 BTE CA0 6.6 0.0 0.0 0.0 0.0 0.0 6.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 29.4 0.0 0.0 0.0 0.0 0.0 14.7 0.0 0.0 0.0 0.0 0.0 14.6 0.0
C90 3.8 0.4 1.3 1.5 0.0 0.2 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 13.3 0.1 0.5 0.5 0.0 0.1 7.2 0.0 0.0 0.0 0.0 0.0 4.9 0.0
/1 EXE CA0 16.0 0.1 0.2 0.1 0.0 15.5 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0
C80 14.9 0.1 0.1 0.1 0.0 3.7 0.0 0.0 0.0 0.0 0.0 0.0 11.0 0.0
C90 14.0 1.6 4.5 4.8 0.0 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 14.9 0.6 1.6 1.7 0.0 7.4 0.0 0.0 0.0 0.0 0.0 0.0 3.7 0.0
IMS IMS-1 /1 EXE CA0 20.7 0.4 0.7 0.0 0.0 0.0 19.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 1.1 0.2 0.1 0.7 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C90 22.2 5.3 11.9 1.2 0.0 0.2 3.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ---- ----- ----- ----- ----- ----- ----- -----
* 14.7 2.0 4.2 0.6 0.0 0.1 7.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Tivoli Performance Reporter report: MVSPM73
Figure 21 shows how much the various transaction states contribute to the average
response time. This report is available when the MVS system is running in goal
mode and when the subsystem is CICS or IMS.
Figure 19 on page 120 shows the average transaction response time trend and how
the various transaction states contribute to it. (The sum of the different states adds
up to the average execution time. The difference between the response time and
the execution time is mainly made up of switch time, for example, the time the
transactions spend being routed to another region for processing.) This report is
available when the MVS system is running in goal mode and when the subsystem
is CICS or IMS.
To help you migrate to goal-oriented workload management, you can run any
MVS image in a sysplex in compatibility mode, using the performance management
tuning methods of releases of MVS before MVS/ESA 5.1.
Notes:
1. If you do not want to use the MVS workload management facility, you should
review your MVS performance definitions to ensure that they are still
appropriate for CICS Transaction Server for OS/390 Release 3. To do this,
review parameters in the IEAICS and IEAIPS members of the MVS PARMLIB
library. For more information about these MVS performance definitions, see the
OS/390 MVS Initialization and Tuning Guide.
| 2. If you use CICSPlex SM to control dynamic routing in a CICSplex or BTS-plex,
| you can base its actions on the CICS response time goals of the CICS
transactions as defined to the MVS workload manager. See “Using
CICSPlex SM workload management” on page 134. For full details, see the
CICSPlex SM Managing Workloads manual.
The main benefit is that you no longer have to continually monitor and tune CICS
to achieve optimum performance. You can set your workload objectives in the
service definition and let the workload component of MVS manage the resources
and the workload to achieve your objectives.
The MVS workload manager produces performance reports that you can use to
establish reasonable performance goals and for capacity planning.
For MVS workload manager operation across the CICS task-related user exit
interface to other subsystems, such as DB2 and DBCTL, you need the appropriate
releases of these products.
For more information about requirements for MVS workload management see the
following manuals: MVS Planning: Workload Management, and MVS Planning:
Sysplex Manager.
Resource usage
The CICS function for MVS workload management has a negligible impact on
CICS storage.
All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload manager are subject to the effects of workload management.
If the CICS workload involves non-CICS resource managers, such as DB2 and
DBCTL, CICS can pass information through the resource manager interface (RMI1)
to enable MVS workload manager to relate the part of the workload within the
non-CICS resource managers to the part of the workload within CICS.
CICS does not pass information across ISC links to relate the parts of the task
execution thread on either side of the ISC link. If you use tasks that communicate
across ISC links, you must define separate performance goals, and service classes,
for the parts of the task execution thread on each side of the ISC link. These rules
apply to ISC links that are:
v Within the same MVS image (so called “intrahost ISC”)
v Between MVS images in the same sysplex (perhaps for compatibility reasons)
1. The CICS interface modules that handle the communication between a task-related user exit and the resource manager are usually
referred to as the resource manager interface (RMI) or the task-related user exit (TRUE) interface.
Workload management also collects performance and delay data, which can be
used by reporting and monitoring products, such as the Resource Measurement
Facility (RMF), the TIVOLI Performance Reporter for OS/390, or vendor products.
The service level administrator defines your installation’s performance goals and
monitoring data, based on business needs and current performance. The complete
definition of workloads and performance goals is called a service definition. You
may already have this kind of information in a service level agreement (SLA).
This information helps you to set realistic goals for running your CICS work when
you switch to goal mode. The reporting data produced by RMF reports:
v Is organized by service class
v Contains reasons for any delays that affect the response time for the service class
(for example, because of the actions of a resource manager or an I/O
subsystem).
Note: It does not matter what goal you specify, since it is not used in
compatibility mode, but it cannot be discretionary.
– Specify the name of the service class under the classification rules for the
CICS subsystem:
Subsystem Type . . . . . . : CICS
Default Service Class . . : CICSALL
v In your ICS member in SYS1.PARMLIB (IEAICSxx), specify:
SUBSYS=CICS,
SRVCLASS=CICSALL,RPGN=100
v Install the workload definition in the coupling facility.
v Activate the test service policy, either by using options provided by the WLM
ISPF application, or by issuing the following MVS command:
VARY WLM,POLICY=CICSTEST
You receive response time information about CICS transactions in the RMF
Monitor I Workload Activity Report under report performance group 100. For more
information about defining performance goals and the use of SRVCLASS, see the
MVS Planning: Workload Management manual.
If you have varying performance goals, you can define several service policies.
You can activate only one service policy at a time for the whole sysplex, and, when
appropriate, switch to another policy.
Defining workloads
A workload comprises units of work that share some common characteristics that
makes it meaningful for an installation to manage or monitor as a group. For
example, all CICS work, or all CICS order entry work, or all CICS development
work.
You can also create service classes for started tasks and JES, and can assign
resource groups to those service classes. You can use such service classes to
manage the workload associated with CICS as it starts up, but before CICS
transactions begin to run.
There is a default service class, called SYSOTHER. It is used for CICS transactions
for which MVS workload management cannot find a matching service class in the
classification rules—for example, if the couple data set becomes unavailable.
There is one set of classification rules for each service definition. The classification
rules apply to every service policy in the service definition; so there is one set of
rules for the sysplex.
You should use classification rules for every service class defined in your service
definition.
Classification rules categorize work into service classes and, optionally, report
classes, based on work qualifiers. You set up classification rules for each MVS
subsystem type that uses workload management. The work qualifiers that CICS
can use (and which identify CICS work requests to workload manager) are:
LU LU name
LUG LU name group
SI Subsystem instance (VTAM applid)
SIG Subsystem instance group
TN Transaction identifier
TNG Transaction identifier group
UI Userid
UIG Userid group.
Notes:
1. You should consider defining workloads for terminal-owning regions only.
Work requests do not normally originate in an application-owning region. They
(transactions) are normally routed to an application-owning region from a
terminal-owning region, and the work request is classified in the
terminal-owning region. In this case, the work is not reclassified in the
application-owning region.
If work originates in the application-owning region, it is classified in the
application-owning region; normally there would be no terminal.
2. You can use identifier group qualifiers to specify the name of a group of
qualifiers; for example, GRPACICS could specify a group of CICS tranids,
which you could specify on classification rules by TNG GRPACICS. This is a
useful alternative to specifying classification rules for each transaction
separately.
You can use classification groups to group disparate work under the same work
qualifier—if, for example, you want to assign it to the same service class.
Example of using classification rules: As an example, you might want all CICS
work to go into service class CICSB except for the following:
v All work from LU name S218, except the PAYR transaction, is to run in service
class CICSA
v Work for the PAYR transaction (payroll application) entered at LU name S218 is
to run in service class CICSC.
v All work from terminals other than LU name S218, and whose LU name begins
with S2, is to run in service class CICSD.
You could specify this by the following classification rules:
Subsystem Type . . . . . . . CICS
-------Qualifier----------- -------Class--------
Type Name Start Service Report
DEFAULTS: CICSB ________
1 LU S218 CICSA ________
2 TN PAYR CICSC ________
1 LU S2* CICSD ________
Note: In this classification, the PAYR transaction is nested as a sub-rule under the
classification rule for LU name S218, indicated by the number 2, and the
indentation of the type and name columns.
v For request 1, the work request for the payroll application runs in service class
CICSC. This is because the request is associated with the terminal with LU name
S218, and the TN—PAYR classification rule specifying service class CICSC is
nested under the LU—S218 classification rule qualifier.
v For request 2, the work request for the payroll application runs in service class
CICSB, because it is not associated with LU name S218, nor S2*, and there are
no other classification rules for the PAYR transaction. Likewise, any work
requests associated with LU names that do not start with S2 run in service class
CICSB, as there are classification rules for LU names S218 and S2* only.
v For request 3, the work request for the DEBT transaction runs in service class
CICSA, because it is associated with LU name S218, and there is no DEBT
classification rule nested under the LU—S218 classification rule qualifiers.
v For request 4, the work request for the ANOT transaction runs in service class
CICSD, because it is associated with an LU name starting S2, but not S218.
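The nesting behaviour shown in these four requests can be expressed as a
small matching routine: sub-rules are considered only when their parent
qualifier matches, and the default applies when nothing matches. The
following C sketch illustrates the principle only; it is not how WLM itself
is implemented:

#include <stdio.h>
#include <string.h>

/* Simplified classification for the example rules above:
   default CICSB; LU S218 -> CICSA, with nested TN PAYR -> CICSC;
   LU mask S2* -> CICSD.                                           */
static const char *classify(const char *lu, const char *tn)
{
    if (strcmp(lu, "S218") == 0) {
        if (strcmp(tn, "PAYR") == 0)
            return "CICSC";   /* nested sub-rule under LU S218 */
        return "CICSA";
    }
    if (strncmp(lu, "S2", 2) == 0)
        return "CICSD";       /* LU name mask S2*              */
    return "CICSB";           /* subsystem default             */
}

int main(void)
{
    printf("%s\n", classify("S218", "PAYR")); /* request 1: CICSC */
    printf("%s\n", classify("S300", "PAYR")); /* request 2: CICSB */
    printf("%s\n", classify("S218", "DEBT")); /* request 3: CICSA */
    printf("%s\n", classify("S220", "ANOT")); /* request 4: CICSD */
    return 0;
}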
Note: It is helpful at this stage to record your service definition in a form that
will help you to enter it into the MVS workload manager ISPF
application. You are recommended to use the worksheets provided in
the MVS publication Planning: Workload Management.
9. Install MVS.
10. Set up a sysplex with a single MVS image, and run in workload manager
compatibility mode.
11. Upgrade your existing XCF couple data set.
12. Start the MVS workload manager ISPF application, and use it in the following
steps.
13. Allocate and format a new couple data set for workload management. (You
can do this from the ISPF application.)
14. Define your service definition.
15. Install your service definition on the couple data set for workload
management.
16. Activate a service policy.
17. Switch the MVS image into goal mode.
18. Start up a new MVS image in the sysplex. (That is, attach the new MVS image
to the couple data set for workload management, and link it to the service
policy.)
19. Switch the new MVS image into goal mode.
20. Repeat steps 18 and 19 for each new MVS image in the sysplex.
Notes:
1. CICS Transaction Server for OS/390 support for MVS workload manager is
initialized automatically during CICS startup.
2. All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload management are subject to the effects of workload manager.
In general, you should define CICS performance objectives to the MVS workload
manager first, and observe the effect on CICS performance. Once the MVS
workload manager definitions are working correctly, you can then consider tuning
the CICS parameters to further enhance CICS performance. However, you should
use CICS performance parameters as little as possible.
| For more information about CICSPlex SM, see the CICSPlex SM Concepts and
| Planning manual.
RMF provides data for subsystem work managers that support workload
management. In MVS these are IMS and CICS.
This chapter includes a discussion of some possible data that may be reported for
CICS and IMS, and provides some possible explanations for the data. Based on this
discussion and the explanations, you may decide to alter your service class
definitions. In some cases, there may be some actions that you can take, in which
case you can follow the suggestion. In other cases, the explanations are provided
only to help you better understand the data. For more information about using
RMF, see the RMF User’s Guide.
These explanations are given for two main sections of the reports:
v The response time breakdown in percentage section
v The state section, covering switched time.
The WAITING FOR main heading is further broken down into a number of
subsidiary headings. Where applicable, for waits other than those described for the
IDLE condition described above, CICS interprets the cause of the wait, and records
the ‘waiting for’ reason in the WLM performance block.
The waiting-for terms used in the RMF report equate to the WLM_WAIT_TYPE
parameter on the SUSPEND, WAIT_OLDC, WAIT_OLDW, and WAIT_MVS calls
used by the dispatcher, and the SUSPEND and WAIT_MVS calls used in the CICS
XPI. These are shown as follows (with the CICS WLM_WAIT_TYPE term, where
different from RMF, in parenthesis):
Term Description
LOCK Waiting on a lock. For example, waiting for:
v A lock on a CICS resource
v A record lock on a recoverable VSAM file
v Exclusive control of a record in a BDAM file
v An application resource that has been locked by an EXEC CICS ENQ
command.
I/O (IO)
Waiting for an I/O request or I/O related request to complete. For
example:
v File control, transient data, temporary storage, or journal I/O.
v Waiting on I/O buffers or VSAM strings.
CONV
Waiting on a conversation between work manager subsystems. This
information is further analyzed under the SWITCHED TIME heading.
DIST Not used by CICS.
LOCAL (SESS_LOCALMVS)
Waiting on the establishment of a session with another CICS region in the
same MVS image in the sysplex.
SYSPL (SESS_SYSPLEX)
Waiting on establishment of a session with another CICS region in a
different MVS image in the sysplex.
REMOT (SESS_NETWORK)
Waiting on the establishment of an ISC session with another CICS region
(which may, or may not, be in the same MVS image).
TIMER
Waiting for a timer event or an interval control event to complete. For
example, an application has issued an EXEC CICS DELAY or EXEC CICS
WAIT EVENT command which has yet to complete.
PROD (OTHER_PRODUCT)
Waiting on another product to complete its function; for example, when
the work request has been passed to a DB2 or DBCTL subsystem.
For more information on the MVS workload manager states and resource names
used by CICS Transaction Server for OS/390 Release 3, see the CICS Problem
Determination Guide.
The text following the figure explains how to interpret the fields.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSHR RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
An RMF workload activity report contains “snapshot data” which is data collected
over a relatively short interval. The data for a given work request (CICS
transaction) in an MRO environment is generally collected for more than one CICS
region, which means there can be some apparent inconsistencies between the
execution (EXE) phase and the begin to end (BTE) data in the RMF reports. This is
caused by the end of a reporting interval occurring at a point when work has
completed in one region but not yet completed in an associated region. See
Figure 22.
For example, an AOR can finish processing transactions, the completion of which
are included in the current reporting interval, whilst the TOR may not complete its
processing of the same transactions during the same interval.
The fields in this RMF report describe an example CICS hotel reservations service
class (CICSHR), explained as follows:
CICS This field indicates that the subsystem work manager is CICS.
BTE This field indicates that the data in the row relates to the begin-to-end work
phase.
CICS transactions are analyzed over two phases: a begin-to-end (BTE)
phase, and an execution (EXE) phase.
The begin-to-end phase usually takes place in the terminal owning region
(TOR), which is responsible for starting and ending the transaction.
EXE This field indicates that the data in the row relates to the execution work
phase. The execution phase can take place in an application owning region
(AOR) and a resource-owning region such as an FOR. In our example, the
Note: In our example the two phases show the same number of
transactions completed, indicating that during the reporting interval
all the transactions routed by the TORs (ENDED) were completed
by the AORs (EXECUTD) and also completed by the TORs. This will
not normally be the case because of the way data is captured in
RMF reporting intervals. See “RMF reporting intervals” on page 137.
ACTUAL
Shown under TRANSACTION TIME, this field shows the average response
time as 0.114 seconds, for the 216 transactions completed in the BTE phase.
EXECUTION
Shown under TRANSACTION TIME, this field shows that on average it
took 0.078 seconds for the AORs to execute the transactions.
While executing these transactions, CICS records the states the transactions are
experiencing. RMF reports the states in the RESPONSE TIME BREAKDOWN IN
PERCENTAGE section of the report, with one line for the begin-to-end phase, and
another for the execution phase.
| The response time analysis for the BTE phase is described as follows:
| For BTE
| Explanation
| TOTAL
| The CICS BTE total field shows that the TORs have information covering
| 93.4% of the ACTUAL response time, the analysis of which is shown in the
| remainder of the row. This value is the ratio of sampled response times to
| actual response times. The sampled response times are derived by
| calculating the elapse times to be the number of active performance blocks
| (inflight transactions) multiplied by the sample interval time. The actual
| response times are those reported to RMF by CICS when each transaction
| ends. The proximity of the total value to 100% and a relatively small
| standard deviation value are measures of how accurately the sampled data
| represents the actual system behavior. “Possible explanations” on page 141
| shows how these reports can be distorted.
| ACTIVE
| On average, the work (transactions) was active in the TORs for only about
| 10.2% of the ACTUAL response time.
| READY
| In this phase, the TORs did not detect that any part of the average
| response time was accounted for by work that was dispatchable but
| waiting behind other transactions.
| IDLE In this phase, the TORs did not detect that any part of the average
| response time was accounted for by transactions that were waiting for
| work.
| Note: In the analysis of the BTE phase, the values do not exactly add up to the
TOTAL value because of rounding—in our example, 10.2 + 83.3 = 93.5,
against a total shown as 93.4.
The response time analysis for the EXE phase is described as follows:
For EXE
Explanation
TOTAL
The CICS EXE total field shows that the AORs have information covering
67% of the ACTUAL response time.
ACTIVE
On average, the work is active in the AOR for only about 13.2% of the
average response time.
READY
On average the work is ready, but waiting behind other tasks in the region,
for about 7.1% of the average response time.
PROD On average, 46.7% of the average response time is spent outside the CICS
subsystem, waiting for another product to provide some service to these
transactions.
You can’t tell from this RMF report what the other product is, but the
probability is that the transactions are accessing data through a database
manager such as Database Control (DBCTL) or DB2.
Possible explanations
There are several possible explanations for the unusual values shown in this
report:
v Long-running transactions
v Never-ending transactions
v Conversational transactions
v Dissimilar work in service class
Long-running transactions
| The RMF report in Figure 23 on page 138 shows both very high response times
| percentages and a large standard deviation of reported transaction times.
| The report shows for the recorded 15 minute interval that 1648 transactions
| completed in the TOR. These transactions had an actual average response time of
| 0.111 seconds (note that this has a large standard deviation), giving a total of
| 182.9 seconds running time (0.111 seconds multiplied by 1648 transactions).
| However, if a large number of long-running transactions are also running, these
| are counted in the sampled data but not included in the actual response time
| values. If the number of long-running transactions is large, the distortion of the
| Total value will also be very large.
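A rough C sketch of this distortion, using the figures above plus an assumed
number of long-running transactions (the assumption is illustrative only):
sampled time grows with every in-flight transaction, while actual time
reflects only transactions that ended in the interval.

#include <stdio.h>

int main(void)
{
    /* Figures from the discussion above, plus an assumed long-running
       transaction load (illustration only).                           */
    double ended      = 1648;   /* transactions ended in the interval  */
    double actual_avg = 0.111;  /* actual average response time (sec)  */
    double interval   = 900;    /* 15-minute reporting interval (sec)  */
    double long_tasks = 10;     /* assumed long-running transactions
                                   in flight for the whole interval    */

    double actual_total  = ended * actual_avg;            /* 182.9 s   */
    double sampled_total = actual_total + long_tasks * interval;

    printf("TOTAL%% = %.0f%%\n", 100.0 * sampled_total / actual_total);
    return 0;
}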
Never-ending transactions
Never-ending transactions differ from long-running transactions in that they persist
for the life of a region. For CICS, these could include the IBM reserved transactions
such as CSNC and CSSY, or customer defined transactions. Never-ending
transactions are reported in a similar way to long-running transactions, as
explained above. However, for never-ending CICS transactions, RMF might report
large percentages in IDLE, or under TIMER or MISC in the WAITING FOR section.
Possible actions
The following are some actions you could take for reports of this type:
Group similar work into the same service classes: Make sure your service classes
represent groups of similar work. This could require creating additional service
classes. For the sake of simplicity, you may have only a small number of service
classes for CICS work. If there are transactions for which you want the RMF
response time breakdown data, consider including them in their own service class.
Do nothing: For service classes representing dissimilar work such as the subsystem
default service class, recognize that the response time breakdown could include
long-running or never-ending transactions. Accept that RMF data for such service
classes does not make much sense.
Possible explanations
There are two possible explanations:
1. No transactions completed in the interval
2. RMF did not receive data from all systems in the sysplex.
RMF did not receive data from all systems in the sysplex.
The RMF post processor may have been given SMF records from only a subset of
the systems running in the sysplex. For example, the report may represent only a
single MVS image. If that MVS image has no TOR, its AORs receive CICS
transactions routed from another MVS image or from outside the sysplex. Since the
response time for the transactions is reported by the TOR, there is no transaction
response time for the work, nor are there any ended transactions.
Possible actions
The following are some actions you could take for reports of this type:
Do nothing
You may have created this service class especially to prevent the state samples of
long running transactions from distorting data for your production work. In this
case there is no action to take.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
CICS Trans not classified singly
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.091
MPL 0.00 QUEUED 000.00.00.020
ENDED 1731 EXECUTION 000.00.00.113
END/SEC 1.92 STANDARD DEVIATION 000.00.00.092
#SWAPS 0
EXECUTD 1086
Possible explanation
The situation illustrated by this example could be explained by the service class
containing a mixture of routed and non-routed transactions. In this case, the AORs
have recorded states which account for more time than the average response time
of all the transactions. The response time breakdown shown by RMF for the
execution phase of processing can again show percentages exceeding 100% of the
response time.
Possible actions
Define routed and non-routed transactions in different service classes.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.150
MPL 0.00 QUEUED 000.00.00.039
ENDED 3599 EXECUTION 000.00.00.134
END/SEC 4.00 STANDARD DEVIATION 000.00.00.446
#SWAPS 0
EXECUTD 2961
Possible actions
None.
Possible explanation
This situation could be caused by converting from ISC to MRO between the TOR
and the AOR.
When two CICS regions are connected via VTAM intersystem communication (ISC)
links, the perspective from a WLM viewpoint is that they behave differently from
when they are connected via multiregion (MRO) option. One key difference is that,
with ISC, both the TOR and the AOR are receiving a request from VTAM, so each
believes it is starting and ending a given transaction. So for a given user request
routed from the TOR via ISC to an AOR, there would be 2 completed transactions.
Let us assume they have response times of 1 second and .75 seconds respectively,
giving an average of .875 seconds. When the TOR routes via MRO, the TOR
will describe a single completed transaction taking 1 second (in a begin-to-end
phase), and the AOR will report its .75 seconds as execution time. Therefore,
converting from an ISC link to an MRO connection, for the same workload, could
result in half the number of ended transactions and a corresponding increase in
the response time reported by RMF.
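The arithmetic can be checked with a short C sketch comparing what would be
reported for the same user request under each connection type (the values are
the ones assumed above):

#include <stdio.h>

int main(void)
{
    double tor_time = 1.00, aor_time = 0.75;

    /* ISC: both regions report an ended transaction. */
    double isc_ended = 2;
    double isc_avg   = (tor_time + aor_time) / isc_ended;  /* 0.875 s */

    /* MRO: one ended transaction (begin-to-end in the TOR); the AOR's
       0.75 s is reported as execution time, not a separate ending.    */
    double mro_ended = 1;
    double mro_avg   = tor_time / mro_ended;                /* 1.0 s  */

    printf("ISC: %g ended, avg %.3f s\n", isc_ended, isc_avg);
    printf("MRO: %g ended, avg %.3f s\n", mro_ended, mro_avg);
    return 0;
}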
Possible action
Increase CICS transaction goals prior to your conversion to an MRO connection.
If you are in one of the first two categories, you can skip this chapter and the next
and go straight to “Chapter 12. CICS performance analysis” on page 169.
If the current performance does not meet your needs, you should consider tuning
the system. The basic rules of tuning are:
1. Identify the major constraints in the system.
2. Understand what changes could reduce the constraints, possibly at the expense
of other resources. (Tuning is usually a trade-off of one resource for another.)
3. Decide which resources could be used more heavily.
4. Adjust the parameters to relieve the constrained resources.
5. Review the performance of the resulting system in the light of:
v Your existing performance objectives
v Progress so far
v Tuning effort so far.
6. Stop if performance is acceptable; otherwise do one of the following:
v Continue tuning
v Add suitable hardware capacity
v Lower your system performance objectives.
If performance is acceptable, continue monitoring the system as planned.
Otherwise, devise a tuning strategy that will minimize the usage of resources or
expand the capacity of the system, identify the variables, and predict the
effects.
A typical measurement and evaluation plan might include the following items as
objectives, with statements of recording frequency and the measurement tool to be
used:
v Volume and response time for each department
v Network activity:
– Total transactions
– Tasks per second
– Total by transaction type
– Hourly transaction volume (total, and by transaction).
v Resource utilization examples:
– DSA utilization
– Processor utilization with CICS
– Paging rate for CICS and for the system
– Channel utilization
– Device utilization
– Data set utilization
– Line utilization.
v Unusual conditions:
– Network problems
– Application problems
– Operator problems
– Transaction count for entry to transaction classes
Performance degradation is often due to application growth that has not been
matched by corresponding increases in hardware resources. If this is the case, solve
the hardware resource problem first. You may still need to follow on with a plan
for multiple regions.
The tasks may simply be trying to do too much work for the system: users are
trying to put more work through the system than it can handle, and the excess
demand inevitably lengthens response times.
Another possibility is that the system is real-storage constrained, and therefore the
tasks progress more slowly than expected because of paging interrupts. These
would show as delays between successive requests recorded in the CICS trace.
Yet another possibility is that many of the CICS tasks are waiting because there is
contention for a particular function. There is a wait on strings on a particular data
set, for example, or there is an application enqueue such that all the tasks issue an
enqueue for a particular item, and most of them have to wait while one task
actually does the work. Auxiliary trace enables you to distinguish most of these
cases.
Again, CICS statistics may reveal heavy use of some resource. For example, you
may find a very large allocation of temporary storage in main storage, a very high
number of storage control requests per task (perhaps 50 or 100), or high program
use counts that may imply heavy use of program control LINK.
Both statistics and CICS monitoring may show exceptional conditions arising in the
CICS run. Statistics can show waits on strings, waits for VSAM shared resources,
waits for storage in GETMAIN requests, and so on. These also generate CICS
monitoring facility exception class records.
While these conditions are also evident in CICS auxiliary trace, they may not
appear so obviously, and the other information sources are useful in directing the
investigation of the trace data.
In addition, you may gain useful data from the investigation of CICS outages. If
there is a series of outages, common links between the outages should be
investigated.
The next chapter tells you how to identify the various forms of CICS constraints,
and Chapter 12 gives you more information on performance analysis techniques.
The fundamental point to understand is that practically every symptom
of poor performance arises in a congested system. For example, if there is a
slowdown in DASD, transactions doing data set activity pile up: there are waits on
strings; there are more transactions in the system, there is therefore a greater
virtual storage demand; there is a greater real storage demand; there is paging;
and, because there are more transactions in the system, the task dispatcher uses
more processor power scanning the task chains. You then get task constraints, your
MXT or transaction class limit is exceeded and adds to the processor overhead
because of retries, and so on.
The result is that the system shows heavy use of all its resources, and this is the
typical system stress. It does not mean that there is a problem with all of them; it
means that there is a constraint that has yet to be found. To find the constraint,
you have to find what is really affecting task life.
When checking whether the performance of a CICS system is in line with the
system’s expected or required capability, you should base this investigation on the
hardware, software, and applications that are present in the installation.
If, for example, an application requires 100 accesses to a database, a response time
of three to six seconds may be considered to be quite good. If an application
requires only one access, however, a response time of three to six seconds for disk
accesses would need to be investigated. Response times, however, depend on the
speed of the processor, and on the nature of the application being run on the
production system.
You should also observe how consistent the response times are. Sharp variations
indicate erratic system behavior.
The typical way in which the response time in the system may vary with
increasing transaction rate is gradual at first, then deteriorates rapidly and
suddenly. The typical curve shows a sharp change when, suddenly, the response
time increases dramatically for a relatively small increase in the transaction rate.
Figure 29. Graph to show the effect of response time against increasing load.
(The curve plots response time against increasing load or decreasing resource
availability; beyond point C the response time becomes unacceptable (poor).)
For stable performance, it is necessary to keep the system operating below this
point where the response time dramatically increases.
Response time can be considered as being made up of queue time and service
time. Service time is generally independent of usage, but queue time is not. For
example, 50% usage implies a queue time approximately equal to service time, and
80% usage implies a queue time approximately four times the service time. If
service time for a particular system is only a small component of the system
response, for example, in the processor, 80% usage may be acceptable. If it is a
greater portion of the system response time, for example, in a communication line,
50% usage may be considered high.
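These ratios follow from the standard single-server queuing approximation (a
sketch, assuming random arrivals and a single server):
   Queue time = service time x (usage / (1 - usage))
At 50% usage this gives 0.5/0.5 = 1 times the service time; at 80% usage it gives
0.8/0.2 = 4 times the service time.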
If you are trying to find the response time from a terminal to a terminal, you
should be aware that the most common “response time” obtainable from any aid
or tool that runs in the host is the “internal response time.” Trace can identify only
when the software in the host, that is, CICS and its attendant software, first “sees”
the message on the inbound side, and when it last “sees” the message on the
outbound side.
Internal response time gives no indication of how long a message took to get from
the terminal, through its control unit, across a line of whatever speed, through the
communication controller (whatever it is), through the communication access
method (whatever it is), and any delays before the channel program that initiated
the read is finally posted to CICS. Nor does it account for the time it might take
for CICS to start processing this input message. There may have been much work
for CICS to do before terminal control regained control and found this posted
event.
The same is true on the outbound side. CICS auxiliary trace knows when the
application issued its request, but that has little to do with when terminal control
found the request, when the access method ships it out, when the controllers can
get to the device, and so on.
While the outward symptom of poor performance is overall bad response, there
are progressive sets of early warning conditions which, if correctly interpreted, can
ease the problem of locating the constraint and removing it.
In the advice given so far, we have assumed that CICS is the only major program
running in your system. If batch programs or other online programs are running
simultaneously with CICS, you must ensure that CICS receives its fair share of the
system resources and that interference from other regions does not seriously
degrade CICS performance.
Storage stress
Stress is the term used in CICS for a shortage of free space in one of the dynamic
storage areas.
Storage stress can be a symptom of other resource constraints that cause CICS
tasks to occupy storage for longer than is normally necessary, or of a flood of tasks
which simply overwhelms available free storage, or of badly designed applications
that require unreasonably large amounts of storage.
User runtime control of storage usage is achieved through appropriate use of MXT
and transaction class limits. This is necessary to avoid the short-on-storage
condition that can result from unconstrained demand for storage.
Short-on-storage condition
CICS reserves a minimum number of free storage pages for use only when there is
not enough free storage to satisfy an unconditional GETMAIN request even when
all not-in-use, nonresident programs have been deleted.
Whenever a request for storage results in the number of contiguous free pages in
one of the dynamic storage areas falling below its respective cushion size, or
failing to be satisfied even with the storage cushion, a cushion stress condition
exists. Details are given in the storage manager statistics (“Times request
suspended”, “Times cushion released”). CICS attempts to alleviate the storage
stress situation by releasing programs with no current user and slowing the
attachment of new tasks. If these actions fail to alleviate the situation or if the
stress condition is caused by a task that is suspended for SOS, a short-on-storage
condition is signaled. This is accompanied by message DFHSM0131 or
DFHSM0133.
If you have application programs that use temporary data sets, with a different
name for every data set created, it is important that your programs remove these
after use. See the CICS System Programming Reference for information about how
you can use the SET DSNAME command to remove unwanted temporary data sets
from your CICS regions.
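As an illustration only (the data set name is hypothetical), a housekeeping
program might remove such a data set with:
   EXEC CICS SET DSNAME('TEMP.WORK.DS01') REMOVE
See the CICS System Programming Reference for the exact conditions under which
REMOVE is accepted.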
Purging of tasks
If a CICS task is suspended for longer than its DTIMOUT value, it may be purged
if SPURGE=YES is specified on the RDO transaction definition. That is, the task is
abended and its resources freed, thus allowing other tasks to use those resources.
In this way, CICS attempts to resolve what is effectively a deadlock on storage.
CICS hang
If purging tasks is not possible or not sufficient to solve the problem, CICS ceases
processing. You must then either cancel and restart the CICS system, or initiate or
allow an XRF takeover.
A page-in operation causes the MVS task which requires it to stop until the page
has been retrieved. If the page is to be retrieved from DASD, this has a significant
effect. When the page can be retrieved from expanded storage, the impact is only a
relatively small increase in processor usage.
The loading of a program into CICS storage can be a major cause of page-ins.
Because this is carried out under a subtask separate from CICS main activity, such
page-ins do not halt most other CICS activities.
What is paging?
The virtual storage of a processor may far exceed the size of the central storage
available in the configuration. Any excess must be maintained in auxiliary storage
(DASD), or in expanded storage. This virtual storage occurs in blocks of addresses
called “pages”. Only the most recently referenced pages of virtual storage are
assigned to occupy blocks of physical central storage. When reference is made to a
page of virtual storage that does not appear in central storage, the page is brought
in from DASD or expanded storage to replace a page in central storage that is not
in use and has been least recently referenced.
The newly referenced page is said to have been “paged in”. The displaced page
may need to be “paged out” if it has been changed.
A page-in from expanded storage incurs only a small processor usage cost, but a
page-in from DASD incurs a time cost for the physical I/O and a more significant
increase in processor usage.
Thus, extra DASD page-in activity slows down the rate at which transactions flow
through the CICS system, that is, transactions take longer to get through CICS, you
get more overlap of transactions in CICS, and so you need more virtual and real
storage.
If you suspect that a performance problem is related to excessive paging, you can
use RMF to obtain the paging rates.
Consider controlling CICS throughput by using MXT and transaction class limits in
CICS on the basis that a smaller number of concurrent transactions requires less
real storage, causes less paging, and may be processed faster than a larger number
of transactions.
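As a sketch (the class name and limits are illustrative, not recommendations),
throughput could be capped with the MXT system initialization parameter and a
transaction class:
   MXT=60                                  (system initialization parameter)
   CEDA DEFINE TRANCLASS(HEVYWGHT) GROUP(PERFGRP) MAXACTIVE(10)
Transactions that name this class in the TRANCLASS attribute of their
transaction definitions are then limited to ten concurrent tasks.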
What is an ideal CICS paging rate from DASD? Less than one page-in per second
is best, to maximize the throughput capacity of the CICS region. Fewer than five
page-ins per second is probably acceptable, up to ten may be tolerable, and more
than ten is probably a major problem. Because CICS performance can be affected
by the waits associated with paging, you should not allow paging to exceed five
to ten pages per second.
Note: The degree of sensitivity of CICS systems to paging from DASD depends on
the transaction rate, the processor loading, and the average internal lifetime
of the CICS tasks. An ongoing, hour-on-hour rate of even five page-faults
per second may be excessive for some systems, particularly when you
realize that peak paging rates over periods of ten seconds or so could easily
be four times that figure.
What paging rates are excessive on various processors and are these rates
operating-system dependent? Excessive paging rates should be defined as those
which cause excessive delays to applications. The contribution caused by the
high-priority paging supervisor executing instructions and causing applications to
wait for the processor is probably a minor consideration as far as overall delays to
applications are concerned. Waiting on a DASD device is the dominant part of the
overall delays. This means that the penalty of “high” paging rates has almost
nothing to do with the processor type.
Storage violations can be reduced considerably if CICS has storage protection and
transaction isolation enabled.
See the CICS Problem Determination Guide for further information about diagnosing
and dealing with storage violations.
Hardware constraints
1. Processor cycles. It is not uncommon for transactions to execute more than one
million instructions. To execute these instructions, they must contend with
other tasks and jobs in the system. At different times, these tasks must wait for
such activities as file I/O. Transactions give up their use of the processor at
these points and must contend for use of the processor again when the activity
has completed. Dispatching priorities affect which transactions or jobs get use
of the processor, and batch or other online systems may affect response time
through receiving preferential access to the processor. Batch programs accessing
online databases also tie up those databases for longer periods of time if their
dispatching priority is low. At higher usages, the wait time for access to the
processor can be significant.
2. Real storage (working set). Just as transactions must contend for the processor,
they also must be given a certain amount of real storage. A real storage
shortage can be particularly significant in CICS performance because a normal
page fault to acquire real storage results in synchronous I/O. The basic design
of CICS is asynchronous, which means that CICS processes requests from
multiple tasks concurrently to make maximum use of the processor. Most
paging I/O is synchronous and causes the MVS task that CICS is using to wait,
and that part of CICS cannot do any further processing until the page has been
retrieved.
Software constraints
1. Database design. A data set or database needs to be designed to the needs of the
application it is supporting. Such factors as the pattern of access to the data set
(especially whether it is random or sequential), access methods chosen, and the
frequency of access determine the best database design. Such data set
characteristics as physical record size, blocking factors, the use of alternate or
secondary indexes, the hierarchical or relational structure of database segments,
database organization (HDAM, HIDAM, and so on), and pointer arrangements
are all factors in database performance.
The length of time between data set reorganizations can also affect
performance. The efficiency of accesses decreases as the data set becomes more
and more fragmented. This fragmentation can be kept to the minimum by
reducing the length of time between data set reorganizations.
2. Network design. This item can often be a major factor in response time because
the network links are much slower than most components of an online system.
Processor operations are measured in nanoseconds, line speeds in seconds.
Screen design can also have a significant effect on overall response time. A
1200-byte message takes one second to be transmitted on a relatively
high-speed 9600 bits-per-second link. If 600 bytes of the message are not
needed, half a second of response time is wasted. Besides screen design and
size, such factors as how many terminals are on a line, the protocols used
(SNA, bisynchronous), and full-or half-duplex capabilities can affect
performance.
3. Use of specific software interfaces or serial functions. The operating system, terminal
access method, database manager, data set access method, and CICS must all
communicate in the processing of a transaction. Only a given level of
concurrent processing can occur at these points, and this can also cause a
performance constraint. Examples of this include the VTAM receive-any pool
(RAPOOL), VSAM data set access (strings), CICS temporary storage, and CICS
transient data.
One useful technique for isolating a performance constraint in a CICS system with
VTAM is to use the IBMTEST command issued from a user’s terminal. This
terminal must not be in session with CICS, but must be connected to VTAM.
The form of the command is IBMTEST n(,data), where n is the number of times
you want the data echoed, and data may consist of
any character string. If you enter no data, the alphabet and the numbers zero
through nine are returned to the terminal. This command is responded to by
VTAM.
IBMTEST is an echo test designed to give the user a rough idea of the VTAM
component of terminal response time. If the response time is fast in a
slow-response system, the constraint is not likely to be any component from VTAM
onward. If this response is slow, VTAM or the network may be the reason. This
sort of deductive process in general can be useful in isolating constraints.
To avoid going into session with CICS, you may have to remove APPLID= from
the LU statement or CONNECT=AUTO from the TERMINAL definition.
Resource contention
The major resources used or managed by CICS consist of the following:
v Processor
v Real storage
v Virtual storage
v Software (specification limits)
v Channels
v Control units
v Lines
v Devices
v Sessions to connected CICS systems.
Two sets of symptoms and solutions are provided in this chapter. The first set
provides suggested solutions for poor response, and the second set provides
suggested solutions for a variety of resource contention problems.
Solutions
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.
See “DASD tuning” on page 199 for suggested solutions.
Solutions
v Reduce the line utilization.
v Reduce delays in data transmission.
v Alter the network.
Solutions
v Control the amount of queuing which takes place for the use of the connections
to the remote systems.
v Improve the response time of the remote system.
See the “Virtual storage above and below 16MB line checklist” on page 182 for a
detailed list of suggested solutions.
Solutions
v Reduce the demands on real storage
v Tune the MVS system to obtain more real storage for CICS
v Obtain more central and expanded storage.
See the “Real storage checklist” on page 183 for a detailed list of suggested
solutions.
Solutions
v Increase the dispatching priority of CICS.
v Reevaluate the relative priorities of operating system jobs.
v Reduce the number of MVS regions (batch).
v Reduce the processor utilization for productive work.
v Use only the CICS facilities that you really require.
v Turn off any trace that is not being used.
v Minimize the data being traced by reducing the:
– Scope of the trace
– Frequency of running trace.
v Obtain a faster processor.
See the “Processor cycles checklist” on page 184 for a detailed list of suggested
solutions.
Application conditions
These conditions, measured both for individual transaction types and for the total
system, give you an estimate of the behavior of individual application programs.
You should gather data for each main transaction and average values for the total
system. This data includes:
v Program calls per transaction
v CICS storage GETMAINs and FREEMAINs (number and amount)
v Application program and transaction usage
v File control (data set, type of request)
v Terminal control (terminal, number of inputs and outputs)
v Transaction routing (source, target)
v Function shipping (source, target)
v Other CICS requests.
Rapid performance degradation often occurs after a threshold is exceeded and the
system approaches its ultimate load. You can see various indications only when
the system is operating at or near that load.
Bear in mind that the performance constraints might possibly vary at different
times of the day. You might want to run a particular option that puts a particular
pressure on the system only at a certain time in the afternoon.
Before carrying out this analysis, you must have a clear picture of the functions
and the interactions of the following components:
v Operating system supervisor with the appropriate access methods
v CICS management modules and control tables
v VSAM data sets
v DL/I databases
v DB2
v External security managers
v Performance monitors
v CICS application programs
v Influence of other regions
v Hardware peripherals (disks and tapes).
Full-load measurement
A full-load measurement highlights latent problems in the system. It is important
that full-load measurement lives up to its name, that is, you should make the
measurement when, from production experience, the peak load is reached. Many
installations have a peak load for about one hour in the morning and again in the
afternoon. CICS statistics and various performance tools can provide valuable
information for full-load measurement. In addition to the overall results of these
tools, it may be useful to have the CICS auxiliary trace or RMF active for about
one minute.
Trace imposes a very heavy overhead. Use trace selectivity options to minimize this
overhead.
RMF
It is advisable to do the RMF measurement without any batch activity. (See
“Resource measurement facility (RMF)” on page 27 for a detailed description of
this tool. Guidance on how to use RMF with the CICS monitoring facility is given
in “Using CICS monitoring SYSEVENT information with RMF” on page 67.)
For full-load measurement, the system activity report and the DASD activity report
are important.
You should expect stagnant throughput and sharply climbing response times as the
processor load approaches 100%.
It is difficult to forecast the system paging rate that can be achieved without
serious detriment to performance, because too many factors interact. You should
observe the reported paging rates; note that short-duration severe paging leads to a
rapid increase in response times.
In addition to taking note of the count of start I/O operations and their average
length, you should also find out whether the system is waiting on one device only.
With disks, for example, it can happen that several frequently accessed data sets
are on one disk and the accesses interfere with each other. In each case, you should
investigate whether a system wait on a particular unit could not be minimized by
reorganizing the data sets.
Use the IOQ(DASD) option in RMF Monitor I to show DASD control unit contention.
After checking the relationship of accesses with and without arm movement, for
example, you may want to move to separate disks those data sets that are
periodically very frequently accessed.
(The comparison chart records, for each measured run: the average-use
transaction response time and number; the system paging rate; the CICS DSA
virtual storage, maximum and average; peak tasks; the number of times at MXT;
and CPU utilization.)
The use of this type of comparison chart requires the use of TPNS, RMF, and CICS
interval statistics running together for about 20 minutes, at a peak time for your
system. It also requires you to identify the following:
v A representative selection of terminal-oriented DL/I transactions accessing DL/I
databases
v A representative selection of terminal-oriented transactions processing VSAM
files
v The most heavily used transaction
v Two average-use nonterminal-oriented transactions writing data to intrapartition
transient data destinations
v The most heavily used volume in your system
v A representative average-use volume in your system.
To complete the comparison chart for each CICS run before and after a tuning
change, you can obtain the figures from the TPNS, RMF, and CICS interval
statistics reports described above.
Single-transaction measurement
You can use full-load measurement to evaluate the average loading of the system
per transaction. However, this type of measurement cannot provide you with
information on the behavior of a single transaction and its possible excessive
loading of the system. If, for example, nine different transaction types issue five
start I/Os (SIOs) each, but the tenth issues 55 SIOs, this results in an average of
ten SIOs per transaction type. This should not cause concern if they are executed
simultaneously. However, an increase of the transaction rate of the tenth
transaction type could possibly lead to poor performance overall.
Sometimes, response times are quite good with existing terminals, but adding a
few more terminals leads to unacceptable degradation of performance. In this case,
the performance problem may be present with the existing terminals, and has
simply been highlighted by the additional load.
You should measure each existing transaction that is used in a production system
or in a final test system. Test each transaction two or three times with different
data values, to exclude an especially unfavorable combination of data. Document
the sequence of transactions and the values entered for each test as a prerequisite
for subsequent analysis or interpretation.
Between the tests of each single transaction, there should be a pause of several
seconds, to make the trace easier to read. A copy of the production database or
data set should be used for the test, because a test data set containing 100 records
can very often result in completely different behavior when compared with a
production data set containing 100 000 records.
The condition of data sets has often been the main reason for performance
degradation, especially when many segments or records have been added to a
database or data set. Do not do the measurements directly after a reorganization,
because the database or data set is only in this condition for a short time. On the
other hand, if the measurement reveals an unusually large number of disk
accesses, you should reorganize the data and do a further measurement to evaluate
the effect of the data reorganization.
You may feel that single-transaction measurement under these conditions, with
only one terminal, is not an efficient tool for revealing a performance degradation
that might occur when perhaps 40 or 50 terminals are in use. Practical experience has
shown, however, that this is usually the only means for revealing and rectifying,
with justifiable expense, performance degradation under full load. The main reason
for this is that it is sometimes a single transaction that throws the system behavior
out of balance. Single-transaction measurement can be used to detect this.
Ideally, single-transaction measurement should be carried out during the final test
phase of the transactions. This gives the following advantages:
v Any errors in the behavior of transactions may be revealed before production
starts, and these can be put right during validation, without loading the
production system unnecessarily.
v The application is documented during the measurement phase. This helps to
identify the effects of later changes.
From this trace, you can find out whether a specified application is running as it is
expected to run. In many cases, it may be necessary for the application
programmer responsible to be called in for the analysis, to explain what the
transaction should actually be doing.
If you have a very large number of transactions to analyze, you can select, in a
first pass, the transactions whose behavior does not comply with what is expected.
If, on the other hand, only a few transactions remain in this category, these
transactions should be analyzed next, because it is highly probable that most
performance problems to date arise from these.
A system is always constrained. You do not simply remove a constraint; you can
only choose the most satisfactory constraint. Consider which resources can accept
an additional load in the system without themselves becoming worse constraints.
Tuning usually involves a variety of actions that can be taken, each with its own
trade-off. For example, if you have determined virtual storage to be a constraint,
your tuning options may include reducing buffer allocations for data sets, or
reducing terminal scan delay (ICVTSD) to shorten the task life in the processor.
The first option increases data set I/O activity, and the second option increases
processor usage. If one or more of these resources are also constrained, tuning
could actually cause a performance degradation by causing the other resource to
be a greater constraint than the present constraint on virtual storage.
Important
Always tune DASD, the network, and the overall MVS system before tuning
any individual CICS subsystem through CICS parameters.
“Chapter 14. Performance checklists” on page 181 itemizes the actions you can take
to tune the performance of an operational CICS system.
The other chapters in this part contain the relevant performance tuning guidelines
for the following aspects of CICS:
v “Chapter 15. MVS and DASD” on page 187
v “Chapter 16. Networking and VTAM” on page 201
v “Chapter 18. VSAM and file control” on page 225
v “Chapter 21. Database management” on page 263
v “Chapter 22. Logging and journaling” on page 271
v “Chapter 23. Virtual and real storage” on page 283
v “Chapter 24. MRO and ISC” on page 305
v “Chapter 25. Programming considerations” on page 315
v “Chapter 26. CICS facilities” on page 321
v “Chapter 27. Improving CICS startup and normal shutdown time” on page 339.
There are four checklists, corresponding to four of the main contention areas
described in “Chapter 11. Identifying CICS constraints” on page 155.
1. I/O contention (this applies to data set and database subsystems, as well as to
the data communications network)
2. Virtual storage above and below the 16MB line
3. Real storage
4. Processor cycles.
The checklists are in the sequence of low-level to high-level resources, and the
items are ordered from those that probably have the greatest effect on performance
to those that have a lesser effect, from the highest likelihood of being a factor in a
normal system to the lowest, and from the easiest to the most difficult to
implement.
Before taking action on a particular item, you should review the item to:
v Determine whether the item is applicable in your particular environment
v Understand the nature of the change
v Identify the trade-offs involved in the change.
Note:
Ideally, I/O contention should be reduced by using very large data buffers
and keeping programs in storage. This would require adequate central and
expanded storage, and programs that can be loaded above the 16MB line.
Item                                                            Page
VSAM considerations
Review use of LLA                                                197
Implement Hiperspace buffers                                     240
Review/increase data set buffer allocations within LSR           235
Use data tables when appropriate                                 244
Database considerations
Replace DL/I function shipping with IMS/ESA DBCTL facility       263
Reduce/replace shared database access to online data sets        263
Review DB2 threads and buffers                                   266
Journaling
Miscellaneous
Reduce DFHRPL library contention                                 299
Review temporary storage strings                                 321
Review transient data strings                                    326
Note:
The lower the number of concurrent transactions in the system, the lower the
usage of virtual storage. Therefore, improving transaction internal response
time decreases virtual storage usage. Keeping programs in storage above the
16MB line, and minimizing physical I/Os makes the largest contribution to
well-designed transaction internal response time improvement.
Item                                                            Page
CICS region
Increase CICS region size                                        192
Reorganize program layout within region                          299
Split the CICS region                                            284
DSA sizes
Specify optimal size of the dynamic storage areas upper limits (DSALIM, EDSALIM)   625
Adjust maximum tasks (MXT)                                       287
Control certain tasks by transaction class                       288
Put application programs above 16MB line                         300
Database considerations
Increase use of DBCTL and reduce use of shared database facility   263
Replace DL/I function shipping with IMS DBCTL facility           263
Review use of DB2 threads and buffers                            266
Applications
Compile COBOL programs RES, NODYNAM                              316
Use PL/I shared library facility                                 317
Implement VS COBOL II                                            317
Journaling
MRO/ISC considerations
Implement MVS cross-memory services with MRO                     305
Implement MVS cross-memory services with shared database programs   305
Miscellaneous
Reduce use of aligned maps                                       298
Prioritize transactions                                          291
Use only required CICS recovery facilities                       334
Recycle job initiators with each CICS startup                    193
Note:
Adequate central and expanded storage is vital to achieving good
performance with CICS.
Item                                                            Page
MVS considerations
Dedicate, or fence, real storage to CICS                         190
Make CICS nonswappable                                           190
Move CICS code to the LPA/ELPA                                   297
VSAM considerations
Review the use of Hiperspace buffers                             240
Use VSAM LSR where possible                                      240
Review the number of VSAM buffers                                235
Review the number of VSAM strings                                237
MRO/ISC considerations
Implement MVS cross-memory services with MRO                     305
Implement MVS cross-memory services with shared database programs
Use CICS intercommunication facilities                           305
Database considerations
Journaling
Applications
Use PL/I shared library facilities                               317
Compile COBOL programs RES, NODYNAM                              316
Miscellaneous
Decrease region exit interval                                    194
Reduce trace table size                                          332
Use only required CICS recovery facilities                       334
Note:
Minimizing physical I/Os by employing large data buffers and keeping
programs in storage reduces processor use, if adequate central and expanded
storage is available.
Item                                                            Page
General
Reduce or turn off CICS trace                                    332
Increase CICS dispatching level or performance group             192
MRO/ISC considerations
Implement MVS cross-memory services with MRO                     305
Implement MRO fastpath facilities                                305
Implement MVS cross-memory services with shared database programs   263
Use CICS intercommunication facilities                           305
Database considerations
Journaling
Increase activity keypoint frequency (AKPFREQ) value             279
Miscellaneous
Use only required CICS monitoring facilities                     331
Review use of required CICS recovery facilities                  334
Review use of required CICS security facilities                  334
Increase region exit interval                                    194
Review use of program storage                                    299
Use NPDELAY for unsolicited input errors on TCAM lines           214
Prioritize transactions                                          291
Because tuning is a top-down activity, you should already have made a vigorous
effort to tune MVS before tuning CICS. Your main effort to reduce virtual storage
constraint and to get relief should be concentrated on reducing the life of the
various individual transactions: in other words, shortening task life.
This section describes some of the techniques that can contribute significantly to
shorter task life, and therefore, a reduction in virtual storage constraint.
If page-ins are occurring frequently (more than 5 to 10 page-ins per second
affects CICS performance), additional real storage can reduce waits for the
paging subsystem.
MVS provides storage isolation for an MVS performance group, which allows you
to reserve a specific range of real storage for the CICS address space and to control
the page-rates for that address space based on the task control block (TCB) time
absorbed by the CICS address space during execution.
So far (except when describing storage isolation and DASD sharing), we have
concentrated on CICS systems that run as a stand-alone single CICS address space.
The sizes of all MVS address spaces are defined by the common requirements of
the largest subsystem. If you want to combine the workload from two or more
processors onto an MVS image, you must be aware of the virtual storage
requirements of each of the subsystems that are to execute on the single-image
ESA processor. Review the virtual storage effects of combining the following kinds
of workload on a single-image MVS system:
1. CICS and a large number (100 or more) of TSO users
2. CICS and a large IMS system
3. CICS and 5000 to 7500 VTAM LUs.
By its nature, CICS requires a large private region that may not be available once
the common system requirements of these other subsystems have been satisfied. If,
after tuning the operating system, VTAM, VSAM, and CICS, you find that your
address space requirements still exceed that available, you can split CICS using
one of three options:
1. Multiregion option (MRO)
2. Intersystem communication (ISC)
3. Multiple independent address spaces.
Adding large new applications or making major increases in the size of your
VTAM network places large demands on virtual storage, and you must analyze
them before implementing them in a production system. Careful analysis and
system specification can avoid performance problems arising from the addition of
new applications in a virtual-storage-constrained environment.
If you have not made the necessary preparations, you usually become aware of
problems associated with severe stress only after you have attempted to implement
the large application or major change in your production system. Some of these
symptoms are:
v Poor response times
v Short-on-storage
v Program compression
v Heavy paging activity
v Many well-tested applications suddenly abending with new symptoms
v S80A and S40D abends
v S822 abends
v Dramatic increase in I/O activity on DFHRPL program libraries.
Various chapters in the rest of this book deal with specific, individual operands
and techniques to overcome these problems. They tell you how to minimize the
use of virtual storage in the CICS address space, and how to split it into multiple
address spaces if your situation requires it.
For an overall description of ESA virtual storage, see “Appendix F. MVS and CICS
virtual storage” on page 615.
The availability of the overall system may be improved by splitting the system
because the effects of a failure can be limited or the time to recover from the
failure can be reduced.
Recommendations
If availability of your system is an important requirement, both splitting systems
and the use of XRF should be considered. The use of XRF can complement the
splitting of systems by automating the recovery of the components.
When splitting your system, you should try to separate the sources of failure so
that as much of the rest of the system as possible is protected against their failure,
and remains available for use. Critical components should be backed up, or
configured so that service can be restored with minimum delay. Since the
advantages of splitting regions for availability can be compromised if the queueing
of requests for remote regions is not controlled, you should also review
“Intersystems session queue management” on page 307.
Making CICS nonswappable prevents the address space from being swapped out
in MVS, and reduces the paging overhead. Consider leaving only very lightly used
test systems swappable.
How implemented
You should consider making your CICS region nonswappable by using the
PPTNSWP option in the MVS Program Properties Table (PPT).
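On OS/390, PPT entries are defined in a SCHEDxx member of SYS1.PARMLIB. A
minimal sketch follows (the key value shown is illustrative; check the
requirements for your release):
   PPT PGMNAME(DFHSIP)   /* CICS initialization program          */
       NOSWAP            /* make the address space nonswappable  */
       KEY(8)            /* storage protection key               */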
Limitations
Using the PPT will make all CICS systems (including test systems) nonswappable.
As an alternative, use the IPS. For more information about defining entries in the
PPT, see the OS/390 MVS Initialization and Tuning Reference manual.
How monitored
The DISPLAY ACTIVE (DA) command on SDSF gives you an indication of the
number of real pages used and the paging rate. Use RMF (the RMFMON command
on TSO) to provide additional information. For more information about RMF see
“Resource measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.
The target working set size of an XRF alternate CICS system can vary significantly
in different environments.
For the XRF alternate system that has a low activity while in the surveillance
phase, PPGRTR is a better choice because the target working set size is adjusted on
the basis of page-faults per second, rather than page-faults per execution second.
During catchup and while tracking, the real storage needs of the XRF alternate
CICS system are increased as it changes terminal session states and the contents of
the TCT. At takeover, the real storage needs also increase as the alternate CICS
system begins to switch terminal sessions and implement emergency restart. In
order to ensure good performance and minimize takeover time, the target working
set size should be increased. This can be done in several different ways, two of
which are:
1. Parameter “b” in PWSS=(a,b) can be set to “*” which allows the working set
size to increase without limit, if the maximum paging rate (parameter “d” in
PPGRTR=(c,d)) is exceeded.
2. A command can be put in the CLT to change the alternate CICS system’s
performance group at takeover to one which has different real storage isolation
parameters specified.
If you set PWSS=(*,*) and PPGRTR=(1,2), CICS can use as much storage as it
wants when its paging rate exceeds two page-faults per second. The values depend very much
on the installation and the MVS setup. The values suggested here assume that
CICS is an important address space and therefore needs service to be resumed
quickly.
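A sketch of a corresponding IEAIPSxx storage isolation entry follows; the
performance group number, domain, and dispatching priority shown are purely
illustrative:
   PGN=41,(DMN=41,DP=F45,PWSS=(*,*),PPGRTR=(1,2))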
For the definition and format of the storage isolation parameters in IEAIPSxx, see
the OS/390 MVS Initialization and Tuning Reference manual.
How implemented
See the OS/390 MVS Initialization and Tuning Reference manual.
How monitored
Use RMF (the RMFMON command on TSO) for additional information. The
DISPLAY ACTIVE (DA) command on SDSF will give you an indication of the
number of real pages used and the paging rate.
Changes to MVS and other subsystems over time generally reduce the amount of
storage required below the 16MB line. Thus you may be able to increase the CICS
region size when a new release of MVS or of a non-CICS subsystem is installed.
To get any further increase, operating-system functions and storage areas (such as
the local shared queue area, LSQA), or other programs must be reduced. The
LSQA is used by VTAM and other programs, and any increase in the CICS region
size decreases the area available for the LSQA, SWA, and subpools 229 and 230. A
shortage in these subpools can cause S80A, S40D, and S822 abends.
If you specify a larger region, the value of the relevant DSA size limit system
initialization parameter (DSALIM or EDSALIM) must be increased, or the extra
space is not used.
How implemented
The region size is defined in the startup job stream for CICS. Other definitions are
made to the operating system or through operating-system console commands.
To determine the maximum region size, determine the size of your private area
from RMF II or one of the storage monitors available.
To determine the maximum region size you should allocate, use the following
formula:
Max region possible = private area size – system region size – (LSQA + SWA +
subpools 229 and 230)
The remaining storage is available for the CICS region; for safety, use 80% or 90%
of this number. If the system is static or does not change much, use 90% of this
number for the REGION= parameter; if the system is dynamic, or changes
frequently, 80% would be more desirable.
Note: You must maintain a minimum of 200KB of free storage between the top of
the region and the bottom of the ESA high private area (the LSQA, the SWA,
and subpools 229 and 230).
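As a worked example (the figures are illustrative, not measurements): with a
9MB private area, a 0.2MB system region, and 1.8MB for the LSQA, SWA, and
subpools 229 and 230, the formula gives:
   Max region possible = 9MB - 0.2MB - 1.8MB = 7MB (7168KB)
A static system might then specify about REGION=6400K (roughly 90%), and a
frequently changing system about REGION=5700K (roughly 80%), in both cases
preserving the 200KB of free storage described in the note above.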
How monitored
Use RMF (the RMFMON command on TSO) for additional information. For more
information about RMF see “Resource measurement facility (RMF)” on page 27 or
the MVS RMF User’s Guide.
How implemented
Set the CICS priority above the automatic priority group (APG). See the OS/390
MVS Initialization and Tuning Reference manual for further information.
There are various ways to assign CICS a dispatching priority. The best is through
the ICS (PARMLIB member IEAICSxx). The ICS assigns performance group
numbers and enforces assignments. The dispatching priorities are specified in
PARMLIB member IEAIPSxx. Use APGRNG to capture the top ten priority sets (6
through 15). Specify a suitably high priority for CICS. There are priority levels that
change dynamically, but we recommend a simple fixed priority for CICS. Use
storage isolation only when necessary.
You cannot specify a response time, and you must give CICS enough resources to
achieve good performance.
See the OS/390 MVS Initialization and Tuning Reference manual for more
information.
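For illustration only (the job name and performance group number are
hypothetical), an IEAICSxx entry assigning a CICS job to a performance group
might look like:
   SUBSYS=JES2
     TRXNAME=CICSPROD,PGN=41
The dispatching priority for performance group 41 would then be specified in
IEAIPSxx, as described above.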
How monitored
Use either the DISPLAY ACTIVE (DA) command on SDSF, or RMF (the
RMFMON command on TSO). For more information about RMF see “Resource
measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.
Some fragmentation can also occur in a region when a job initiator starts multiple
jobs without being stopped and then started again. If you define the region as
having the maximum allowable storage size, it is possible to start and stop the job
the first time the initiator is used, but to have an S822 abend (insufficient virtual
storage) the second time the job is started. This is because of the fragmentation
that occurs.
In this situation, either the region has to be decreased, or the job initiator has to be
stopped and restarted.
Effects
Some installations have had S822 abends after doing I/O generations or after
adding DD statements to large applications. An S822 abend occurs when you
request a REGION=nnnnK size that is larger than the amount available in the
address space.
The maximum region size that is available is difficult to define, and is usually
determined by trial and error. One of the reasons is that the size depends on the
system generation and on DD statements.
Limitations
Available virtual storage is increased by starting new initiators to run CICS, or by
using MVS START. Startup time may be minimally increased.
How implemented
CICS startup and use of initiators are defined in an installation’s startup
procedures.
How monitored
Part of the job termination message IEF374I 'VIRT=nnnnnK' shows you the virtual
storage below the 16MB line, and another part 'EXT=nnnnnnnK' shows the virtual
storage above the 16MB line.
In general, ICV can be used in low-volume systems to keep part of the CICS
management code paged in. Expiration of this interval results in a full terminal
control table (TCT) scan in non-VTAM environments, and controls the dispatching
of terminal control in VTAM systems with low activity. Redispatch of CICS by
MVS after the wait may be delayed because of activity in the supervisor or in
higher-priority regions, for example, VTAM. The ICV delay can affect the
shutdown time if no other activity is taking place.
The value of ICV acts as a backstop for MROBTCH (see “Batching requests
(MROBTCH)” on page 311).
Main effect
The region exit interval determines the maximum period between terminal control
full scans. However, the interval between full scans in very active systems may be
less than this, being controlled by the normally shorter terminal scan delay interval
(see “Terminal scan delay (ICVTSD)” on page 211). In such systems, ICV becomes
largely irrelevant unless ICVTSD has been set to zero.
Secondary effects
Whenever control returns to the task dispatcher from terminal control after a full
scan, ICV is added to the current time of day to give the provisional due time for
the next full scan. In idle systems, CICS then goes into an operating-system wait
state, setting the timer to expire at this time. If there are application tasks to
dispatch, however, CICS passes control to these and, if the due time arrives before
CICS has issued an operating-system WAIT, the scan is done as soon as the task
dispatcher next regains control.
In active systems, after the due time has been calculated by adding ICV, the scan
may be performed at an earlier time by application activity (see “Terminal scan
delay (ICVTSD)” on page 211).
Operating-system waits are not always for the duration of one ICV. They last only
until some event ends. One possible event is the expiry of a time interval, but
often CICS regains control because of the completion of an I/O operation. Before
issuing the operating-system WAIT macro, CICS sets an operating-system timer,
specifying the interval as the time remaining until the next time-dependent activity
becomes due for processing. This is usually the next terminal control scan,
controlled by either ICV or ICVTSD, but it can be the earliest ICE expiry time, or
even less.
In high-activity systems, where CICS is contending for processor time with very
active higher-priority subsystems (VTAM, TSO, other CICS systems, or DB/DC),
control may be seized from CICS so often that CICS always has work to do and
never issues an operating-system WAIT.
Limitations
Too low a value can impair concurrent batch performance by causing frequent and
unnecessary dispatches of CICS by MVS. Too high a value can lead to an
appreciable delay before the system handles time-dependent events (such as
abends for terminal read or deadlock timeouts) after the due time.
A low ICV value does not prevent all CICS modules from being paged out. When
the ICV time interval expires, the operating system dispatches CICS task control
which, in turn, dispatches terminal control. CICS references only task control,
terminal control, the TCT, and the CSA. No other CICS modules are referenced; if
there is a storage constraint, they do not stay in real storage.
The ICV delay can affect the shutdown time if no other activity is taking place.
Recommendations
The time interval can be any decimal value in the range from 100 through 3600000
milliseconds.
A low interval value can enable much of the CICS nucleus to be retained, and not
be paged out at times of low terminal activity. This reduces the amount of paging
necessary for CICS to process terminal transactions (thus representing a potential
reduction in response time), sometimes at the expense of concurrent batch region
throughput. Large networks with high terminal activity tend to drive CICS without
a need for this value, except to handle the occasional, but unpredictable, period of
inactivity. These networks can usually function with a large interval (10000 to
30000 milliseconds). After a task has been initiated, the system recognizes its
requests for terminal services and the completion of the services, and overrides this
maximum delay interval.
Small systems or those with low terminal activity are subject to paging introduced
by other jobs running in competition with CICS. If you specify a low interval
value, key portions of the CICS nucleus are referenced more frequently, thus
reducing the probability of these pages being paged-out. However, the execution of
the logic, such as terminal polling activity, without performing productive work
might be considered wasteful.
You must weigh the need to increase the probability of residency by frequent but
unproductive referencing, against the extra overhead and longer response times
incurred by allowing the paging to occur. If you increase the interval size, more
productive work is performed at the expense of performance if paging occurs
during the periods of CICS activity.
How implemented
ICV is specified in the SIT or at startup, and can be changed using either the
CEMT or EXEC CICS SET SYSTEM (time) command. It is defined in units of
milliseconds, rounded down to the nearest multiple of ten. The default is 1000
(that is, one second; usually too low).
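For example (the value is illustrative), to set a 10-second region exit interval at
startup, and to change it while CICS is running:
   ICV=10000                      (system initialization parameter)
   CEMT SET SYSTEM TIME(10000)    (master terminal command)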
How monitored
The region exit interval can be monitored by the frequency of CICS
operating-system WAITs that are counted in “Dispatcher domain” on page 367.
LLA manages modules (system or application) whose library names you have put
in the appropriate CSVLLAxx member of SYS1.PARMLIB.
There are two optional parameters in this member that affect the management of
specified libraries:
FREEZE
Tells the system always to use the copy of the directory that is maintained
in the LLA address space.
NOFREEZE
Tells the system always to search the directory that resides in DASD
storage.
However, FREEZE and NOFREEZE are only relevant when LLACOPY is not used.
When CICS issues a LOAD and specifies the directory entry (DE), it bypasses the
LLA directory processing, but determines from LLA whether the program is
already in VLF or must be fetched from DASD. For more information about the
FREEZE and NOFREEZE options, see the OS/390 MVS Initialization and Tuning
Guide.
The use of LLA to manage a very busy DFHRPL library can show two distinct
benefits:
1. Improved transaction response time
2. Better DASD utilization.
In addition to any USER-defined CICS DFHRPL libraries, LLA also manages the
system LNKLST. It is likely that staging some modules from the LNKLST could
have more effect than staging modules from the CICS libraries. LLA makes
decisions on what is staged to VLF only after observing the fetch activity in the
system for a certain period. For this reason it is possible to see I/O against a
program library even when it is managed by LLA.
Another contributing factor for continued I/O is the system becoming “MAXVIRT
constrained”, that is, the sum of bytes from the working set of modules is greater
than the MAXVIRT parameter for the LLA class of VLF objects. You can increase
this value by changing it in the COFVLF member in SYS1.PARMLIB. A value too
small can cause excessive movement of that VLF object class; a value too large can
cause excessive paging; both may increase the DASD activity significantly.
See the OS/390 MVS Initialization and Tuning Guide manual for information on LLA
and VLF parameters.
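As a sketch (the library name and MAXVIRT value are illustrative), a CSVLLAxx
member might name a busy DFHRPL library, and the COFVLFxx member sizes
the LLA class of VLF objects:
   CSVLLAxx:  LIBRARIES(CICS.PROD.SDFHLOAD)
              FREEZE(CICS.PROD.SDFHLOAD)
   COFVLFxx:  CLASS NAME(CSVLLA) EMAJ(LLA) MAXVIRT(4096)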
Effects of LLACOPY
CICS can use one of two methods for locating modules in the DFHRPL
concatenation. Either a build link-list (BLDL) macro or an LLACOPY macro is
issued to return the directory information to pass to the load request. Which macro
is issued depends upon the LLACOPY system initialization parameter and the
reason for locating the module.
The LLACOPY macro is used to update the LLA-managed directory entry for a
module or a list of modules. If a module which is LLA managed has an LLACOPY
issued against it, it results in a BLDL with physical I/O against the DCB specified.
If the directory information does not match that which is stored within LLA, the
LLA tables are then updated, keeping both subsystems synchronized. While this
activity takes place an ENQ for the resource SYSZLLA1.update is held. This is then
unavailable to any other LLACOPY request on the same MVS system and therefore
another LLACOPY request is delayed until the ENQ is released.
The BLDL macro also returns the directory information. When a BLDL is issued
against an LLA-managed module, the information returned is from the LLA
copy of the directory, if one exists. It does not necessarily result in physical I/O to
the data set, and may therefore be out of step with the actual data set. BLDL does
not require the SYSZLLA1.update ENQ, and is therefore less prone to being
delayed by BLDLs on the same MVS system. Note that it is not advisable to use
the NOCONNECT option when invoking the BLDL macro, because the DFHRPL
concatenation may contain partitioned data set extended (PDSE) data sets.
PDSEs can contain more function than PDSs, but CICS may not recognize some of
this function. PDSEs also use more virtual storage.
If you code LLACOPY=NO, CICS never issues an LLACOPY macro. Instead, each
time the RPL dataset is searched for a module, a BLDL is issued.
DASD tuning
The main solutions to DASD problems are to:
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.
Take the following figures as guidelines for best DASD response times for online
systems:
v Channel busy: less than 30% (with CHP ids this can be higher)
v Device busy: less than 35% for randomly accessed files
v Average response time: less than 20 milliseconds.
Aim for multiple paths to disk controllers because this allows dynamic path
selection to work.
For TCAM, the DFHTCT TYPE=TERMINAL TIOAL=value macro is the only way
to adjust this value.
One value defining the minimum size is used for non-SNA devices, while two
values specifying both the minimum and maximum size are used for SNA devices.
This book does not discuss the performance aspects of the CICS Front End
Programming Interface. See the CICS Front End Programming Interface User’s Guide
for more information.
Effects
When value1,0 is specified for IOAREALEN, value1 is the minimum size of the
terminal input/output area that is passed to an application program when a
RECEIVE command is issued. If the size of the input message exceeds value1, the
area passed to the application program is the size of the input message.
When value1, value2 is specified, value1 is the minimum size of the terminal
input/output area that is passed to an application program when a RECEIVE
command is issued. Whenever the size of the input message exceeds value1, CICS
will use value2. If the input message size exceeds value2, the node abnormal
condition program sends an exception response to the terminal.
If you specify ATI(YES), you must specify an IOAREALEN of at least one byte.
Limitations
Real storage can be wasted if the IOAREALEN (value1) or TIOAL value is too
large for most terminal inputs in the network. If IOAREALEN (value1) or TIOAL
is smaller than most initial terminal inputs, excessive GETMAIN requests can
occur, resulting in additional processor requirements, unless IOAREALEN(value1)
or TIOAL is zero.
Recommendations
IOAREALEN(value1) or TIOAL should be set to a value that is slightly larger than
the average input message length for the terminal. The maximum value that may
be specified for IOAREALEN/TIOAL is 32767 bytes.
If a value of nonzero is required, the best size to specify is the most commonly
encountered input message size. A multiple of 64 bytes minus 21 allows for SAA
requirements and ensures good use of operating system pages.
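For example (an illustrative value only): if most input messages are around 100 bytes, the nearest suitable size is 2 x 64 - 21 = 107 bytes, so IOAREALEN(107) allows for the SAA requirements while making good use of operating system pages.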
For VTAM, you can specify two values if inbound chaining is used. The first value
should be the length of the normal chain size for the terminal, and the second
value should be the maximum size of the chain. The length of the TIOA presented
to the task depends on the message length and the size specified for the TIOA.
(See the example in Figure 30.)
Avoid specifying too large a value1, for example, by matching it to the size of the
terminal display screen. This area is used only as input. If READ with SET is
specified, the same pointer is used by applications for an output area.
If too small a value is specified for value1, extra processing time is required for
chain assembly, or data is lost if inbound chaining is not used.
In general, a value of zero is best because it causes the optimum use of storage and
eliminates the second GETMAIN request. If automatic transaction initiation (ATI) is
used for that terminal, a minimum size of one byte is required.
How implemented
For VTAM, the TIOA value is specified in the CEDA DEFINE TYPETERM
IOAREALEN attribute.
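For example, a minimal sketch (the TYPETERM and GROUP names, and the values, are illustrative only):

   CEDA DEFINE TYPETERM(VDSPLY) GROUP(TERMDEFS)
        IOAREALEN(107,2048)
        ATI(YES)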
For TCAM, the TIOAL value can be specified in the terminal control table (TCT)
TYPE=TERMINAL operand. TIOAL defaults to the INAREAL value specified in
the TCT TYPE=LINE operand.
How monitored
RMF and NetView Performance Monitor (NPM) can be used to show storage usage
and message size characteristics in the network.

Receive-any input areas (RAMAX)
Storage for the RAIAs, which is above the 16MB line, is allocated by the CICS
terminal control program during CICS initialization, and remains allocated for the
entire execution of the CICS job step. The size of this storage is the product of the
RAPOOL and RAMAX system initialization parameters.
Effects
VTAM attempts to put any incoming RU into the initial receive-any input area,
which has the size of RAMAX. If this is not large enough, VTAM indicates this,
and also states how many extra bytes are waiting that cannot be accommodated.
RAMAX is the largest size of any RU that CICS can take directly in the receive-any
command, and is a limit against which CICS compares VTAM’s indication of the
overall size of the RU. If there is more, VTAM saves it, and CICS gets the rest in a
second request.
With a small RAMAX, you reduce the virtual storage taken up in RAIAs but risk
more processor usage in VTAM retries to get any data that could not fit into the
RAIA.
For many purposes, the default RAMAX value of 256 bytes is adequate. If you
know that many incoming RUs are larger than this, you can always increase
RAMAX to suit your system.
For individual terminals, there are separate parameters that determine how large
an RU is going to be from that device. It makes sense for RAMAX to be at least as
large as the largest CEDA SENDSIZE for any frequently-used terminals.
Limitations
Real storage can be wasted with a high RAMAX value, and additional processor
time can be required with a low RAMAX value. If the RAMAX value is set too
low, extra processor time is needed to acquire additional buffers to receive the
remaining data. Because most inputs are no longer than 256 bytes, the default
value of 256 is normally adequate.
Do not specify a RAMAX value that is less than the RUSIZE (from the CINIT) for
a pipeline terminal because pipelines cannot handle overlength data.
Recommendations
Code RAMAX with the size in bytes of the I/O area allocated for each receive-any
request issued by CICS. The maximum value is 32767.
Set RAMAX to be slightly larger than your CICS system input messages. If you
know the message length distribution for your system, set the value to
accommodate the majority of your input messages.
In any case, the size required for RAMAX need only take into account the first (or
only) RU of a message. Thus, messages sent using SNA chaining do not require
RAMAX based on their overall chain length, but only on the size of the constituent
RUs.
Receive-any input areas are taken from a fixed length subpool of storage. A size of
2048 may appear to be adequate for two such areas to fit on one 4KB page, but
only 4048 bytes are available in each page, so only one area fits on one page. A
size of 2024 should be defined to ensure that two areas, including page headers, fit
on one page.
How implemented
RAMAX is a system initialization parameter.
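For example, following the page-boundary guidance above, a SIT override of:

   RAMAX=2024

allows two receive-any input areas, including page headers, to fit in one 4KB page.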
How monitored
The size of RUs or chains in a network can be identified with a VTAM line or
buffer trace. The maximum size RUs are defined in the CEDA SENDSIZE attribute.
Effects
Initially, task input from a terminal or session is received by the VTAM access
method and is passed to CICS if CICS has a receive-any request outstanding.
For each receive-any request, a VTAM request parameter list (RPL), a receive-any
control element (RACE), and a receive-any input area (RAIA), whose size is the
value specified by RAMAX (see “Receive-any input areas (RAMAX)” on page 203),
are set aside.
The total area set aside for VTAM receive-any operations is the RAPOOL value
multiplied by the sum of RAMAX and the lengths of the RPL and the RACE.
If HPO=YES, both RACE and RPL are above the 16MB line.
In general, a number of input messages up to the RAPOOL value are all processed
in one dispatch of the terminal control task. Because the processing of a
receive-any request is a short operation, at times more messages than the RAPOOL
value may be processed in one dispatch of terminal control. This happens when a
receive-any request completes before the terminal control program has finished
processing and there are additional messages from VTAM.
The pool is used only for the first input to start a task; it is not used for output or
conversational input. VTAM posts the event control block (ECB) associated with
the receive any input area. CICS then moves the data to the terminal I/O area
(TIOA) ready for task processing. The RAIA is then available for reuse.
Where useful
Use the RAPOOL operand in networks that use the VTAM access method for
terminals.
Limitations
If the RAPOOL value is set too low, this can result in terminal messages not being
processed in the earliest dispatch of the terminal control program, thereby
inducing transaction delays during high-activity periods. For example, if you use
the default and five terminal entries want to start up tasks, three tasks may be
delayed for at least the time required to complete the VTAM receive-any request
and copy the data and RPL. In general, no more than 5 to 10% of all receive-any
processing should be at the RAPOOL ceiling, with none being at the RAPOOL
ceiling if there is sufficient storage.
Recommendations
Whether RAPOOL is significant or not depends on the environment of the CICS
system: whether, for example, HPO is being used.
In some cases, it may sometimes be more economical for VTAM to store the
occasional peak of messages in its own areas rather than for CICS itself to have a
large number of RAIAs, many of which are unused most of the time.
CICS maintains a VTAM RECEIVE ANY for n of the RPLs, where n is either the
RAPOOL value, or the MXT value minus the number of currently active tasks,
whichever is the smaller. See the CICS System Definition Guide for more information
about these SIT parameters.
The RAPOOL value you set depends on the number of sessions, the number of
terminals, and the ICVTSD value (see page 211) in the system initialization table
(SIT). Initially, for non-HPO systems, you should set RAPOOL to 1.5 times your
peak local transaction rate per second plus the autoinstall rate (see note 2 below).
This can then be
adjusted by analyzing the CICS VTAM statistics and by resetting the value to the
maximum RPLs reached.
For HPO systems, a small value (5 or less) is usually sufficient when specified
through value2 of the RAPOOL system initialization parameter. Thus, RAPOOL=20,
for example, can be specified as either RAPOOL=(20) or RAPOOL=(20,5) to achieve
the same effect.
How implemented
RAPOOL is a system initialization parameter.
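For example (illustrative values, following the discussion above, where value2 is the value used in HPO systems):

   RAPOOL=(20,5)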
How monitored
The CICS VTAM statistics contain values for the maximum number of RPLs posted
on any one dispatch of the terminal control program, and the number of times the
RPL maximum was reached. This maximum value may be greater than the
RAPOOL value if the terminal control program is able to reuse an RPL during one
dispatch. See “VTAM statistics” on page 51 for more information.
2. The RAPOOL figure does not include MRO sessions, so you should set RAPOOL to a low value in application- or file-owning
regions (AORs or FORs).
Effects
HPO bypasses some of the validating functions performed by MVS on I/O
operations, and implements service request block (SRB) scheduling. This shortens
the instruction pathlength and, because of the SRB scheduling, allows some
concurrent processing of the VTAM operations on MVS images. This makes HPO
useful in a multiprocessor environment, but not in a single-processor environment.
Limitations
HPO requires CICS to be authorized, and some risks to MVS integrity are
involved because a user-written module could be made to replace one of the CICS
system initialization routines and run in authorized mode. This risk can be reduced
by RACF-protecting the CICS SDFHAUTH data set.
Use of HPO saves processor time, and does not increase real or virtual storage
requirements or I/O contention. The only expense of HPO is the potential security
exposure that arises from reduced validation.
Recommendations
The general recommendation is that all production systems with vetted
applications can use HPO. It is totally application-transparent and introduces no
function restrictions while providing a reduced pathlength through VTAM. In the
case of VTAM, the reduced validation does not induce any integrity loss for the
messages.
How implemented
The SVCs and use of HPO are specified in the system initialization table (SIT) and,
if the default SVC numbers are acceptable, no tailoring of the system is required.
How monitored
There is no direct measurement of HPO. One way to tell if it is working is to take
detailed measurements of processor usage with HPO turned on (SIT option) and
with it turned off. Depending on the workload, you may not see much difference.
Another way to check whether it is working is that you may see a small increase
in the SRB scheduling time with HPO turned on.
RMF can give general information on processor usage. An SVC trace can show
how HPO was used.
Note that you should take care when using HPO in a system that is being used
for early testing of a new application or CICS code (a new release or PUT). Much
of the pathlength reduction is achieved by bypassing control block verification
code in VTAM. Untested code might corrupt the control blocks that CICS
passes to VTAM, and unvalidated applications can lead to security exposures.
Effects
One of the options in Systems Network Architecture (SNA) is whether the
messages exchanged between CICS and a terminal are to be in definite or
exception response mode. Definite response mode requires both the terminal and
CICS to provide acknowledgment of receipt of messages from each other on a
one-to-one basis.
SNA also ensures message delivery through synchronous data link control (SDLC),
so definite response is not normally required. Specifying message integrity
(MSGINTEG) causes the sessions for which it is specified to operate in definite
response mode.
In other cases, the session between CICS and a terminal operates in exception
response mode, and this is the normal case.
In SNA, transactions are defined within brackets. A begin bracket (BB) command
defines the start of a transaction, and an end bracket (EB) command defines the
end of that transaction. Unless CICS knows ahead of time that a message is the last
of a transaction, it must send the EB separately after the last message when the
transaction terminates. If CICS does know, the EB, which is an SNA command, can
be sent with the last message itself, eliminating one required transmission to the
terminal.
Specifying the ONEWTE option for a transaction implies that only one output
message is to be sent to the terminal by that transaction, and allows CICS to send
the EB along with that message. Only one output message is allowed if ONEWTE
is specified and, if a second message is sent, the transaction is abended.
The second way to allow CICS to send the EB with a terminal message is to code
the LAST option on the last terminal control or basic mapping support SEND
command in a program. Multiple SEND commands can be used, but the LAST
option must be coded for the final SEND in a program.
The third (and most common) way is to issue SEND without WAIT as the final
terminal communication. The message is then sent as part of task termination.
Where useful
The above options can be used in all CICS systems that use VTAM.
Limitations
The MSGINTEG option causes additional transmissions to the terminal.
Transactions remain in CICS for a longer period, and tie up virtual storage and
access to resources (such as enqueues) for longer.
When MSGINTEG is specified, the TIOA remains in storage until the response is
received from the terminal. This option can increase the virtual storage
requirements for the CICS region because of the longer duration of the storage
needs.
How implemented
With resource definition online (RDO) using the CEDA transaction, protection can
be specified in the PROFILE definition by means of the MSGINTEG and ONEWTE
options. The MSGINTEG option is used with SNA LUs only. See the CICS Resource
Definition Guide for more information about defining a PROFILE.
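For example, a minimal sketch (the PROFILE and GROUP names are illustrative only):

   CEDA DEFINE PROFILE(PRFMSGI) GROUP(PROFDEFS)
        MSGINTEG(YES)
        ONEWTE(YES)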
How monitored
You can monitor the use of the above options from a VTAM trace by examining
the exchanges between terminals and CICS and, in particular, by examining the
contents of the request/response header (RH).
Input chain size and characteristics are normally dictated by the hardware
requirements of the terminal in question, and so the CEDA BUILDCHAIN and
RECEIVESIZE attributes have default values which depend on device attributes.
The size of an output chain is specified by the CEDA SENDSIZE attribute.
Effects
Because the network control program (NCP) also segments messages into 256-byte
blocks for normal LU Type 0, 1, 2, and 3 devices, a SENDSIZE value of zero
eliminates the overhead of output chaining. A value of 0 or 1536 is required for
local devices of this type.
If you specify the CEDA SENDSIZE attribute for intersystem communication (ISC)
sessions, this must match the CEDA RECEIVESIZE attribute in the other system.
The CEDA SENDSIZE attribute or TCT BUFFER operand controls the size of the
SNA element that is to be sent, and the CEDA RECEIVESIZEs need to match so
that there is a corresponding buffer of the same size able to receive the element.
Where useful
Chaining can be used in systems that use VTAM and SNA terminals of types that
tolerate chaining.
Limitations
If you specify a low CEDA SENDSIZE value, this causes additional processing and
real and virtual storage to be used to break the single logical message into multiple
parts.
Chaining may be required for some terminal devices. Output chaining can cause
flickering on display screens, which can annoy users. Chaining also causes
additional I/O overhead between VTAM and the NCP by requiring additional
VTAM subtasks and STARTIO operations. This additional overhead is eliminated
with applicable ACF/VTAM releases by making use of the large message
performance enhancement option (LMPEO).
Recommendations
The CEDA RECEIVESIZE value for IBM 3274-connected display terminals should
be 1024; for IBM 3276-connected display terminals it should be 2048. These values
give the best line characteristics while keeping processor usage to a minimum.
How implemented
Chaining characteristics are specified in the CEDA DEFINE TYPETERM statement
with the SENDSIZE, BUILDCHAIN, and RECEIVESIZE attributes.
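For example, a sketch using the values recommended above for an IBM 3274-connected display (the names are illustrative only):

   CEDA DEFINE TYPETERM(T3270R) GROUP(TERMDEFS)
        BUILDCHAIN(YES)
        RECEIVESIZE(1024)
        SENDSIZE(1536)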
How monitored
Use of chaining and chain size can be determined by examining a VTAM trace.
You can also use the CICS internal and auxiliary trace facilities, in which the VIO
ZCP trace shows the chain elements. Some of the network monitor tools such as
NetView Performance Monitor (NPM) give this data.
Each concurrent logon/logoff requires storage in the CICS dynamic storage areas
for the duration of that processing.
Where useful
The OPNDLIM system initialization parameter can be used in CICS systems that
use VTAM as the terminal access method.
The OPNDLIM system initialization parameter can also be useful if there are times
when all the user community tends to log on or log off at the same time, for
example, during lunch breaks.
Limitations
If too low a value is specified for OPNDLIM, real and virtual storage requirements
are reduced within CICS and VTAM buffer requirements may be cut back, but
session initializations and terminations take longer.
Recommendations
Use the default value initially and make adjustments if statistics indicate that too
much storage is required in your environment or that the startup time (DEFINE
TYPETERM AUTOCONNECT attribute in CEDA) is excessive.
OPNDLIM should be set to a value not less than the number of LUs connected to
any single VTAM line.
How implemented
OPNDLIM is a system initialization parameter.
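For example, as a SIT override (the value is illustrative; as recommended below, choose a value not less than the number of LUs connected to any single VTAM line):

   OPNDLIM=60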
How monitored
Logon and logoff activities are not reported directly by CICS or any measurement
tools, but can be analyzed using the information given in a VTAM trace or VTAM
display command.
On CICS non-VTAM systems, the delay value specifies how long the terminal
control program must wait after an application terminal request, before it carries
out a TCT scan. The value thus controls batching and delay in the associated
processing of terminal control requests. In a low-activity system, it controls the
dispatching of the terminal control program; this last case arises from the way that
CICS scans active tasks.
The batching of requests reduces processor time at the expense of longer response
times. On CICS VTAM systems, it influences how quickly the terminal control
program completes VTAM request processing, especially when the MVS high
performance option (HPO) is being used.
Effects
VTAM
In VTAM networks, a low ICVTSD value does not cause full TCT scans because
the input from or output to VTAM terminals is processed from the activate queue
chain, and only those terminal entries are scanned.
With VTAM terminals, CICS uses bracket protocol to indicate that the terminal is
currently connected to a transaction. The bracket is started when the transaction is
initiated, and ended when the transaction is terminated. This means that there
could be two outputs to the terminal per transaction: one for the data sent and one
when the transaction terminates containing the end bracket. In fact, only one
output is sent (except for WRITE/SEND with WAIT and definite response). CICS
holds the output data until the next terminal control request or termination. In this
way it saves processor cycles and line utilization by sending the message and end
bracket or change direction (if the next request was a READ/RECEIVE) together in
the same output message (PIU). When the system gets very busy, terminal control
is dispatched less frequently and becomes more dependent upon the value
specified in ICVTSD. Because CICS may not send the end bracket to VTAM for an
extended period of time, the life of a transaction can be extended. This keeps
storage allocated for that task for longer periods and potentially increases the
amount of virtual storage required for the total CICS dynamic storage areas.
Non-VTAM
ICVTSD is the major control on the frequency of full terminal control table (TCT)
scanning of non-VTAM terminals. In active systems, a full scan is done
approximately once every ICVTSD. The average extra delay before sending an
output message should be about half this period.
All networks
The ICVTSD parameter can be changed in the system initialization table (SIT) or
through JCL parameter overrides. If you are having virtual storage constraint
problems, it is highly recommended that you reduce the value specified in
ICVTSD. A value of zero causes the terminal control task to be dispatched most
frequently. If you also have a large number of non-VTAM terminals, this may
increase the amount of nonproductive processor cycles. A value of 100 to 300
milliseconds may be more appropriate for that situation. In a pure VTAM
environment, however, the overhead is not significant, unless the average
transaction has a very short pathlength, and ICVTSD should be set to zero for a
better response time and best virtual storage usage.
Where useful
The ICVTSD system initialization parameter can be used in all except very
low-activity CICS systems.
Limitations
In TCAM systems, a low ICVTSD value can cause excessive processor time to be
used in slower processor units, and can delay the dispatch of user tasks because
too many full TCT scans have to be done. A high ICVTSD value can increase
response time by an average of one half of the ICVTSD value, and can tie up
resources owned by the task because the task takes longer to terminate. This
applies to conversational tasks.
In VTAM systems, a low value adds the overhead of scanning the activate queue
TCTTE chain, which is normally a minor consideration. A high value in
high-volume systems can increase task life and tie up resources owned by that task
for a longer period of time; this can be a significant consideration.
A low, nonzero value of ICVTSD can cause CICS to be dispatched more frequently,
which increases the overhead of performance monitoring.
Recommendations
Set ICVTSD to a value less than the region exit time interval (ICV), which is also in
the system initialization table (see page 192). Use the value of zero in an
environment that contains only VTAM terminals and consoles, unless your
workload consists of many short transactions. ICVTSD=0 in a VTAM terminal-only
environment is not recommended for a CICS workload with low terminal activity
but high task activity, because periods of low terminal activity can lead to delays
in CSTP being dispatched. Setting ICVTSD to a value in the range 100 through 500
causes CSTP to be dispatched regularly, which resolves this. For non-VTAM
systems, specify the value of zero only for small networks (1 through 30 terminals).
The recommended absolute minimum level, for systems that are not “pure”
VTAM, is approximately 250 milliseconds or, in really high-performance,
high-power systems that are “pure” VTAM, 100 milliseconds.
How implemented
The ICVTSD system initialization parameter is defined in units of milliseconds.
Use the commands CEMT or EXEC CICS SET SYSTEM SCANDELAY (nnnn) to
reset the value of ICVTSD.
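For example (an illustrative value):

   ICVTSD=500                        in the SIT or as a startup override
   CEMT SET SYSTEM SCANDELAY(500)    to change the value online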
In reasonably active systems, a nonzero ICVTSD virtually replaces ICV (see page
194) because the time to the next TCT full scan (non-VTAM) or sending of output
requests (VTAM) is the principal influence on operating system wait duration.
How monitored
Use RMF to monitor task duration and processor requirements. The dispatcher
domain statistics report the value of ICVTSD.
Effects
If the preceding transaction fails to terminate during the NPDELAY interval, the
X'87' unsolicited-input error condition is raised.
Where useful
When several queues are defined for TCAM-to-CICS processing, CICS can suspend
the acceptance of input messages from one or more of the queues without
completely stopping the flow of input from TCAM to CICS.
Choosing an appropriate value for NPDELAY is a matter of tuning. Even with the
“cascade” list approach, some messages may be held up behind an unsolicited
message. The objective should be to find the minimum value that can be specified
for NPDELAY which is sufficient to eliminate the unsolicited-input errors.
Limitations
Some additional processor cycles are required to process the exit code, and the
coding of the exit logic also requires some effort. Use of a compression exit reduces
the storage requirements of VTAM or TCAM and NCP, and reduces line
transmission time.
Recommendations
The simplest operation is to replace redundant characters, especially blanks, with a
repeat-to-address sequence in the data stream for 3270-type devices.
Note: The repeat-to-address sequence is not handled very quickly on some types
of 3270 cluster controller. In some cases, alternatives may give superior
performance. For example, instead of sending a repeat-to-address sequence
for a series of blanks, you should consider sending an ERASE and then
set-buffer-address sequences to skip over the blank areas. This is satisfactory
if nulls are acceptable in the buffer as an alternative to blanks.
Another technique for reducing the amount of data transmitted is to turn off any
modified data tags on protected fields in an output data stream. This eliminates
the need for those characters to be transmitted back to the processor on the next
input message, but you should review application dependencies on those fields
before you try this.
There may be other opportunities for data compression in individual systems, but
you may need to investigate the design of those systems thoroughly before you
can implement them.
How monitored
The contents of output terminal data streams can be examined in either a VTAM or
TCAM trace.
The AIQMAX value does not limit the total number of devices that can be
autoinstalled.
Setting the restart delay to zero means that you do not want CICS to re-install the
autoinstalled terminal entries from the global catalog during emergency restart. In
this case, CICS does not write the terminal entries to the catalog while the terminal
is being autoinstalled. This can have positive performance effects on the following
processes:
Normal shutdown: CICS deletes autoinstalled terminal entries from the global
catalog data set (GCD) during normal shutdown unless they were not cataloged
(AIRDELAY=0) and the terminal has not been deleted. If the restart delay is set to
zero, CICS has not cataloged terminal entries when they were autoinstalled, so they
are not deleted. This can reduce normal shutdown time.
XRF takeover: The system initialization parameter AIRDELAY should not affect
XRF takeover. The tracking process still functions as before regardless of the value
of the restart delay. Thus, after a takeover, the alternate system still has all the
autoinstalled terminal entries. However, if a takeover occurs before the catchup
process completes, some of the autoinstalled terminals have to log on to CICS
again. The alternate CICS system has to rely on the catalog to complete the
catchup process and, if the restart delay is set to zero in the active system, the
alternate system is not able to restore the autoinstalled terminal entries that have
not been tracked. Those terminals have to log on to the new CICS system, rather
than being switched or rebound after takeover.
You have to weigh the risk of having some terminal users log on again because
tracking has not completed, against the benefits introduced by setting the restart
delay to zero. Because catchup takes only a few minutes, the chance of such a
takeover occurring is usually small.
In general, setting the delete delay to a nonzero value can improve the
performance of CICS when many autoinstalled terminals are logging on and off
during the day. However, this does mean that unused autoinstalled terminal entry
storage is not freed for use by other tasks until the delete delay interval has
expired. This parameter provides an effective way of defining a terminal whose
storage lifetime is somewhere between that of an autoinstalled terminal and a
statically defined terminal.
Setting the delete delay to a nonzero value can have different effects
depending on the value of the restart delay:
Nonzero restart delay: When the restart delay is nonzero, CICS catalogs
autoinstalled terminal entries in the global catalog.
If the delete delay is nonzero as well, CICS retains the terminal entry so that it is
re-used when the terminal logs back on. This can eliminate the overhead of:
v Deleting the terminal entry in virtual storage
v An I/O to the catalog and recovery log
v Re-building the terminal entry when the terminal logs on again.
Zero restart delay: When the restart delay is zero, CICS does not catalog
autoinstalled terminal entries. If the delete delay is nonzero, CICS retains the
terminal entry so that it is re-used when the terminal logs back on. This can save
the overhead of deleting the terminal entry in virtual storage and of rebuilding it
when the terminal logs on again.
Effects
You can control the use of resource by autoinstall processing in three ways:
1. By using the transaction class limit to restrict the number of autoinstall tasks
that can concurrently exist (see page 288).
2. By using the CATA and CATD transactions to install and delete autoinstall
terminals dynamically. If you have a large number of devices autoinstalled,
shutdown can fail due to the MXT system initialization parameter being
reached or CICS becoming short on storage. To prevent this possible cause of
shutdown failure, you should consider putting the CATD transaction in a class
of its own to limit the number of concurrent CATD transactions.
3. By specifying AIQMAX to limit the number of devices that can be queued for
autoinstall. This protects against abnormal consumption of virtual storage by
the autoinstall process, caused as a result of some other abnormal event.
If this limit is reached, the AIQMAX system initialization parameter affects the
LOGON and BIND processing by CICS. CICS requests VTAM to stop passing
LOGON and BIND requests to CICS. VTAM holds such requests until CICS
indicates that it can accept further LOGONs and BINDs (this occurs when CICS
has processed a queued autoinstall request).
Recommendations
If the autoinstall process is noticeably slowed down by the AIQMAX limit, raise it.
If the CICS system shows signs of running out of storage, reduce the AIQMAX
limit. If possible, set the AIQMAX system initialization parameter to a value higher
than that reached during normal operations.
A value of zero for both restart delay and delete delay is the best overall setting
for many systems from an overall performance and virtual-storage usage point of
view.
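For example, a hedged sketch of SIT overrides implementing that recommendation, assuming the restart delay and delete delay are set with the AIRDELAY and AILDELAY parameters (the AIQMAX value is illustrative):

   AIRDELAY=0     restart delay of zero: entries are not cataloged
   AILDELAY=0     delete delay of zero: entries are deleted at logoff
   AIQMAX=100     limit on queued autoinstall requests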
Because a considerable number of messages are sent to transient data during logon
and logoff, the performance of these output destinations should also be taken into
consideration.
How monitored
Monitor the autoinstall rate during normal operations by inspecting the autoinstall
statistics regularly.

CICS Web performance in a sysplex
The dynamic routing facility is extended to provide mechanisms for dynamically
routing program-link requests received from outside CICS. The target program of
a CICS Web application can be run anywhere in a sysplex by dynamically routing
the EXEC CICS LINK to the target application. Web bridge transactions should
either not be routed, or always be routed to the same region, because they have
major affinities. When CICSPlex SM is used to route the program-link requests, the
transaction ID becomes much more significant because CICSPlex SM’s routing logic
is transaction-based. CICSPlex SM routes each DPL request according to the rules
specified for its associated transaction. This dynamic routing means that there is
extra pathlength for both routed and nonrouted links.
Analyzer and converter programs must run in the same region as the instance of
DFHWBBLI which invokes them, which in the case of CICS Web support, is the
CICS region on which the HTTP request is received.
If the Web API is being used by the application program to process the HTTP
request and build the HTTP response, the application program must also run in
the same CICS region as the instance of DFHWBBLI which is linking to it.
To achieve optimum performance when using templates, you should ensure you
have defined the template as a DOCTEMPLATE and installed the definition before
using it, especially when using the DFHWBTL program. If the template is not
preinstalled when this program is used, DFHWBTL attempts to install it for you,
assuming that it is a member of the partitioned data set referenced by the
DFHHTML DD statement.
The fastest results can be achieved by storing your templates as CICS load
modules. For more information about this, see the CICS Internet Guide. These
modules are managed like other CICS loaded programs and may be flushed out by
program compression when storage is constrained.
When the CICS Web Business Logic Interface is used, the TS queue prefix is
always DFHWEB.

CICS Web support of HTTP 1.0 persistent connections
In most circumstances CICS Web performance will be improved by enabling
support of the HTTP 1.0 Keepalive header.
To enable CICS support of this header, specify NO or a numeric value for the
SOCKETCLOSE attribute on the relevant TCPIPSERVICE definition. If NO or a
numeric value is specified, and the incoming HTTP request contains the Keepalive
header, CICS keeps the socket open in order to allow further HTTP requests to be
sent by the Web browser. If a numeric value is specified, the interval between
receipt of the last HTTP request and arrival of the next must be less than the
interval specified on the TCPIPSERVICE, else CICS closes the socket. Some HTTP
proxy servers do not allow the HTTP 1.0 Keepalive header to be passed to the end
server (in this case, CICS), so Web browsers that wish to use this header may not
be able to pass it to CICS if the HTTP request arrives via such an HTTP proxy
server.

CICS Web security
If Secure Sockets Layer is used to make CICS Web transactions more secure, there
will be a significant increase in pathlength for these transactions. This increase can
be minimized by use of the HTTP 1.0 Keepalive header. Keeping the socket open
removes the need to perform a full SSL handshake on the second and any
subsequent HTTP request. If CICS or the Web browser closes the socket, the SSL
handshake has to be executed again.

CICS Web 3270 support
Use of the HTTP 1.0 Keepalive header can improve the performance of CICS Web
3270 support, by removing the need for the Web browser to open a new sockets
connection for each leg of the 3270 conversation or pseudoconversation.

Chapter 18. VSAM and file control

The costs of assigning additional buffers and providing for concurrent operations
on data sets are the additional virtual and real storage that is required for the
buffers and control blocks.
Several factors influence the performance of VSAM data sets. The rest of this
section reviews these and the following sections summarize the various related
parameters of file control.
Note that, in this section, a distinction is made between “files” and “data sets”:
v A “file” means a view of a data set as defined by an installed CICS file resource
definition and a VSAM ACB.
v A “data set” means a VSAM “sphere”, including the base cluster with any
associated AIX® paths.
CICS provides separate LSR buffer pools for data and index records. If only data
buffers are specified, only one set of buffers is built and used for both data and
index records.
LSR files share a common pool of buffers and a common pool of strings (that is,
control blocks supporting the I/O operations). Other control blocks define the file
and are unique to each file or data set. NSR files or data sets have their own set of
buffers and control blocks.
Some important differences exist between NSR and LSR in the way that VSAM
allocates and shares the buffers.
In NSR, the minimum number of data buffers is STRNO + 1, and the minimum
index buffers (for KSDSs and AIX paths) is STRNO. One data and one index buffer
are preallocated to each string, and one data buffer is kept in reserve for CI splits.
If there are extra data buffers, these are assigned to the first sequential operation;
they may also be used to speed VSAM CA splits by permitting chained I/O
operations. If there are extra index buffers, they are shared between the strings and
are used to hold high-level index records, thus providing an opportunity for saving
physical I/O.
Before issuing a read to disk when using LSR, VSAM first scans the buffers to
check if the control interval it requires is already in storage. If so, it may not have
to issue the read. This buffer “lookaside” can reduce I/O significantly.
The general recommendation is to use LSR for all VSAM data sets except where
you have one of the following situations:
v A file is very active but there is no opportunity for lookaside because, for
instance, the file is very large.
v High performance is required by the allocation of extra index buffers.
v Fast sequential browse or mass insert is required by the allocation of extra data
buffers.
v Control area (CA) splits are expected for a file, and extra data buffers are to be
allocated to speed up the CA splits.
If you have only one LSR pool, a particular data set cannot be isolated from others
using the same pool when it is competing for strings, and it can only be isolated
when it is competing for buffers by specifying unique CI sizes. In general, you get
more self-tuning effects by running with one large pool, but it is possible to isolate
busy files from the remainder or give additional buffers to a group of high
performance files by using several pools. It is possible that a highly active file has
more successful buffer lookaside and less I/O if it is set up as the only file in an
LSR subpool rather than using NSR. Also the use of multiple pools eases the
restriction of 255 strings for each pool.
Number of strings
The next decision to be made is the number of concurrent accesses to be supported
for each file and for each LSR pool.
VSAM requires one or more strings for each concurrent file operation. For
nonupdate requests (for example, a READ or BROWSE), an access using a base
needs one string, and an access using an AIX needs two strings (one to hold
position on the AIX and one to hold position on the base data set). For update
requests where no upgrade set is involved, a base still needs one string, and a path
two strings. For update requests where an upgrade set is involved, a base needs
1+n strings and a path needs 2+n strings, where n is the number of members in
the upgrade set (VSAM needs one string per upgrade set member to hold
position). Note that, for each concurrent request, VSAM can reuse the n strings
required for upgrade set processing because the upgrade set is updated serially.
See “CICS calculation of LSR pool parameters” on page 231.
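As a worked example: a READ UPDATE issued through an AIX path whose base has two AIXs in the upgrade set needs 2 + 2 = 4 strings while the request is in flight, whereas a simple READ of the base data set needs only one string.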
Note: There are some special considerations for setting the STRINGS value for an
ESDS file (see “Number of strings considerations for ESDS files” on
page 229).
For LSR, it is possible to specify the precise numbers of strings, or to have CICS
calculate the numbers. The number specified in the LSR pool definition is the
actual number of strings in the pool. If CICS is left to calculate the number of
strings, it derives the pool STRINGS from the RDO file definition and interprets
this, as with NSR, as the actual number of concurrent requests. (For an explanation
of CICS calculation of LSR pool parameters, see “CICS calculation of LSR pool
parameters” on page 231.)
You must decide how many concurrent read, browse, updates, mass inserts, and so
on you need to support.
If access to a file is read only with no browsing, there is no need to have a large
number of strings; just one may be sufficient. Note that, while a read operation
only holds the VSAM string for the duration of the request, it may have to wait for
the completion of an update operation on the same CI.
In general (but see “Number of strings considerations for ESDS files” on page 229),
where some browsing or updates are used, STRINGS should be set to 2 or 3
initially and CICS file statistics should be checked regularly to see the proportion
of wait-on-strings encountered. Wait-on-strings of up to 5% of file accesses would
usually be considered quite acceptable. You should not try, with NSR files, to keep
wait-on-strings permanently zero.
CICS manages string usage for both files and LSR pools. For each file, whether it
uses LSR or NSR, CICS limits the number of concurrent VSAM requests to the
STRINGS= specified in the file definition. For each LSR pool, CICS also prevents
more requests being concurrently made to VSAM than can be handled by the
strings in the pool. Note that, if additional strings are required for upgrade-set
processing at update time, CICS anticipates this requirement by reserving the
additional strings at read-for-update time. If there are not enough file or LSR pool
strings available, the requesting task waits until they are freed. The CICS statistics
give details of the string waits.
If you want to distribute your strings across tasks of different types, the transaction
classes may also be useful. You can use transaction class limits to control the
transactions issuing the separate types of VSAM request, and for limiting the
number of task types that can use VSAM strings, thereby leaving a subset of
strings available for other uses.
All placeholder control blocks must contain a field long enough for the largest key
associated with any of the data sets sharing the pool. Assigning one inactive file
that has a very large key (primary or alternate) into an LSR pool with many strings
may use excessive storage.

Number of strings considerations for ESDS files
If an ESDS is used as an ‘add-only’ file (that is, it is used only in write mode to
add records to the end of the file), a string number of 1 is strongly recommended.
Any string number greater than 1 can significantly affect performance, because of
exclusive control conflicts that occur when more than one task attempts to write to
the ESDS at the same time.
If an ESDS is used for both writing and reading, with writing, say, being 80% of
the activity, it is better to define two file definitions, using one file for writing and
the other for reading.
In general, direct I/O runs slightly more quickly when data CIs are small, whereas
sequential I/O is quicker when data CIs are large. However, with NSR files, it is
possible to get a good compromise by using small data CIs but also assigning extra
buffers, which leads to chained and overlapped sequential I/O. However, all the
extra data buffers get assigned to the first string doing sequential I/O.
VSAM functions most efficiently when its control areas are the maximum size, and
it is generally best to have data CIs larger than index CIs. Thus, typical CI sizes for
data are 4KB to 12KB and, for index, 1KB to 2KB.
In general, you should specify the size of the data CI for a file, but allow VSAM to
select the appropriate index CI to match. An exception to this is if key compression
turns out to be less efficient than VSAM expects it to be. In this case, VSAM may
select too small an index CI size. You may find an unusually high rate of CA splits
occurring with poor use of DASD space. If this is suspected, specify a larger index
CI.
In the case of LSR, there may be a benefit in standardizing on the CI sizes, because
this allows more sharing of buffers between files and thereby allows a lower total
number of buffers. Conversely, there may be a benefit in giving a file unique CI
sizes to prevent it from competing for buffers with other files using the same pool.
Try to keep CI sizes at 512, 1KB, 2KB, or any multiple of 4KB. Unusual CI sizes
like 26KB or 30KB should be avoided. A CI size of 26KB does not mean that
physical block size will be 26KB; the physical block size will most likely be 2KB in
this case (it is device-dependent).
Specify the number of data and index buffers for NSR using the DATABUFFER
and INDEXBUFFER parameters of the file definition. It is important to specify
sufficient index buffers. If a KSDS consists of just one control area (and, therefore,
just one index CI), the minimum number of index buffers (equal to STRINGS) is
sufficient. But when a KSDS is larger than this, at least one extra index buffer needs
to be specified so that at least the top-level index buffer is shared by all strings.
Further index buffers reduce index I/O to some extent.
Note that when the file is an AIX path to a base, the same INDEXBUFFERS (if the
base is a KSDS) and DATABUFFERS are used for AIX and base buffers (but see
“Data set name sharing” on page 232).
Allowing CICS to calculate the LSR parameters is easy but it requires additional
overhead (when the first file that needs the LSR pool is opened) to build the pool
because CICS must read the VSAM catalog for every file that is specified to use the
pool. Also it cannot be fine-tuned by specifying actual quantities of each buffer
size. When making changes to the size of an LSR pool, refer to the CICS statistics
before and after the change is made. These statistics show whether the proportion
of VSAM reads satisfied by buffer lookaside is significantly changed or not.
In general, you would expect to benefit more by having extra index buffers for
lookaside, and less by having extra data buffers. This is a further reason for
standardizing on LSR data and index CI sizes, so that one subpool does not have a
mix of index and data CIs in it.
Note: Data and index buffers are specified separately with the LSRPOOL
definition. Thus, there is not a requirement to use CI size to differentiate
between data and index values.
Note: If you have specified only buffers or only strings, CICS performs the
calculation for what you have not specified.
The following information helps you calculate the buffers required. A particular file
may require more than one buffer size. For each file, CICS determines the buffer
sizes required for:
v The data component
v The index component (if a KSDS)
v The data and index components for the AIX (if it is an AIX path)
v The data and index components for each AIX in the upgrade set (if any).
When this has been done for all the files that use the pool, the total number of
buffers for each size is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT in the
LSRPOOL definition. The SHARELIMIT parameter takes precedence.
v If necessary, increased to a minimum of three buffers.
v Rounded up to the nearest 4KB boundary.
Note: If the LSR pool is calculated by CICS and the data sets have been archived
by HSM, the opening of the first file that needs the LSR pool is delayed
while HSM recalls the data sets.
When the strings have been accumulated for all files, the total is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT
parameter in the LSR pool definition. The SHARELIMIT parameter takes
precedence.
v Reduced to 255 (the maximum number of strings allowed for a pool by VSAM).
v Increased to the largest specified STRINGS value for a particular file.
To avoid files failing to open because of the lack of adequate resources, you can
specify that CICS should include files opened in RLS mode when it is calculating
the size of an LSR pool using default values. To specify the inclusion of files
defined with RLSACCESS(YES) in an LSR pool being built using values that CICS
calculates, use the RLSTOLSR=YES system initialization parameter
(RLSTOLSR=NO is the default).
See the CICS System Definition Guide for more information about the RLSTOLSR
parameter.
DSN sharing is the default for files using both NSR and LSR. The only exception
to this default is made when opening a file that has been specified as read-only
(READ=YES or BROWSE=YES) and with DSNSHARING(MODIFYREQS) in the file
resource definition. CICS provides this option so that a read-only file (represented
by an ACB) can be isolated from the other users of the data set.
When the first member of a group of DSN-sharing NSR files is opened, CICS must
specify to VSAM the total number of strings to be allocated for all file entries in
the group, by means of the BSTRNO value in the ACB. VSAM builds its control
block structure at this time regardless of whether the first data set to be opened is
a path or a base. CICS calculates the value of BSTRNO used at the time of the
open by adding the STRINGS values in all the files that share the same
NSRGROUP= parameter.
If you do not provide the NSRGROUP= parameter, the VSAM control block
structure may be built with insufficient strings for later processing. This should be
avoided for performance reasons. In such a case, VSAM invokes the dynamic
string addition feature to provide the extra control blocks for the strings as they
are required, and the extra storage is not released until the end of the CICS run.
AIX considerations
For each AIX defined with the UPGRADE attribute, VSAM upgrades the AIX
automatically when the base cluster is updated.
For NSR, VSAM uses a special set of buffers associated with the base cluster to do
this. This set consists of two data buffers and one index buffer, which are used
serially for each AIX associated with a base cluster. It is not possible to tune this
part of the VSAM operation.
Care should be taken when specifying to VSAM that an AIX should be in the
upgrade set. Whenever a new record is added, an existing record deleted, or a
record updated with a changed attribute key, VSAM updates the AIXs in the
upgrade set. This involves extra processing and extra I/O operations.
Adding records to the end of a VSAM data set does not cause CI/CA splits.
Adding sequential records to anywhere but the end causes splits. An empty file
with a low-value dummy key tends to reduce splits; a high-value key increases the
number of splits.
Effects
The LSRPOOLID parameter specifies whether a file is to use LSR or NSR and, if
LSR, which pool.
Where useful
The LSRPOOLID parameter can be used in CICS systems with VSAM data sets.
Limitations
All files with the same base data set, except read-only files with
DSNSHARING(MODIFYREQS) specified in the file definition, must either all use
the same LSR pool or all use NSR.
Recommendations
See “VSAM considerations: general objectives” on page 225. Consider removing
files from an LSR pool.
How implemented
The resource usage is defined by the LSRPOOL definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.
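For example, a minimal sketch assigning a file to LSR pool 2 (the FILE and GROUP names are illustrative only):

   CEDA DEFINE FILE(ACCTFILE) GROUP(FILEDEFS)
        LSRPOOLID(2)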
Effects
INDEXBUFFERS and DATABUFFERS specify the number of index and data buffers
for an NSR file.
The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings) and efficient sequential operations and
CA splits. Providing extra buffers for high-level index records can reduce physical
I/O operations.
Buffer allocations above the 16MB line represent a significant part of the virtual
storage requirement of most CICS systems.
INDEXBUFFERS and DATABUFFERS have no effect if they are specified for files
using LSR.
Where useful
The INDEXBUFFERS and DATABUFFERS parameters should be used in CICS
systems that use VSAM NSR files in CICS file control.
Limitations
These parameters can be overridden by VSAM if they are insufficient for the
strings specified for the VSAM data set. The maximum specification is 255. A
specification greater than this will automatically be reduced to 255. Overriding of
VSAM strings and buffers should never be done by specifying the AMP= attribute
on the DD statement.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The INDEXBUFFERS and DATABUFFERS parameters are defined in the file
definition on the CSD. They correspond exactly to VSAM ACB parameters:
INDEXBUFFERS is the number of index buffers, DATABUFFERS is the number of
data buffers.
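For example, a sketch for an NSR KSDS honoring the minimums described earlier (one data buffer per string plus one for CI splits, and one index buffer per string plus one extra so the top-level index can be shared); the names and values are illustrative only:

   CEDA DEFINE FILE(NSRKSDS1) GROUP(FILEDEFS)
        LSRPOOLID(NONE)
        STRINGS(3)
        DATABUFFERS(4)
        INDEXBUFFERS(4)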
Effects
The BUFFERS parameter allows for exact definition of specific buffers for the LSR
pool.
The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings). It can also increase the chance of
successful buffer lookaside with the resulting reduction in physical I/O operations.
The number of buffers should achieve an optimum between increasing the I/O
saving due to lookaside and increasing the real storage requirement. This optimum
is different for buffers used for indexes and buffers used for data. Note that the
optimum buffer allocation for LSR is likely to be significantly less than the buffer
allocation for the same files using NSR.
Where useful
The BUFFERS parameter should be used in CICS systems that use VSAM LSR files
in CICS file control.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The BUFFERS parameter is defined in the file definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.
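For example, a hedged sketch of an explicit pool definition (the name, sizes, and counts are illustrative only, and assume the DATAnK/INDEXnK buffer attributes and MAXKEYLENGTH of the LSRPOOL definition):

   CEDA DEFINE LSRPOOL(POOL2BUF) GROUP(FILEDEFS)
        LSRPOOLID(2)
        STRINGS(50)
        MAXKEYLENGTH(24)
        DATA4K(120)
        INDEX2K(60)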
How monitored
The effects of these parameters can be monitored through transaction response
times and data set and paging I/O rates. The effects are reflected in both the file
and LSR pool statistics. The CICS file statistics show data set activity to VSAM data sets.
The VSAM catalog and RMF can show data set activity, I/O contention, space
usage, and CI size.
Effects
The STRINGS parameter for files using NSR has the following effects:
v It specifies the number of concurrent asynchronous requests that can be made
against that specific file.
v It is used as the STRINGS in the VSAM ACB.
v It is used, in conjunction with the BASE parameter, to calculate the VSAM
BSTRNO.
v A number greater than 1 can adversely affect performance for ESDS files used
exclusively in write mode. With a string number greater than 1, the cost of
invalidating the buffers for each of the strings is greater than waiting for the
string, and there can be a significant increase in the number of VSAM EXCP
requests.
Strings represent a significant part of the virtual storage requirement of most CICS
systems. With CICS, this storage is above the 16MB line.
Where useful
The STRINGS parameter should be used in CICS systems that use VSAM NSR files
in CICS file control.
Limitations
A maximum of 255 strings can be used as the STRNO or BSTRNO in the ACB.
Recommendations
See “Number of strings considerations for ESDS files” on page 229 and “VSAM
considerations: general objectives” on page 225.
How implemented
The number of strings is defined by the STRINGS parameter in the CICS file
definition on the CSD. It corresponds to the VSAM parameter in the ACB except
where a base file is opened as the first for a VSAM data set; in this case, the
CICS-accumulated BSTRNO value is used as the STRNO for the ACB.
How monitored
The effects of the STRINGS parameter can be seen in increased response times and
monitored by the string queueing statistics for each file definition. RMF can show
I/O contention in the DASD subsystem.
Effects
The STRINGS parameter relating to files using LSR has the following effects:
v It specifies the number of concurrent requests that can be made against that
specific file.
v It is used by CICS to calculate the number of strings and buffers for the LSR
pool.
v It is used as the STRINGS for the VSAM LSR pool.
v It is used by CICS to limit requests to the pools to prevent a VSAM
short-on-strings condition (note that CICS calculates the number of strings
required per request).
v A number greater than 1 can adversely affect performance for ESDS files used
exclusively in write mode. With a string number greater than 1, the cost of
resolving exclusive control conflicts is greater than waiting for a string. Each
time exclusive control is returned, a GETMAIN is issued for a message area,
followed by a second call to VSAM to obtain the owner of the control interval.
Where useful
The STRINGS parameter can be used in CICS systems with VSAM data sets.
Limitations
A maximum of 255 strings is allowed per pool.
Recommendations
See “Number of strings considerations for ESDS files” on page 229 and “VSAM
considerations: general objectives” on page 225.
How implemented
The number of strings is defined by the STRINGS parameter in the file definition
on the CSD, which limits the concurrent activity for that particular file.
How monitored
The effects of the STRINGS parameter can be seen in increased response times for
each file entry. The CICS LSRPOOL statistics give information on the number of
data set accesses and the highest number of requests for a string.
Examination of the string numbers in the CICS statistics shows that there is a
two-level check on string numbers available: one at the data set level (see “File
control” on page 385), and one at the shared resource pool level (see “LSRpool” on
page 416).
Effects
The KEYLENGTH parameter causes the “placeholder” control blocks to be built
with space for the largest key that can be used with the LSR pool. If the
KEYLENGTH specified is too small, it prevents the use of files that have a longer
key length.
Where useful
The KEYLENGTH parameter can be used in CICS systems with VSAM data sets.
Recommendations
See “VSAM considerations: general objectives” on page 225.
The key length should always be as large as, or larger than, the largest key for files
using the LSR pool.
How implemented
The maximum key length is defined in the KEYLENGTH parameter in the
file definition on the CSD. For more information about the CSD, see the CICS
Resource Definition Guide.
Effects
The method used by CICS to calculate LSR pool parameters and the use of the
SHARELIMIT value is described in “VSAM considerations: general objectives” on
page 225.
This parameter has no effect if both the BUFFERS and the STRINGS parameters
are specified for the pool.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The SHARELIMIT parameter is specified in the LSR pool definition. For more
information, see the CICS Resource Definition Guide.
Effects
CICS always builds a control block for LSR pool 1. CICS builds control blocks for
other pools if either an LSR pool definition is installed, or a file definition at CICS
initialization time has LSRPOOL= defined with the number of the pool.
Where useful
VSAM local shared resources can be used in CICS systems that use VSAM.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
CICS uses the parameters provided in the LSR pool definition to build the LSR
pool.
How monitored
VSAM LSR can be monitored by means of response times, paging rates, and CICS
LSRPOOL statistics. The CICS LSRPOOL statistics show string usage, data set
activity, and buffer lookasides (see “LSRpool” on page 416).
Hiperspace buffers
VSAM Hiperspace buffers reside in MVS expanded storage. These buffers are
backed only by expanded storage. If the system determines that a particular page
of this expanded storage is to be used for another purpose, the current page’s
contents are discarded rather than paged out. If VSAM subsequently requires this
page, the data must be read in again from DASD.
Effects
The use of a very large number of Hiperspace buffers can reduce both physical
I/O and pathlength when accessing your CICS files because the chance of finding
the required records already in storage is relatively high.
Limitations
Because the amount of expanded storage is limited, it is possible that the
installation will overcommit its use and VSAM may be unable to allocate all of the
Hiperspace buffers requested. MVS may use expanded storage pages for purposes
other than those allocated to VSAM Hiperspace buffers. In this case CICS
continues processing using whatever buffers are available.
If address space buffers were similarly overallocated, the system would have to
page. Overallocation of address space buffers is likely to degrade CICS
performance seriously, whereas overallocation of Hiperspace buffers is not.
Hiperspace buffer contents are lost when an address space is swapped out. This
causes increased I/O activity when the address space is swapped in again. If you use
Hiperspace buffers, you should consider making the CICS address space
nonswappable.
Recommendations
Keeping data in memory is usually very effective in reducing the CPU costs
provided adequate central and expanded storage is available. Using mostly
Hiperspace rather than all address space buffers can be the most effective option
especially in environments where there are more pressing demands for central
storage than VSAM data.
How implemented
CICS never requests Hiperspace buffers as a result of its own resource calculations.
You have to specify the size and number of virtual buffers and Hiperspace buffers
that you need.
You can use the RDO parameters of HSDATA and HSINDEX, which are added to
the LSRPOOL definition to specify Hiperspace buffers. Using this method you can
adjust the balance between Hiperspace buffers and virtual buffers for your system.
For further details of the CEDA transaction, see the CICS Resource Definition Guide.
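For example, a hedged sketch of an LSRPOOL definition that specifies both
virtual and Hiperspace buffers follows; the pool and group names, and the buffer
counts, are illustrative only:
CEDA DEFINE LSRPOOL(LSRPOOL2) GROUP(POOLGRP)
     LSRPOOLID(2)
     DATA4K(20) HSDATA4K(200)
     INDEX4K(10) HSINDEX4K(100)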
Effects
The objective of subtasks is to increase the maximum throughput of a single CICS
system on multiprocessors. However, the intertask communication increases total
processor utilization.
When I/O is done on subtasks, any extended response time which would cause
the CICS region to stop, such as CI/CA splitting in NSR pools, causes only the
additional TCB to stop. This may allow more throughput in a region that has very
many CA splits in its files, but this has to be assessed cautiously with regard to the
extra overhead associated with using the subtask.
Limitations
Subtasking can improve throughput only in multiprocessor MVS images, because
additional processor cycles are required to run the extra subtask. For that reason,
we do not recommend the use of this facility on uniprocessors (UPs). It should be
used only for a region that reaches the maximum capacity of one processor in a
complex that has spare processor capacity or has NSR files that undergo frequent
CI/CA splitting.
Regions that do not contain significant amounts of VSAM data set activity
(particularly update activity) do not gain from VSAM subtasking.
Application task elapsed time may increase or decrease because of conflict between
subtasking overheads and better use of multiprocessors. Task-related DSA
occupancy increases or decreases proportionately.
Recommendations
SUBTSKS=1 should normally be specified only when the CICS system is run on an
MVS image with two or more processors and the peak processor utilization due to
the CICS main TCB in a region exceeds, say, about 70% of one processor, and a
significant amount of I/O activity within the CICS address space is eligible for
subtasking.
The maximum system throughput of this sort of CICS region can be increased by
using the I/O subtask, but at the expense of some additional processing for
communication between the subtask and the MVS task under which the
transaction processing is performed. This additional processing is seldom justified
unless the CICS region has reached or is approaching its throughput limit.
A TOR that is largely or exclusively routing transactions to one or more AORs has
very little I/O that is eligible for subtasking. It is not, therefore, a good candidate
for subtasking.
Subtasking should be considered for a busy FOR that often has a significant
amount of VSAM I/O (but remember that DL/I processing of VSAM data sets is
not subtasked).
| How monitored
| CICS dispatcher domain statistics include information about the modes of TCB
| listed in “Subtasking: VSAM (SUBTSKS=1)” on page 241.
|
Data tables
Data tables enable you to build, maintain and have rapid access to data records
contained in tables held in virtual storage above the 16MB line. Therefore, they can
provide a substantial performance benefit by reducing DASD I/O and pathlength
resources. The pathlength to retrieve a record from a data table is significantly
shorter than that to retrieve a record already in a VSAM buffer.
Effects
v After the initial data table load operation, DASD I/O can be eliminated for all
user-maintained and for read-only CICS-maintained data tables.
v Reductions in DASD I/O for CICS-maintained data tables are dependent on the
READ/WRITE ratio. This is a ratio of the number of READs to WRITEs that
was experienced on the source data set, prior to the data table implementation.
They also depend on the data table READ-hit ratio, that is, the number of
READs that are satisfied by the table, compared with the number of requests
that go against the source data set.
v CICS file control processor consumption can be reduced by up to 70%. This is
dependent on the file design and activity, and is given here as a general
guideline only. Actual results vary from installation to installation.
For CICS-maintained data tables, CICS ensures the synchronization of source data
set and data table changes. When a file is recoverable, the necessary
synchronization is already effected by the existing record locking. When the file is
nonrecoverable, there is no CICS record locking and the note string position (NSP)
mechanism is used instead for all update requests. This may have a small
performance impact of additional VSAM ENDREQ requests in some instances.
Recommendations
v Remember that data tables are defined by two RDO parameters of the file
definition, TABLE and MAXNUMRECS. No other changes are required.
v Start off gradually by selecting only one or two candidates. You may want to
start with a CICS-maintained data table because this simplifies recovery
considerations.
v Select a CICS-maintained data table with a high READ to WRITE ratio. This
information can be found in the CICS LSRPOOL statistics (see page 416) or by
running a VSAM LISTCAT job.
v READ INTO is recommended, because READ SET incurs slightly more internal
overhead.
How implemented
Data tables can be defined using either the DEFINE FILE command of the CEDx
transaction or the DFHCSDUP utility program. See the CICS Resource Definition
Guide for more information.
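As an illustration (the names and values are hypothetical), a CICS-maintained
data table could be defined with:
CEDA DEFINE FILE(CUSTFIL) GROUP(DTGRP)
     DSNAME(CICSTS13.CUSTOMER.KSDS)
     TABLE(CICS) MAXNUMRECS(50000)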
How monitored
Performance statistics are gathered to assess the effectiveness of the data table.
They are in addition to those available through the standard CICS file statistics.
Coupling facility data tables
| A coupling facility data table (CFDT) is similar in many ways to a shared
| user-maintained data table (UMT), and the API used to store and retrieve the
| data is based on the file control API used for user-maintained data tables. Unlike
| a UMT, the data is not kept in a dataspace in an MVS image, but in a coupling
| facility list structure managed by a CFDT server.
| CFDTs are particularly useful for informal shared data. Uses could include a
| sysplex-wide shared scratchpad, look-up tables of telephone numbers, and creating
| a subset of customers from a customer list. Compared with existing methods of
| sharing data of this kind, such as shared data tables, shared temporary storage or
| RLS files, CFDTs offer some distinct advantages:
| v If the data is frequently accessed for modification, CFDT provides superior
| performance compared with function-shipped UMT requests, or using an RLS
| file
| v CFDT-held data can be recoverable within a CICS transaction. Recovery of the
| structure is not supported, but the CFDT server can recover from a unit of work
| failure, and in the event of a CICS region failure, a CFDT server failure, and an
| MVS failure (that is, updates made by units of work that were in-flight at the
| time of the failure are backed out). Such recoverability is not provided by shared
| temporary storage.
| There are two models of coupling facility data table: a contention model and a
| locking model.
| The locking model causes records to be locked following a read for update request
| so that multiple updates cannot occur.
| The relative cost of using update models and recovery is related to the number of
| coupling facility accesses needed to support a request. Contention requires the least
| number of accesses, but if the data is changed, additional programming and
| coupling facility accesses would be needed to handle this condition. Locking
| requires more coupling facility accesses, but does mean a request will not need to
| be retried, whereas retries can be required when using the contention model.
| Recovery also requires further coupling facility accesses, because the recovery data
| is kept in the coupling facility list structure.
| The following table shows the number of coupling facility accesses needed to
| support each CFDT request type, by update model:
|
|                         Contention   Locking   Recoverable
| Open, Close                  3          3           6
| Read, Point                  1          1           1
| Write new record             1          1           2
| Read for update              1          2           2
| Unlock                       0          1           1
| Rewrite                      1          1           3
| Delete                       1          1           2
| Delete by key                1          2           3
| Syncpoint                    0          0           3
| Lock WAIT                    0          2           2
| Lock POST                    0          2           2
| Cross-system POST            0          2 per       2 per
|                                         waiting     waiting
|                                         server      server
| Locking model
| Records held in a coupling facility list structure are marked as locked by updating
| the adjunct area associated with the coupling facility list structure element that
| holds the data. Locking a record requires an additional coupling facility access to
| set the lock, having determined on the first access that the data was not already
| locked.
| If, however, there is an update conflict, a number of extra coupling facility accesses
| are needed, as described in the following sequence of events:
| 1. The request that hits lock contention is initially rejected.
| 2. The requester modifies the locked record adjunct area to express an interest in
| it. This is a second extra coupling facility access for the lock waiter.
| 3. The lock owner has its update rejected because the record adjunct area has
| been modified, requiring the CICS region to re-read and retry the update. This
| results in two extra coupling facility accesses.
| 4. The lock owner sends a lock release notification message. If the lock was
| requested by a different server, this results in a coupling facility access to write
| a notification message to the other server and a coupling facility access to read
| it on the other side.
| Contention model
| The contention update model uses the entry version number to keep track of
| changes. The entry version number is changed each time the record is updated.
| This allows an update request to check that the record has not been altered since
| its copy of the record was acquired.
| When an update conflict occurs, additional coupling facility accesses are needed:
| v The request that detects that the record has changed is initially rejected and a
| CHANGED response is sent.
| v The application receiving the response has to decide whether to retry the
| request.
| Recommendations
| Choose an appropriate use for a CFDT: for example, cross-system, recoverable
| scratchpad storage, where shared temporary storage does not give the required
| function, or where VSAM RLS incurs too much overhead.
| A large file requires a large amount of coupling facility storage to contain it.
| Smaller files are better CFDT candidates (unless your application is written to
| control the number of records held in a CFDT).
| The additional cost of using a locking model compared with a contention model is
| not great. Considering that using the contention model may need application
| changes if you are using an existing program, locking is probably the best choice
| of update model for your CFDT. If coupling facility accesses are critical to you,
| they are minimized by the contention model.
| Recovery costs slightly more in CPU usage and in coupling facility utilisation.
| Allow for expansion when sizing the CFDT. The amount of coupling facility
| storage a structure occupies can be increased dynamically up to the maximum
| defined in the associated coupling facility resource management (CFRM) policy
| with a SETXCF ALTER command. The MAXTABLES value defined to the CFDT
| server should allow for expansion. Therefore, consider setting it to a value higher
| than the number of tables you expect to need.
| The utilization of the CFDT should be regularly monitored both through CICS and
| CFDT statistics and RMF. Check that the size of the structure is reasonable for the
| amount of data it contains. A maximum usage of 80% is a reasonable target.
| Defining a maximum coupling facility list structure size in the CFRM policy
| definition to be greater than the initial allocation size specified by the POOLSIZE
| parameter in the CFDT server startup parameters enables you to enlarge the
| structure dynamically with a SETXCF ALTER command if the structure does fill in
| extraordinary circumstances.
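For example, assuming the structure is named DFHCFLS_PERFCFT2 (the name
used in the reports later in this section) and that the CFRM policy permits the
new size, the operator command might be:
SETXCF START,ALTER,STRNAME=DFHCFLS_PERFCFT2,SIZE=16384
The SIZE value is in units of 1KB and is illustrative only.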
| Ensure that the AXMPGANY storage pool is large enough. This can be increased
| by increasing the REGION size for the CFDT server. Insufficient AXMPGANY
| storage may lead to 80A abends in the CFDT server.
| How implemented
| A CFDT is defined to a CICS region using a FILE definition with the following
| parameters:
| v TABLE(CF)
| v MAXNUMRECS(NOLIMIT|number(1 through 99999999))
| v CFDTPOOL(pool_name)
| v TABLENAME(name)
| v UPDATEMODEL(CONTENTION|LOCKING)
| v LOAD(NO|YES)
| MAXNUMRECS specifies the maximum number of records that the CFDT can
| hold.
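A hedged sketch of a CFDT file definition using the DFHCSDUP utility follows;
the data set, file, group, pool, and table names are hypothetical:
//DEFCFDT  EXEC PGM=DFHCSDUP
//* Hypothetical names throughout; adjust data set names to your installation
//STEPLIB  DD DSN=CICSTS13.CICS.SDFHLOAD,DISP=SHR
//DFHCSD   DD DSN=CICSTS13.CICS.DFHCSD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 DEFINE FILE(ACCTCFDT) GROUP(CFDTGRP)
        TABLE(CF) MAXNUMRECS(100000)
        CFDTPOOL(PRODCFT1) TABLENAME(ACCOUNTS)
        UPDATEMODEL(LOCKING) LOAD(NO)
/*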
| The first CICS region to open the CFDT determines the attributes for the file. Once
| opened successfully, these attributes remain associated with the CFDT through the
| data in the coupling facility list structure. Unless this table or coupling facility list
| structure is deleted or altered by a CFDT server operator command, the attributes
| persist even after CICS and CFDT server restarts. Other CICS regions attempting to
| open the CFDT must have a consistent definition of the CFDT, for example using
| the same update model.
| The CFDT server controls the coupling facility list structure and the data tables
| held in this structure. The parameters documented in the CICS System Definition
| Guide describe how initial structure size, structure element size, and
| entry-to-element ratio can be specified.
| How monitored
| Both CICS and the CFDT server produce statistics records. These are described in
| “Appendix C. Coupling facility data tables server statistics” on page 509.
| The CICS file statistics report the various requests by type issued against each
| CFDT. They also report if the CFDT becomes full, the highest number of records
| held and a Changed Response/Lock Wait count. This last item can be used to
| determine for a contention CFDT how many times the CHANGED condition was
| returned. For a locking CFDT this count reports how many times requests were
| made to wait because the requested record was already locked.
| The CFDT server statistics show the amount of space currently used in a coupling
| facility list structure (Size) and the maximum size (Max size) defined for the
| structure. The structure size can be increased by using a SETXCF ALTER
| command. The number of lists defined is determined by the MAXTABLES
| parameter for the CFDT server; for example, a structure that supports up to
| 100 data tables also requires 37 lists for control information.
| Each list entry comprises a fixed length section for entry controls and a variable
| number of data elements. The size of these elements is fixed when the structure is
| first allocated in the coupling facility, and is specified to the CFDT server by the
| ELEMSIZE parameter. The allocation of coupling facility space between entry
| controls and elements will be altered automatically and dynamically by the CFDT
| server to improve space utilization if necessary.
| The reserve space is used to ensure that rewrites and server internal operations can
| still function if a structure fills with user data.
| The amount of storage used within the CFDT region to support AXM requests is
| also reported. For example:
| AXMPG0004I Usage statistics for storage page pool AXMPGANY:
| Size In Use Max Used Free Min Free
| 30852K 636K 672K 30216K 30180K
| 100% 2% 2% 98% 98%
| Gets Frees Retries Fails
| 3122 3098 0 0
| AXMPG0004I Usage statistics for storage page pool AXMPGLOW:
| Size In Use Max Used Free Min Free
| 440K 12K 12K 428K 428K
| 100% 3% 3% 97% 97%
| Gets Frees Retries Fails
| 3 0 0 0
| The CFDT server uses storage in its own region for the AXMPGANY and
| AXMPGLOW storage pools. AXMPGANY accounts for most of the available
| storage above the 16MB line in the region; AXMPGLOW accounts for the storage
| below the 16MB line.
| RMF reports
| In addition to the statistics produced by CICS and the CFDT server, you can
| monitor the performance and use of the coupling facility list structure using the
| RMF facilities available on OS/390. A ‘Coupling Facility Activity’ report can be
| used to review the use of a coupling facility list structure. For example, this section
| of the report shows the DFHCFLS_PERFCFT2 structure size (12M), how much of
| the coupling facility is occupied (0.6%), some information on the requests handled,
| and how this structure has allocated and used the entries and data elements within
| this particular list structure.
| % OF % OF AVG LST/DIR DATA LOCK DIR REC/
| STRUCTURE ALLOC CF # ALL REQ/ ENTRIES ELEMENTS ENTRIES DIR REC
| TYPE NAME STATUS CHG SIZE STORAGE REQ REQ SEC TOT/CUR TOT/CUR TOT/CUR XI'S
|
| LIST DFHCFLS_PERFCFT2 ACTIVE 12M 0.6% 43530 93.2% 169.38 3837 39K N/A N/A
| 1508 11K N/A N/A
| RMF will also report on the activity (performance) of each structure, for example:
|
|
| STRUCTURE NAME = DFHCFLS_PERFCFT2 TYPE = LIST
| # REQ -------------- REQUESTS ------------- -------------- DELAYED REQUESTS -------------
| SYSTEM TOTAL # % OF -SERV TIME(MIC)- REASON # % OF ---- AVG TIME(MIC) -----
| NAME AVG/SEC REQ ALL AVG STD_DEV REQ REQ /DEL STD_DEV /ALL
|
| MV2A 43530 SYNC 21K 49.3% 130.2 39.1
| 169.4 ASYNC 22K 50.7% 632.7 377.7 NO SCH 0 0.0% 0.0 0.0 0.0
| CHNGD 0 0.0% INCLUDED IN ASYNC
| DUMP 0 0.0% 0.0 0.0
| This report shows how many requests were processed for the structure
| DFHCFLS_PERFCFT2 and average service times (response times) for the two
| categories of requests, synchronous and asynchronous. Be aware that requests of
| greater than 4K are handled asynchronously. For an asynchronous request, the
| CICS region can continue to execute other work and is informed when the request
| completes. CICS waits for a synchronous request to complete, but these are
| generally very short periods. The example above shows an average service time of
| 130.2 microseconds (millionths of a second). CICS monitoring records show the delay
| time a transaction incurs waiting for a CFDT response. In the example above, a
| mixed workload of small and large files was used. You can see from the SERV
| TIME values that, on average, the ASYNC requests took nearly 5 times longer to
| process and that there was a wide variation in service times for these requests. The
| STD_DEV value for SYNC requests is much smaller.
|
| VSAM record-level sharing (RLS)
| VSAM record-level sharing (RLS) is a VSAM data set access mode, introduced in
| DFSMS™ Version 1 Release 3, and supported by CICS. RLS enables VSAM data to
| be shared, with full update capability, between many applications running in many
| CICS regions. With RLS, CICS regions that share VSAM data sets can reside in one
| or more MVS images within an MVS parallel sysplex.
| RLS also provides some benefits when data sets are being shared between CICS
| regions and batch jobs.
| Effects
| There is an increase in CPU cost when using RLS compared with function-shipping
| to an FOR using MRO. When measuring CPU usage using the standard DSW
| workload, the following comparisons were noted:
| v Switching from local file access to function-shipping across MRO cross-memory
| (XM) connections incurred an increase of 7.02 ms per transaction in a single
| CPC.
| v Switching from MRO XM to RLS incurred an increase of 8.20 ms per transaction
| in a single CPC.
| v Switching from XCF/MRO to RLS using two CPCs produced a reduction of
| 2.39 ms per transaction.
| v Switching from RLS using one CPC to RLS using two CPCs made no
| appreciable difference.
| However, performance measurements on their own don’t tell the whole story, and
| do not take account of other factors, such as:
| v As more and more applications need to share the same VSAM data, the load
| increases on the single file-owning region (FOR) to a point where the FOR can
| become a throughput bottleneck. The FOR is restricted, because of the CICS
| internal architecture, to the use of a single TCB for user tasks, which means that
| a CICS region generally does not exploit multiple CPs.
| v Session management becomes more difficult as more and more AORs connect
| to the FOR.
| v In some circumstances, high levels of activity can cause CI lock contention,
| causing transactions to wait for a lock even though the specific record being
| accessed is not itself locked.
| These negative aspects of using an FOR are resolved by using RLS, which provides
| the scalability lacking in a FOR.
| How implemented
| To use RLS access mode with CICS files:
| 1. Define the required sharing control data sets
| 2. Specify the RLS_MAX_POOL_SIZE parameter in the IGDSMSxx SYS1.PARMLIB
| member.
| 3. Ensure the SMSVSAM server is started in the MVS image in which you want
| RLS support.
| 4. Specify the system initialization parameter RLS=YES. This enables CICS to
| register automatically with the SMSVSAM server by opening the control ACB
| during CICS initialization. RLS support cannot be enabled dynamically later if
| you start CICS with RLS=NO.
| 5. Ensure that the data sets you plan to use in RLS-access mode are defined, using
| Access Method Services (AMS), with the required recovery attributes using the
| LOG and LOGSTREAMID parameters on the IDCAMS DEFINE statements. If
| you are going to use an existing data set that was defined without these
| attributes, redefine the data set with them specified.
| 6. Specify RLSACCESS(YES) on the file resource definition.
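For step 5, a minimal IDCAMS sketch follows; the cluster and log stream names,
key and record sizes, and space allocation are hypothetical:
DEFINE CLUSTER (NAME(PROD.ACCOUNTS.KSDS) -
       INDEXED KEYS(16 0) RECORDSIZE(250 500) -
       CYLINDERS(50 10) -
       LOG(ALL) -
       LOGSTREAMID(PROD.ACCOUNTS.FWDRECOV))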
| This chapter has covered the three different modes that CICS can use to access a
| VSAM file. These are non-shared resources (NSR) mode, local shared resources
| (LSR) mode, and record-level sharing (RLS) mode. (CICS does not support VSAM
| global shared resources (GSR) access mode.) The mode of access is not a property
| of the data set itself—it is a property of the way that the data set is opened. This
| means that a given data set can be opened by a user in NSR mode at one time,
| and RLS mode at another. The term non-RLS mode is used as a generic term to
| refer to the NSR or LSR access modes supported by CICS. Mixed-mode operation
| means a data set that is opened in RLS mode and a non-RLS mode concurrently,
| by different users.
| How monitored
| Using RLS-access mode for VSAM files involves SMSVSAM as well as the CICS
| region issuing the file control requests. This means monitoring the performance of
| both CICS and SMSVSAM to get the full picture, using a combination of CICS
| performance monitoring data and SMF Type 42 records written by SMSVSAM:
| CICS monitoring
| For RLS access, CICS writes performance class records to SMF containing:
| v RLS CPU time on the SMSVSAM SRB
| v RLS wait time.
| SMSVSAM SMF data
| SMSVSAM writes Type 42 records, subtypes 15, 16, 17, 18, and 19,
| providing information about coupling facility cache sets, structures, locking
| statistics, CPU usage, and so on. This information can be analyzed using
| RMF III post processing reports.
| The following is an example of the JCL that you can use to obtain a report of
| SMSVSAM data:
| //RMFCF JOB (accounting_information),MSGCLASS=A,MSGLEVEL=(1,1),CLASS=A
| //STEP1 EXEC PGM=IFASMFDP
| //DUMPIN DD DSN=SYS1.MV2A.MANA,DISP=SHR
| //DUMPOUT DD DSN=&&SMF,UNIT=SYSDA,
| // DISP=(NEW,PASS),SPACE=(CYL,(10,10))
| //SYSPRINT DD SYSOUT=*
| //SYSIN DD *
| INDD(DUMPIN,OPTIONS(DUMP))
| OUTDD(DUMPOUT,TYPE(000:255))
| //POST EXEC PGM=ERBRMFPP,REGION=0M
| //MFPINPUT DD DSN=&&SMF,DISP=(OLD,PASS)
| //SYSUDUMP DD SYSOUT=A
| //SYSOUT DD SYSOUT=A
| //SYSPRINT DD SYSOUT=A
| //MFPMSGDS DD SYSOUT=A
| //SYSIN DD *
| NOSUMMARY
| SYSRPTS(CF)
| SYSOUT(A)
| REPORTS(XCF)
| /*
|
| CICS file control statistics contain the usual information about the numbers of file
| control requests issued in the CICS region. They also identify which files are
| accessed in RLS mode and provide counts of RLS timeouts. They do not contain
| EXCP counts, or any information about the SMSVSAM server, its buffer usage,
| or its accesses to the coupling facility.
|
| Overview
| The high level of abstraction required for Java or any OO language involves
| increased layering and more dynamic runtime binding as a necessary part of the
| language. This incurs extra runtime performance cost.
| The benefits of using Java language support include the ease of use of Object
| Oriented programming, and access to existing CICS applications and data from
| Java program objects. The cost of these benefits is currently runtime CPU and
| storage. Although there is a significant initialization cost, even for a Java program
| object built with ET/390, that cost amounts to only a few milliseconds of CPU time
| on the latest S/390® G5 processors. You should not see a noticeable increase in
| response time for a transaction written in Java unless CPU is constrained, although
| there will be a noticeable increase in CPU utilization. You can, however, take
| advantage of the scalability of the CICSplex architecture, and in particular, its
| parallel sysplex capabilities, to scale transaction rates.
|
| Performance considerations
| The main areas that may affect the CPU costs associated with running Java
| program objects with CICS, are discussed in the following sections:
| v “DLL initialization”
| v “LE runtime options” on page 256
| v “API costs” on page 257
| v “CICS system storage” on page 257
| DLL initialization
| At run time, when a Java program is initialized, all dynamic link libraries (DLLs)
| that contain functions that are referenced within that program are loaded into CICS
| storage. They remain in CICS storage until program compression occurs or they
| are explicitly refreshed using the CEMT SET NEWCOPY command. DLLs that
| have functions in them that are referenced by any of the DLLs being loaded are
| also brought into storage. This is referred to as ’aggressive loading’. Although the
| DLLs remain in storage, when they are reused by subsequent transactions, address
| resolution for all the functions within the DLLs is recalculated. Keeping the
| number of extraneous functions in diverse DLLs to a minimum can therefore
| reduce both the load time and the cost of this address resolution.
| LE runtime options
| Language environment (LE) runtime options can have a major impact on both
| storage usage and CPU costs of Java application programs running under CICS.
| The key LE runtime options for Java are STACK, HEAP and ANYHEAP. If the
| initial size for any of these options is too large, excessive storage will be allocated,
| which may result in a short-on-storage condition in the CICS region. If an initial
| value is too small, LE will issue a GETMAIN to allocate additional storage, which
| increases the CPU cost. Additional CPU cost can also be incurred due to extra
| GETMAINs and FREEMAINs if the FREE parameter is specified for any option
| where the initial size is too small.
| LE runtime options for a Java program can be specified using the -lerunopts option
| of the hpj command, which is used to invoke the VisualAge® for Java, Enterprise
| Toolkit for OS/390 (ET/390) bytecode binder to bind Java bytecodes into a fully
| bound program object. For example,
| -lerunopts="STACK(24K,4080,ANY,KEEP)"
|
| The VisualAge for Java documentation is supplied in HTML format with the
| product. For more information about LE runtime options, see the LE for OS/390 and
| VM Programming Reference manual, (SC28–1940), and the LE for OS/390
| Customization Guide, (SC28–1941).
| To get a report on the storage used by your Java program object, specify the
| following runtime option on the hpj command
|
| -lerunopts="RPTSTG(ON)"
|
| When the Java program object is invoked in a CICS system, a storage report will
| be written to the CICS CESE transient data destination, which is usually directed
| to the data set defined by the CEEMSG DD statement. The report shows the
| number of system level get storage calls, such as EXEC CICS GETMAIN, that were
| required while the application was running. To improve performance, use the
| storage report numbers as an aid in setting the initial and increment size for
| STACK, HEAP and ANYHEAP to values which will reduce the number of times
| that the language environment storage manager makes requests to acquire storage.
| RPTSTG should only be used in a test environment because of the overheads
| incurred in writing the storage report each time the Java program object is
| executed.
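For example, to set all three options explicitly on the hpj command, using the
values quoted below for DFJIIOP (tune these from your own RPTSTG output
rather than treating them as recommendations):
-lerunopts="STACK(24K,4080,ANY,KEEP) HEAP(3200K,300K,ANY,KEEP,4K,4080) ANYHEAP(20K,4080,ANY,KEEP)"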
| Performance can also be improved by turning off the Java garbage collection
| routines. You do this by setting the following LE run time option:
| -lerunopts="(envar('IBMHPJ_OPTS=-Xskipgc'))"
|
| If you do not specify values for STACK, HEAP, and ANYHEAP when you use the
| hpj command to bind your Java program object, the program inherits default
| values from hpj.
| LE run-time options can only be explicitly specified for Java program objects which
| are built with the -exe option used on the hpj command. Program objects which
| are built with the -jll option, such as CICS CORBA server programs, inherit their
| LE runtime options from the invoking program. In the case of CICS CORBA server
| programs, this is DFJIIOP. The values used by DFJIIOP are:
| STACK    24K, 4080, ANY, KEEP
| HEAP     3200K, 300K, ANY, KEEP, 4K, 4080
| ANYHEAP  20K, 4080, ANY, KEEP
| API costs
| When a Java program is executing it gains access to CICS resources via the JCICS
| classes. These classes ’wrap’ a subset of the standard CICS API and give the Java
| application the ability to read a record from a VSAM KSDS file, for example. When
| accessing these CICS resources from a Java application there is an additional cost
| over and above that associated with the invocation of the API from other
| languages, which use the CICS translator. The costs associated with these various
| CICS APIs for the other languages are documented in “Appendix G. Performance
| data” on page 641. Although the additional cost with Java can vary slightly
| depending on the number of arguments passed, for the purposes of capacity
| planning and in keeping with the methodology stated in “Appendix G.
| Performance data” on page 641, you should add 6.5K instructions to any cost listed
| for the other languages.
| There are three ways of balancing an IIOP workload over a number of CICS
| regions:
| v CICS Dynamic Program Routing
| v TCP/IP port sharing
| v Dynamic Domain Name Server (DNS) registration for TCP/IP
|
| Overview
| Java application programs can be run under CICS control in CICS Transaction
| Server for OS/390 Release 3 and later releases, using the MVS Java Virtual
| Machine (JVM), which runs unchanged within CICS.
| You can write CICS applications in Java and compile them to bytecode using any
| standard Java compiler, such as VisualAge for Java, or javac. Such programs will
| be referred to as JVM programs in order to distinguish them from Java Program
| Objects that are built using VisualAge for Java, Enterprise Toolkit for OS/390.
| When a JVM program executes, an MVS JVM running inside CICS is interpreting
| the Java bytecodes. When a Java Program Object executes, it is running OS/390
| machine code with runtime support from Language Environment (LE/370).
| Java Program Objects are restricted to a subset of the core Java classes whereas a
| JVM program can use the full Java package set.
|
| Performance considerations
| JVM programs are not recommended for high-volume, high-priority
| transactions; Java Program Objects provide better performance in the CICS
| environment. Indeed, it is expected that the vast majority of Java programs running
| in CICS will be Java Program Objects and that the restrictions concerning which
| core classes can be used by Java Program Objects will not be an issue for most
| programs.
| JVM programs execute by means of the MVS JVM interpreting the Java bytecodes.
| This interpretation will involve more CPU usage than for conventionally compiled
| programs executing platform-specific machine code. JVM programs cannot take
| advantage of the JVM Just-in-time (JIT) compiler because at present CICS cannot
| safely reuse a JVM created for one JVM program in order to run a second JVM
| program. A JVM is created and destroyed for each JVM program that is run. It is
| recommended that the JVM is run with the JIT disabled. This is the default setting
| shipped by CICS in the DFHJVMEV member of the SDFHENV dataset which
| contains the JVM tailorable options.
| A large part of the CPU overhead associated with running JVM programs is
| consumed with creating and destroying a JVM each time a JVM program is run.
| A Java Virtual Machine executing a JVM program is run inside CICS under its own
| TCB. The CICS-JVM Interface uses the Open Transaction Environment (OTE)
| function provided in CICS Transaction Server for OS/390 Release 3 to provide the
| “open TCB” under which the JVM is run. Each JVM program running in CICS is
| running its own JVM on its own open TCB. The particular type of open TCB
| provided for use by the JVM is called a J8 TCB. The priority of J8 TCBs is set
| significantly lower than that of the main CICS QR TCB to ensure that JVM
| programs, which have a high CPU cost, are treated as low priority transactions and
| so do not affect the main CICS workload being processed under the CICS QR TCB.
| Storage usage
| An MVS JVM runs under a J8 TCB within the CICS address space. It runs as a
| Unix System Services process and utilizes MVS Language Environment® services
| rather than CICS Language Environment services. That is to say, it uses the variant
| of Language Environment (LE) normally used outside of CICS, for example, by a
| batch COBOL program. The JVM uses MVS LE and not CICS LE because CICS
| LE-conforming applications do not support threading. As a result, all storage
| obtained by the JVM is MVS storage, acquired with a GETMAIN, within the CICS
| region, but outside of the CICS DSAs.
| An MVS JVM uses a significant amount of storage in the region. Multiple JVM
| instances running within CICS will require a significant increase in the CICS region
| size. Each JVM requires the following:
| v A minimum of 112K storage below the line. If the Java application utilises Java
| threads, there is an additional 5-6K of storage consumed for each thread.
| v A minimum of 19M of storage above the 16MB line. This very substantial
| amount of storage includes stack and heap storage used to create Java objects.
| The figure is based upon running the JVM with the IBM recommended values
| for stack and heap as shipped in the DFHJVMEV member of the SDFHENV
| dataset.
| The amount of storage required above the 16MB line by the JVM means that a
| minimum region size of 40M is required to run a single JVM inside CICS.
| Allowing for multiple JVM instances to run inside CICS will require a significant
| increase in region size. This may require changes to installation exits, IEALIMIT or
| IEFUSI, that are used to limit the region size of a job. Note that running with a
| default IEFUSI and specifying REGION=0M will result in a region size of 32M
| which is not enough to support a JVM.
| The amount of storage required below the line by the JVM effectively puts a
| maximum limit of 30 JVMs in a CICS address space, assuming a DSALIMIT of 4M
| and assuming enough storage above the line is available. It is recommended that
| transactions running JVM programs are limited using techniques such as
| TRANCLASS. An alternative approach could be to have a JOR, a JVM owning
| region, to which all JVM program executions are routed. Such a region would run
| only JVM workloads thereby minimising the amount of CICS DSA storage required
| and allowing the maximum amount of MVS storage to be allocated for use by
| JVMs.
Effects
The DRA allocates control blocks for the specified number of threads at DBCTL
connection time. One thread is equivalent to one MVS TCB, thus giving more
concurrency on multiprocessors. Because these threads are available for the
duration of the DBCTL connection, there is no pathlength overhead for collapsing
and reallocating thread related storage, and throughput should, therefore, be faster.
The number you specify should be large enough to cover average DL/I transaction
loads. After the MINTHRD limit is reached, additional threads are allocated up to
the MAXTHRD limit, the number specified in MAXREGN, or the maximum of
255, whichever is lowest.
When multiple CICS systems or Batch message processing programs (BMPs) are
connected to DBCTL, the sum of MINTHRD and BMPs must be less than or equal
to MAXREGN (MAXREGN is specified in the IMS sysgen macros).
Where useful
MINTHRD can be used in DBCTL systems to synchronize thread allocation with
workload requirements.
Limitations
There is a storage allocation of about 9KB per thread in the local system queue
area (LSQA) below the 16MB line.
Implementation
The MINTHRD and MAXTHRD parameters are specified in the DRA startup table
(DFSPZP).
Effects
This parameter controls the maximum number of tasks for which this CICS system
can have PSBs scheduled in DBCTL. Any request to schedule a PSB when the
MAXTHRD limit has been reached is queued by the DRA.
Where useful
MAXTHRD can be used in DBCTL systems to ensure that, at peak loads,
additional threads can be built in addition to those already allocated as a result of
MINTHRD, thus avoiding waiting for threads.
Limitations
After the MINTHRD limit is exceeded, threads continue to be built up to the
MAXTHRD limit but, because each thread’s control blocks are allocated during
PSB scheduling, the pathlength is greater for the tasks running after the MINTHRD
limit has been reached.
Implementation
The MINTHRD and MAXTHRD parameters are specified in the DRA startup table
(DFSPZP).
How monitored
DBCTL statistics are available when the CICS/DBCTL interface is shut down
normally. The MAXTHRD value is recorded (see page 364 for further information).
You can also use CICS auxiliary trace to check for queueing for threads and PSBs.
If you use DEDBs, you must define the characteristics and usage of the IMS/ESA
DEDB buffer pool. You do this by specifying parameters (including DRA startup
parameters) during IMS/ESA system definition or execution.
The number remaining when you subtract the value specified for DBFX from the
value specified for DBBF is the number of buffers available for the needs of CICS
threads. In this discussion, we have assumed a fixed number for DBFX. DBBF
must, therefore, be large enough to accommodate all batch message processing
programs (BMPs) and CICS systems that you want to connect to this DBCTL
system.
When a CICS thread connects to IMS/ESA, its DEDB buffer requirements are
specified using a normal buffer allocation (NBA) parameter. For a CICS system,
there are two NBA parameters in the DRA startup table:
1. CNBA buffers needed for the CICS system. This is taken from the total
specified in DBBF.
2. FPBUF buffers to be given to each CICS thread. This is taken from the number
specified in CNBA. FPBUF is used for each thread that requests DEDB
resources, and so should be large enough to handle the requirements of any
application that can run in the CICS system.
A CICS system may fail to connect to DBCTL if its CNBA value is more than that
available from DBBF. An application may receive schedule failure if the FPBUF
value is more than that available from CNBA. The FPBUF value is used when an
application tries to schedule a PSB that contains DEDBs.
When a CICS system has connected to DBCTL successfully, and the application has
successfully scheduled a PSB containing DEDBs, the DRA startup parameter
FPBOF becomes relevant. FPBOF specifies the number of overflow buffers each
thread gets if it exceeds FPBUF. These buffers are not taken from CNBA. Instead,
they are buffers that are serially shared by all CICS applications or other
dependent regions that are currently exceeding their normal buffer allocation
(NBA).
Where useful
The DBCTL DEDB parameters are useful in tuning a CICS/DBCTL DEDB fastpath
environment.
Recommendations
In a CICS/DBCTL environment, the main performance concern is the trade-off
between speed and concurrency. The size of this trade-off is dictated by the kind of
applications you are running in the CICS system.
The more the buffer requirements of your applications vary, the greater the
trade-off. If you want to maintain speed of access (because overflow buffer
allocations (OBAs) are not being used) but decrease concurrency, you should
increase the value of FPBUF. If you
prefer to maintain concurrency, do not increase the value of FPBUF. However,
speed of access decreases because this and possibly other threads may need to use
the OBA function.
For further guidance on DEDB buffer specification and tuning, see the information
on DEDBs in the IMS/ESA Database Administration Guide, and the IMS/ESA System
Administration Guide.
How implemented
DBBF and DBFX are parameters defined during DBCTL system generation or at
DBCTL initialization. CNBA, FPBUF, and FPBOF are defined in the DRA startup
table (DFSPZP).
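As an illustration only, the relevant DFSPRP keywords in a DRA startup table
module might be coded as follows; all values are hypothetical, a real module
specifies further keywords, and the continuation characters must be in column 72:
         DFSPRP MINTHRD=3,                                             X
               MAXTHRD=12,                                             X
               CNBA=60,                                                X
               FPBUF=10,                                               X
               FPBOF=5
         END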
How monitored
Monitoring data at the transaction level is returned to CICS by DBCTL at schedule
end and transaction termination. This data includes information on DEDB
statistics.
Note: To obtain the monitoring data, two event monitoring points (EMPs) must be
added to your CICS monitoring control table (MCT). For information about
coding the DBCTL EMPs, see the CICS Customization Guide.
Effects
| The THREADWAIT parameter of DB2CONN and DB2ENTRY defines whether
| requests for a thread should be queued, abended, or sent to the pool thread in the
| case of a shortage of entry or command threads. If THREADWAIT=YES is
| specified instead of THREADWAIT=POOL the transaction is queued rather than
| sent to the pool thread. Using THREADWAIT=YES, therefore, avoids the thread
| initialization and termination overhead. If a transaction is made to wait because of
the lack of entry threads, a queueing arrangement is necessary; this is handled by
the CICS DB2 attachment facility. The advantage of this is that, once an entry
thread finishes its current piece of work, it continues with the next queued
transaction immediately.
You can optimize performance between CICS and DB2 by adjusting the transaction
| class limits, MXT system parameters of CICS and the THREADWAIT, TCBLIMIT,
| THREADLIMIT, and PRIORITY attributes of DB2CONN, and DB2ENTRY.
Where useful
In a high-volume, highly-utilized system using DB2.
How implemented
THREADWAIT is defined in the DB2CONN and DB2ENTRY definitions of the
CICS DB2 attachment facility.
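For example (the entry, transaction, and plan names are hypothetical):
CEDA DEFINE DB2ENTRY(ORDENT) GROUP(DB2GRP)
     TRANSID(ORD1) PLAN(ORDPLAN)
     THREADWAIT(YES) THREADLIMIT(4)
     PROTECTNUM(4) PRIORITY(HIGH)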
How monitored
The following facilities are available to monitor the CICS DB2 attachment facility.
v The CICS auxiliary trace facility and the CICS monitoring facility may be used
to trace and monitor the SQL calls issued by a specific CICS application
program.
v The CICS DB2 attachment facility command (DSNC DISPLAY) provides
information about CICS transactions accessing DB2 data, or statistical
information associated with entries in resource definition online.
v There are also various DB2 facilities which can be used. (See the DB2
Administration Guide for more information.)
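For example, the following command, issued from a CICS terminal, displays
statistics associated with the installed DB2ENTRY definitions:
DSNC DISPLAY STATISTICS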
| The CICS performance class monitoring records include the following DB2–related
| data fields:
| v The total number of DB2 EXEC SQL and instrumentation facility interface (IFI)
| requests issued by a transaction.
| v The elapsed time the transaction waited for a DB2 thread to become available.
| CICS monitoring is used in the CICS DB2 environment with the DB2 accounting
| facility, to monitor performance and to collect accounting information.
The sum of all the active threads from TSO users, all CICS and IMS systems and
| other systems accessing DB2 should not exceed CTHREAD. Otherwise, the result
could be unpredictable response times. When this occurs, a CICS DB2 attachment
facility “create thread” request is queued by DB2, and the CICS transaction is
placed in a wait state until a thread is available.
Effect
Each thread linking CICS to DB2 has a corresponding TCB in the CICS address
space. With many TCBs in an address space, the MVS dispatcher must scan more
TCBs to identify an active one, so a large number of TCBs can incur a significant
cost in processor time.
Limitations
| Increasing the TCBLIMIT value or setting up an additional CICS system with
| access to the same DB2 system may require increasing the CTHREAD parameter of
| DB2.
Recommendations
For a protected entry thread environment, implementation involves reviewing the
number of application plans and, if possible, reducing the number of plans by
combining infrequently used ones while balancing the issues of plan size and
security.
Initially, you should start with one thread per plan. In a high-volume transaction
processing environment, you can estimate the initial number by using the
occupancy time of a thread by a transaction and multiplying it with the expected
transaction rate. For example, an occupancy time of 0.2 seconds and a transaction
rate of 20 transactions per second (0.2 x 20) would give an initial thread number of
between three and four.
Effects
| When PRIORITY=HIGH is specified, transactions run at a higher priority than
CICS, so they complete sooner, saving virtual storage, releasing locks earlier, and
avoiding deadlocks or timeouts with other transactions. However, if all threads
are specified with PRIORITY=HIGH, CICS itself may effectively run at too low a
priority.
Where useful
| Setting PRIORITY=HIGH is useful for high-priority and high-volume transactions.
Limitations
A complex SQL call could spend a long time in DB2, and the CICS TCB may not
be dispatched.
Recommendations
| Set PRIORITY=HIGH for your transactions with the highest weighted average
number of SQL calls. The highest weighted average is equal to the number of SQL
| calls per transaction multiplied by the transaction frequency. Set
| PRIORITY=LOW or EQUAL for other transactions. If the CPU usage per call is
| high, you should not set PRIORITY=HIGH.
How implemented
PRIORITY is a parameter of the DB2CONN and DB2ENTRY definitions of the
CICS attachment facility.
How monitored
The facilities available to monitor the CICS DB2 attachment facility are the same
as those listed under THREADWAIT, above.
The CICS log manager supports the DASD-only option of the MVS system logger.
This means that individual CICS log streams can use either coupling facility log
structures or DASD-only logging. (For more information about the types of storage
used by CICS log streams, see the CICS Transaction Server for OS/390 Installation
Guide.)
If you have a coupling facility, the CICS Transaction Server for OS/390 Installation
Guide contains advice on how you could define each log stream, based on its
usage. For information about the relative performance of CF and DASD-only log
streams, see Table 197 on page 643.
The MVS system logger writes SMF Type 88 records containing statistics for each
connected log stream. MVS supplies in SYS1.SAMPLIB a sample reporting
program, IXGRPT1, which you can use to analyze them.
If the records show frequent occurrences of events such as structure-full or
staging-data-set-full conditions, this indicates that the logger cannot write data to
secondary storage quickly enough to keep up with incoming data, which causes
CICS to wait before it can write more data. Consider the following solutions to
resolve such problems:
v Increase the size of primary storage (that is, the size of the coupling facility
structure or, for a DASD-only log stream, the size of the staging data set), in
order to smooth out spikes in logger load.
v Reduce the data written to the log stream by not merging so many journals or
forward recovery logs onto the same stream.
v Reduce the HIGHOFFLOAD threshold percentage, the point at which the system
logger begins offloading data from primary storage to offload data sets.
v Review the size of the offload data sets. These should be large enough to avoid
too many “DASD shifts”—that is, new data set allocations. Aim for no more
than one DASD shift per hour. You can monitor the number of DASD shifts
using the SMF88EDS record.
v Examine device I/O statistics for possible contention on the I/O subsystem used
for offload data sets.
v Use faster DASD devices.
For CICS system logs, the best performance is achieved when CICS can delete log
tail data that is no longer needed before it is written to secondary storage by the
MVS system logger. To monitor that this is being achieved, your reporting program
should examine the numbers in the SMF88SIB and SMF88SAB SMF Type 88
records. These values indicate:
SMF88SIB
Data deleted from primary storage without first being written to DASD
offload data sets. For a system log stream, this value should be high in
relation to the value of SMF88SAB. For a general log stream, this value
should normally be zero.
SMF88SAB
Data deleted from primary storage after being written to DASD offload
data sets. For a system log stream, this value should be low in relation to
the value of SMF88SIB. For a general log stream, this value should
normally be high.
Note: In any SMF interval, the total number of bytes deleted from primary storage
(SMF88SIB plus SMF88SAB) may not match the total number of bytes
written to secondary storage, because data is only written to offload data
sets and then deleted from primary storage when the HIGHOFFLOAD
threshold limit is reached.
If the SMF88SAB record frequently contains high values for a CICS system log,
log tail deletion is not working effectively. Check that long-running units of work
are not preventing CICS from trimming the log, and review the AKPFREQ activity
keypoint frequency.
Average blocksize
Important
This section applies only to log streams that use coupling facility structures.
Although consideration of the average blocksize written to the coupling facility can
happen only at the level of application design, it is still worth bearing in mind
when considering the performance implications of the CICS log manager.
If the average blocksize of data being written to the coupling facility is less than
4K, the write request is processed synchronously. Not only is the operation
synchronous to CICS, but the System/390® instruction used to access the coupling
facility is also synchronous, in that it executes for as long as it takes to place the
data in the structure. For this reason, it is unwise to mix fast CPUs with slow
coupling facilities. If the access time to a particular coupling facility remains
constant, then, for synchronous accesses, the faster the CPU the more CPU cycles
are consumed by the request.
If the average blocksize of data being written to the coupling facility is greater than
4K bytes, the write request is processed asynchronously; the CICS task gives up
control and the MVS system logger posts the ECB when the write request has been
satisfied. This can result in an asynchronous request taking longer to complete than
a synchronous one. However, there is no System/390 “long instruction” to place
data into the coupling facility.
Figure 31. RMF report showing numbers of synchronous and asynchronous writes to a coupling facility
Important
This section applies only to log streams that use coupling facility structures.
Coupling facility space is divided into structures by the CFRM policy, the
maximum permitted being 255 structures. Multiple log streams can use the same
structure. Generally, the more log streams per structure, the more difficult it is to
tune the various parameters that affect the efficiency and performance of the CICS
log manager.
As far as performance considerations go, you should try to ensure that log streams
used by applications that write similar sized data records share the same structure.
The reasons for this relate to the values defined in the AVGBUFSIZE and
MAXBUFSIZE parameters on the structure definition.
List elements are units of logged data and are either 256 bytes or 512 bytes long.
List entries are index pointers to the list elements. There is one list entry per log
record. There is at least one element per log record.
If you define MAXBUFSIZE with a value greater than 65276, data is written in
512-byte elements. If you define MAXBUFSIZE with a value less than, or equal to,
65276, data is written in 256-byte elements. The maximum value permitted for this
parameter is 65532.
The proportion of the areas occupied by the list entries and the list elements is
determined by a ratio calculated as follows:
AVGBUFSIZE / element size
The resulting ratio represents the ratio, nn : 1, where nn represents element storage,
and ’1’ represents entry storage. This is subject to a minimum of 1:1. For example,
an AVGBUFSIZE of 4096 with a 256-byte element size gives a ratio of 16:1.
Each log record places an entry in the list entry area of the structure, and the data
is loaded as one or more elements in the list element area. If the list entry area
exceeds 90% of its capacity, all log streams are offloaded to DASD. DASD
offloading commences at this point, regardless of the current utilization of the log
stream, and continues until an amount of data equal to the difference between the
HIGHOFFLOAD threshold and the LOWOFFLOAD threshold has been offloaded.
For example, the list entry area may exceed 90% of its capacity while log stream A
is only 50% utilized. Its HIGHOFFLOAD threshold is 80% and its LOWOFFLOAD
threshold is 60%. Even though log stream A has not reached its HIGHOFFLOAD
threshold, or even its LOWOFFLOAD threshold, data is offloaded until 20% of the
log stream has been offloaded. This is the difference between 80% and 60%. After
the offloading operation has completed, log stream A is at 30% utilization (50%
minus 20%).
Thus, the log stream used by an application issuing very few journal write
requests may be offloaded to DASD because of frequent journal write requests by
other applications using other log streams in the same structure.
However, if multiple log streams share the same structure, a situation where list
entry storage reaches 90% utilization should only occur where all the log streams
have a similar amount of logging activity.
Recommendations
A value of 64000 for MAXBUFSIZE should be suitable for most environments and
purposes.
Limitations
If MAXBUFSIZE is set to greater than 65276, the element size is 512 bytes. With a
512-byte element, there is more likelihood of space being unused and, therefore,
of coupling facility storage being wasted.
How implemented
AVGBUFSIZE and MAXBUFSIZE are parameters for use in the IXCMIAPU
program which you would run to define coupling facility structures. For more
information, see the System/390 MVS Setting up a Sysplex manual.
How monitored
The following facilities are available to monitor the data traffic to log streams on
structures, and from log streams to DASD:
v The CICS log stream statistics. These provide a range of statistical information
including a value for ’average bytes written per write’ which you can calculate
by dividing the ’TOTAL BYTES’ value by the ’TOTAL WRITES’ value. This may
help you to tune the value for AVGBUFSIZE.
v RMF provides statistics including a value ’elements per entry’ which you can
calculate by dividing the ’TOTAL NUMBER OF ELEMENTS’ value by the
’TOTAL NUMBER OF ENTRIES’ value. This allows you to check the activity in
element units on the log stream. RMF also informs you of the proportion of
requests, per structure, that have been processed synchronously and
asynchronously. This enables you to isolate structures that hold synchronously
processed log stream requests from those that hold asynchronously processed
log stream requests.
v SMF88 records. These provide a range of statistical information, including the
number of bytes offloaded.
Important
This section assumes you are using log streams that use coupling facility
structures. However, much of it applies also to DASD-only log streams. This
is clarified in “DASD-only logging” on page 281.
Data from a log stream may be offloaded to DASD data sets when usage of
the log stream (either in the coupling facility or the staging data set) reaches its
HIGHOFFLOAD limit, specified when the log stream is defined. For a system log,
all records that have been marked for deletion are physically deleted; if, after this
has been done, the LOWOFFLOAD limit has not been reached, the oldest active
records are offloaded to DASD until LOWOFFLOAD is reached. For a general log,
the oldest data is offloaded to DASD until the LOWOFFLOAD limit is reached.
There are also situations where offloading of data from the log stream data set
occurs although the HIGHOFFLOAD threshold (and LOWOFFLOAD threshold in
some circumstances) of the log stream has not been reached. These are:
v When the HIGHOFFLOAD threshold is reached in the staging data set. If the
size of the staging data set is proportionally smaller than the log stream, the
HIGHOFFLOAD threshold is reached on the staging data set before it is reached
on the log stream data set.
In these situations, the amount of data offloaded from the log stream is determined
as follows:
(Current utilization or HIGHOFFLOAD, whichever is the greater) - LOWOFFLOAD
This is the percentage of the log stream data set that is offloaded.
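For illustration, a minimal Python sketch of this calculation (the function name
is invented for this example):

def offload_percentage(current_util, highoffload, lowoffload):
    # The greater of current utilization and HIGHOFFLOAD,
    # less LOWOFFLOAD, is offloaded to DASD.
    return max(current_util, highoffload) - lowoffload

# Log stream A from the earlier example: 50% utilized,
# HIGHOFFLOAD=80, LOWOFFLOAD=60; 20% is offloaded.
print(offload_percentage(50, 80, 60))

This reproduces the earlier example: 20% of log stream A is offloaded, leaving it
at 30% utilization.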
Recommendations
Because your requirements for data on the system log differ from those for data
on general logs, different recommendations apply in each case.
System log
When an activity keypoint happens, CICS deletes the “tail” of the primary system
log, DFHLOG. This means that data for completed units of work older than the
previous activity keypoint is deleted. Data for each incomplete unit of work older
than the previous activity keypoint is moved onto the secondary system log,
DFHSHUNT, provided that the UOW has done no logging in the current activity
keypoint interval.
To minimize the frequency of DASD offloading, try to ensure that system log data
produced during the current activity keypoint interval, plus data not deleted at the
previous activity keypoint, is always in the CF structure. To avoid offloading this
data to DASD, you are recommended to:
v Ensure that the value of LOWOFFLOAD is greater than the space required for
the sum of:
1. The system log data generated during one complete activity keypoint
interval
2. The system log data generated (between syncpoints) by your longest-running
transaction.
v Calculate a value for LOWOFFLOAD using the following formula (a sketch
illustrating the calculation follows):
trandur * 90
LOWOFFLOAD = ------------------ + 10 (where RETPD=0 is specified)
akpintvl + trandur
or
trandur * 90
LOWOFFLOAD = ------------------ (where RETPD=dddd is specified)
akpintvl + trandur
where:
– akpintvl is the interval between activity keypoints. It varies according to
workload and its calculation should be based on peak workload activity, as
follows:
AKPFREQ
akpintvl = ---------------------------------
(N1 * R1) + (N2 * R2) + (Nn * Rn)
where:
- N1, N2 ... Nn is the transaction rate for each transaction (trans/sec).
- R1, R2 ... Rn is the number of log records written by each transaction.
– trandur is the elapsed time, between syncpoints, of your longest-running
transaction.
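For illustration, a minimal Python sketch of these formulas. The transaction
rates, log record counts, and transaction duration are hypothetical values:

def akpintvl(akpfreq, work):
    # work is a list of (Nn trans/sec, Rn log records) pairs.
    return akpfreq / sum(n * r for n, r in work)

def lowoffload(trandur, akp_interval, retpd_zero=True):
    value = (trandur * 90) / (akp_interval + trandur)
    return value + 10 if retpd_zero else value

interval = akpintvl(4000, [(10, 20), (5, 40)])   # 10 seconds
print(lowoffload(3, interval))                   # about 31 (RETPD=0)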
General logs
The recommendations for forward recovery logs and user journals are different to
those for the system log. There is no requirement here to retain logged data in the
CF structure. Rather, due to the typical use of such data, you may only need a
small structure and offload the data rapidly to DASD. If this is the case, allow
HIGHOFFLOAD and LOWOFFLOAD to default (to 80 and 0 respectively).
How implemented
HIGHOFFLOAD and LOWOFFLOAD are parameters for use in the IXCMIAPU
program which you would run to define log stream models and explicitly named
individual log streams. For more information, see the System/390 MVS Setting up a
Sysplex manual.
How monitored
SMF88 records and RMF provide a range of statistical information that helps you
in the tuning of these parameters.
Important
This section assumes you are using log streams that use coupling facility
structures. For related information about DASD-only log streams, see
“DASD-only logging” on page 281.
MVS keeps a second copy of data written to the coupling facility in a data space,
for use when rebuilding a coupling facility in the event of an error. This is
satisfactory as long as the coupling facility is failure-independent of MVS (that
is, in a separate CPC, and non-volatile).
Elements (groups of log records) are written to staging data sets in blocks of 4K
bytes (not in 256-byte or 512-byte units as for log stream data sets).
Recommendations
Use the following formulae to help you tune the size of your staging data sets:
staging data set size= (NR * AVGBUFSIZE rounded up to next unit of 4096)
where NR is the number of records to fill the coupling facility structure. This can be
calculated as follows:
NR = coupling facility structure size / (AVGBUFSIZE rounded up to next element)
Ensure that the coupling facility structure and staging data set can hold the same
number of records. Staging data sets are subject to the same offloading thresholds
as log streams are. It is sensible, therefore, to ensure as far as possible that
offloading activity will be at the same frequency.
You are recommended to overestimate, rather than underestimate, staging data set
size. To calculate staging data set size to accommodate the maximum number of
records (where there is one record per element), use the following formula:
maximum staging data set size = 8 * coupling facility structure size
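For illustration, a minimal Python sketch of the staging data set sizing above.
The helper names are invented for this example, and a 256-byte element size is
assumed:

def round_up(value, unit):
    return ((value + unit - 1) // unit) * unit

def staging_ds_size(cf_structure_size, avgbufsize, element=256):
    # NR: the number of records needed to fill the CF structure.
    nr = cf_structure_size // round_up(avgbufsize, element)
    # Staging data sets are written in 4K blocks.
    return nr * round_up(avgbufsize, 4096)

# Hypothetical 10MB structure with AVGBUFSIZE=4096:
print(staging_ds_size(10 * 1024 * 1024, 4096))

The upper bound quoted above (8 * structure size) corresponds to the case of one
record per 512-byte element, each occupying a 4096-byte staging block.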
Investigate using DASD FastWrite facilities with a view to storing data in the
DASD cache, as opposed to writing it directly to the staging data set. This also
enables faster retrieval of data should it be required. Be aware, however, that
once the cache is full, data is written out to the staging data set whenever
further data is written to the cache.
The CICS delayed flush algorithm for log streams results in CICS writing log
blocks to the system log. As activity increases on the system, the size of the log
blocks increases (rather than the number of blocks written).
In summary, the AKPFREQ value determines the amount of data written to the log
stream buffer between activity keypoints.
Limitations
Increasing the AKPFREQ value has the following effects:
v Restart and XRF takeover times tend to increase.
v The amount of primary storage required for the system log increases.
Although these effects have an impact on system performance, the impact is
not overly significant.
Setting the frequency to zero means that emergency restart takes longer. If
AKPFREQ=0, CICS cannot perform log tail deletion until shutdown, by which time
the system log will have spilled to secondary storage. As CICS needs to read the
whole of the system log on an emergency restart, it needs to retrieve the spilled
system log from DASD offload data sets.
Short-term variations in the arrival rate of transactions mean that some mirror
transactions waiting to process an implicit forget can persist for some time. This is
particularly the case where such mirror transactions have been allocated to
high-numbered sessions during a peak period, now passed, of transaction arrival
rate.
Recommendations
If you set AKPFREQ too high and thus make your keypoint frequency too low, the
writing of the keypoints causes the system to slow down for only a short time. If
you set AKPFREQ too low and make your keypoint frequency too high, you may
get a short emergency restart time but you also incur increased processing, because
more activity keypoints are processed.
You are recommended to set AKPFREQ to the default value of 4000. The optimum
setting of AKPFREQ allows the whole of the system log to remain in the coupling
facility.
How implemented
Activity keypoint frequency is determined by the AKPFREQ system initialization
parameter. AKPFREQ can be altered with the CEMT SET SYSTEM[AKP(value)] command
while CICS is running.
How monitored
A message, DFHRM0205, is written to the CSMT transient data destination each
time a keypoint is taken.
DASD-only logging
The primary storage used by a DASD-only log stream consists of:
v A data space owned by the MVS logger
v Staging data sets.
No data is written to coupling facility structures. In its use of staging data sets, a
DASD-only log stream is similar to a CF log stream defined with DUPLEX(YES)
COND(NO).
When the staging data set reaches its HIGHOFFLOAD limit, data is either deleted
or offloaded until the LOWOFFLOAD limit is reached.
The principle of sizing the staging data set for a DASD-only log stream is the same
as that for sizing a staging data set for a coupling facility log stream. If you are
migrating from CICS/ESA 4.1 or CICS/ESA 3.3, you are strongly recommended to
use the DFHLSCU program to size your staging data sets. For information about
DFHLSCU, see the CICS Operations and Utilities Guide.
Staging DS size
(No. of 4K blocks) = (AKP duration) * (No. of log writes per second for system log)

where:
                     CICS TS 390 AKPFREQ
      AKP duration = -----------------------------
                     No. of buffer puts per second
The values for the number of log writes per second and buffer puts per second can
be taken from your CICS/ESA 4.1 statistics. (The value for log writes per second
should not exceed 30.)
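For illustration, a minimal Python sketch of this calculation, using
hypothetical statistics (the function name is invented for this example):

def staging_ds_4k_blocks(akpfreq, buffer_puts_per_sec, log_writes_per_sec):
    # The value for log writes per second should not exceed 30.
    log_writes_per_sec = min(log_writes_per_sec, 30)
    akp_duration = akpfreq / buffer_puts_per_sec   # seconds
    return akp_duration * log_writes_per_sec

# AKPFREQ=4000, 100 buffer puts/second, and 25 log writes/second
# give a staging data set of 1000 4K blocks.
print(staging_ds_4k_blocks(4000, 100, 25))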
If data, programs, or terminals must be shared between the systems, CICS provides
intercommunication facilities for this sharing. Two types of intercommunication are
possible:
1. Intersystem communication (ISC). ISC is implemented through the VTAM LU6.1
or LU6.2 protocols. These give program-to-program communication using
Systems Network Architecture (SNA). ISC includes facilities for function
shipping, distributed transaction processing, and transaction routing.
2. Multiregion operation (MRO). MRO is implemented through MVS cross-memory
facilities. An alternative method is to use operating system supervisor calls
(SVCs). For communication across MVS images within a SYSPLEX, MRO/XCF
is implemented using the MVS cross-system coupling facility. It includes
function shipping, distributed transaction processing, and transaction routing.
The definition of too many MRO sessions can unduly increase the processor time
used to test their associated ECBs. Use the CICS-produced statistics (see “ISC/IRC
system and mode entries” on page 396) to determine the number of MRO sessions
defined and used. For more detailed information on ISC and MRO, see the CICS
Intercommunication Guide.
MRO also allows you to use multiprocessors more fully, and the multiple address
spaces can be dispatched concurrently. MRO is implemented primarily through
changes to CICS resource definitions and job control statements for the various
regions. To relieve constraints on virtual storage, it may be effective to split the
CICS address space in this manner.
Function shipping allows you to define data sets, transient data, temporary storage,
IMS databases, or interval control functions as being remote. This facility allows
applications to request data set services from a remote region (that is, the other
CICS address space where the data sets are physically defined). Heavy use of
VSAM and DL/I resources requires large amounts of virtual storage. If, for
example, 500 VSAM KSDS data sets are moved to a remote region from the
region where the application is being run, this can potentially save more than one
megabyte.
The DL/I call and EXEC interfaces are supported for function shipping. CICS
handles the access to remote resources and returns the requested items to a
program without the need for recoding the program. Use of DL/I through DBCTL
is usually a better alternative, and IMS data sharing might also be considered.
Where useful
Most CICS systems can be split.
Limitations
Splitting a CICS region requires increased real storage, increased processor cycles,
and extensive planning.
If you only want transaction routing with MRO, the processor overhead is
relatively small. The figure is release- and system-dependent (for example, it
depends on whether you are using cross-memory hardware), but for safety, assume
a total cost somewhere in the range of 15–30K instructions per message-pair. This
is a small proportion of most transactions: commonly 10% or less.
The cost of MRO function shipping can be very much greater, because there are
normally many more inter-CICS flows per transaction. It depends greatly on the
disposition of resources across the separate CICS systems.
MRO can affect response time as well as processor time. There are delays in
getting requests from one CICS to the next. These arise because CICS terminal
control in either CICS system has to detect any request sent from the other, and
then has to process it; and also because, if you have a uniprocessor, MVS has to
arrange dispatching of two CICS systems and that must imply extra
WAIT/DISPATCH overheads and delays.
The system initialization parameter ICVTSD (see page 211) can influence the
frequency with which the terminal control program is dispatched. An ICVTSD
value in the range 300–1000 milliseconds is typical in non-MRO systems, and a
value in the range 150–300 is typical for MRO systems (and even lower if you are
using function-shipping). Another system initialization parameter is MROLRM,
which should be coded YES if you want to establish a long-running mirror task.
This saves re-establishing communications with the mirror transaction if the
application makes many function shipping requests in a unit of work.
You also have to ensure that you have enough MRO sessions defined between the
CICS systems to take your expected traffic load. They do not cost much in storage
and you certainly do not want to queue. Examine the ISC/IRC statistics to ensure
that no allocates have been queued; also ensure that all sessions are being used.
Other parameters, such as MXT, may need to be adjusted when CICS systems are
split. In an MRO system with function shipping, tasks of longer duration might
also require further adjustment of MXT together with other parameters (for
example, file string numbers, virtual storage allocation). Finally, if you plan to use
MRO, you may want to consider whether it would be advantageous to share CICS
code or application code using the MVS link pack area (LPA). Note that this saves
real storage, not virtual storage, and can benefit other non-CICS address spaces
as well. Use of
LPA for the eligible modules in CICS is controlled by the system initialization
parameter, LPA=YES; this tells CICS to search for the modules in the LPA. For
further information on the use of LPA, see “Using modules in the link pack area
(LPA/ELPA)” on page 297.
A system can be split by application function, by CICS function (such as a data set
owning or terminal owning CICS), or by a combination of the two functions.
Ideally, you should split the system completely, with no communication required
between the two parts. This can reduce overheads and planning. If this is not
possible, you must use one of the intercommunication facilities.
You can provide transaction routing between multiple copies of CICS. If additional
virtual storage is needed, it would be reasonable, for example, to split the AOR
into two or more additional CICS copies. When you have split the system either
partially or completely, you can reduce the amount of virtual storage needed for
each region by removing any unused resident programs. One consequence of this
is to reduce the size of the relevant DSA.
Admittedly, MRO uses additional processor cycles and requires more real storage
for the new address spaces. Many installations have several megabytes of program
storage, however, so the potential virtual storage savings are significant.
You should also remember that only one local or remote PSB can be scheduled at a
time with function shipping, affecting the integrity of the combined databases.
Distributed transaction processing can allow for transactions in both systems to
concurrently schedule the PSBs.
MRO generally involves less overhead than ISC because the processing of the
telecommunications access method is avoided. VTAM logons and logoffs can
provide an alternative to transaction routing if the logons and logoffs are
infrequent.
How implemented
You must define resources in the CSD (CICS system definition) data set, such as
program, file, and terminal definitions. You must also create links to other systems,
together with the connection and session definitions that substantiate such links.
You can also use XCF/MRO for distributed transaction processing, provided the
LU6.1 protocol is adequate for your purpose.
Effects
MXT primarily controls virtual storage usage, particularly to avoid
short-on-storage (SOS) conditions. It also controls contention for resources, the
length of queues (this can avoid excessive processor usage), and real storage usage.
MXT controls the number of user tasks that are eligible for dispatch. When MXT is
set (either at startup, when an EXEC CICS SET SYSTEM command is processed, or
when using a CEMT transaction) the kernel and dispatcher attempt to preallocate
sufficient control blocks to guarantee that MXT user tasks can be created
concurrently. The majority of the storage used in this preallocation is obtained from
the CDSA or ECDSA, although a small amount of MVS storage is required for each
task (approximately 256 bytes above the 16MB line, and 32 bytes below the 16MB
line for each user task). It is interrelated with the DSA size limits that you set
(DSALIM, EDSALIM).
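For illustration, a minimal Python sketch of the MVS storage preallocated for
MXT user tasks, using the approximate per-task figures quoted above (the
function name is invented for this example):

def mxt_mvs_storage(mxt):
    above = mxt * 256   # bytes above the 16MB line per user task
    below = mxt * 32    # bytes below the 16MB line per user task
    return above, below

# MXT=100: roughly 25600 bytes above and 3200 bytes below the
# line, in addition to the CDSA or ECDSA control block storage.
print(mxt_mvs_storage(100))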
Limitations
If you set MXT too low, throughput and response time can suffer even when system
resources (processor, real storage, and virtual storage) are not constrained.
If you set MXT too high at startup, CICS forces a smaller maximum number of
tasks consistent with available storage.
If you set MXT too high while running, you get the error message: “CEILING
REACHED”.
For more information about MRO considerations, and the secondary effects of the
region exit interval (ICV), see “Region exit interval (ICV)” on page 194.
Recommendations
Initially, set MXT to the number of user tasks you require concurrently in your
system by totaling the following:
v The number of concurrent long-running tasks
v Each terminal running conversational tasks
How implemented
The MXT system initialization parameter has a default value of 5, and a minimum
setting of 1. It can be altered with either CEMT or EXEC CICS SET SYSTEM
MAXTASKS commands while CICS is running.
How monitored
The CICS transaction manager statistics show the number of times the MXT ceiling
has been reached.
Effects
Together with MXT, transaction classes control the transaction “mix”; that is,
they ensure that one type of transaction does not monopolize CICS.
When the number of tasks within a class is at the specified ceiling, no additional
tasks within that class are attached until one of them terminates.
Limitations
Transaction classes are unsuitable in normal use for conversational transactions,
because the (n+1)th user may be locked out for a long time.
Recommendations
The MAXACTIVE attribute of the transaction class definition can be used to
control a specific set of tasks that may be heavy resource users, tasks of lesser
importance (for example, “Good morning” broadcast messages), and so on,
allowing processor time or storage for other tasks.
How implemented
You specify the maximum number of tasks in each transaction class using the
MAXACTIVE attribute. You specify the value of the class associated with a
particular task using the CEDA transaction definition with the TRANCLASS
attribute. Most CICS Cxxx transaction identifiers are not eligible.
How monitored
If you have divided your tasks into classes, you can use the CEMT INQUIRE
TCLASS command to provide an online report. The CICS transaction class statistics
show the number of times that the number of active transactions in the transaction
class reached the MAXACTIVE value (“Times MaxAct”).
CICS defines two Tclasses for its own use, DFHTCLSX and DFHTCLQ2. For
information about the effects these have, see “Using transaction classes DFHTCLSX
and DFHTCLQ2” on page 309.
The queued tasks occupy small amounts of storage, but if the queue becomes very long CICS
can become short-on-storage and take a considerable time to recover. Systems
where a heavy transaction load is controlled by the TRANCLASS mechanism are
most prone to being overwhelmed by the queue.
The tasks on the queue are not counted by the MXT mechanism. MXT limits the
total number of tasks that have already been admitted to the system within
TRANCLASS constraints.
Where useful
The PURGETHRESH attribute should be specified only where the transaction load
in a TRANCLASS is heavy. This is the case in a system which uses a
terminal-owning region (TOR) and multiple application-owning regions (AORs)
and where the TRANCLASSes are associated with the AORs and are used to
control the numbers of transactions attempting to use the respective AORs. In this
configuration, an AOR can slow down or stall and the associated TRANCLASS fills
(up to the value defined by MAXACTIVE) with tasks that are unable to complete
their work in the AOR. New transactions are then queued and the queue can grow
to occupy all the available storage in the CICS DSA within a few minutes,
depending on the transaction volume.
Recommendations
The size of each entry in the queue is the size of a transaction (256 bytes) plus the
size of the TIOA holding any terminal input to the transaction. There can be any
number of queues, one for each TRANCLASS that is installed in the TOR.
You can estimate a reasonable size purge threshold for the queue by multiplying
the maximum length of time you are prepared for users to wait before a
transaction is started by the maximum arrival rate of transactions in the
TRANCLASS.
Make sure that the queues cannot occupy excessive amounts of storage at their
maximum lengths.
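For illustration, a minimal Python sketch of this estimate and of the queue
storage check (the function names and values are invented for this example):

def purge_threshold(max_wait_secs, peak_arrivals_per_sec):
    # Longest acceptable wait multiplied by peak arrival rate.
    return max_wait_secs * peak_arrivals_per_sec

def queue_storage(queue_length, avg_tioa_len):
    # Each queued transaction occupies 256 bytes plus its TIOA.
    return queue_length * (256 + avg_tioa_len)

limit = purge_threshold(10, 5)              # 50 queued transactions
print(limit, queue_storage(limit, 200))     # 50, 22800 bytes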
The PURGETHRESH queuing limit should not be set so low that CICS abends
transactions unnecessarily, for example when an AOR slows down due to a
variation in the load on the CPU.
How implemented
The PURGETHRESH attribute of a TRANCLASS is used to set the limit of the
queue for that transaction class. The default action is not to limit the length of the
queue.
How monitored
To monitor the lengths of the queues for each transaction class you should use
CICS transaction class statistics. Many statistics are kept for each transaction class.
Those relating to the queue are particularly relevant here.
You can also tell how many tasks are queued and active in a transaction class at
any one time by using the CEMT INQUIRE TRANCLASS command.
You can monitor the number of AKCC abends in the CSMT log. These abends
indicate the periods when the queue limit was reached. You must correlate the
transaction codes in the abend messages with the transaction classes to determine
which limit was being reached.
Task prioritization
Prioritization is a method of giving specific tasks preference in being dispatched.
The overall priority is determined by summing the priorities in all three definitions
for any given task, with the maximum priority being 255.
TERMPRIORITY+PRIORITY+OPPRTY <= 255
The value of the PRTYAGE system initialization parameter also influences the
dispatching order; for example, PRTYAGE=1000 causes the task’s priority to
increase by 1 every 1000ms it spends on the ready queue.
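For illustration, a minimal Python sketch of these rules. The aging model is an
assumption based on the PRTYAGE description above, not a statement of the
dispatcher's actual algorithm:

def overall_priority(termpriority, priority, opprty):
    # TERMPRIORITY + PRIORITY + OPPRTY, capped at 255.
    return min(termpriority + priority + opprty, 255)

def aged_priority(base, ms_on_ready_queue, prtyage=1000):
    # Priority rises by 1 per PRTYAGE milliseconds spent waiting.
    return min(base + ms_on_ready_queue // prtyage, 255)

base = overall_priority(20, 50, 10)   # 80
print(aged_priority(base, 3000))      # 83 after 3 seconds waiting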
Effects
With CICS, the dispatching priority of a task is reassessed each time it becomes
ready for dispatch, based on clock time as well as defined priority.
A task of priority n+1 that has just become ready for dispatch is usually dispatched
ahead of a task of priority n, but only if PRTYAGE milliseconds have not elapsed
since the latter last became ready for dispatch.
Thus, a low priority task may be overtaken by many higher priority tasks in a
busy system, but eventually arrives at the top of the ready queue for a single
dispatch.
Limitations
Prioritization increases the response time for lower-priority tasks, and can distort
the regulating effects of MXT and the MAXACTIVE attribute of the transaction
class definition.
Priorities do not affect the order of servicing terminal input messages and,
therefore, the time they wait to be attached to the transaction manager.
Recommendations
Use prioritization sparingly, if at all, and only after you have already adjusted task
levels using MXT and the MAXACTIVE attribute of the transaction class definition.
It is probably best to set all tasks to the same priority, and then prioritize some
transactions either higher or lower on an exception basis, and according to the
specific constraints within a system.
Do not prioritize against slow tasks unless you can accept the longer task life and
greater dispatch overhead; these tasks are slow, in any case, and give up control
each time they have to wait for I/O.
Use small priority values and differences. Concentrate on transaction priority. Give
priority to control operator tasks rather than the person, or at least to the control
operator’s signon ID rather than to a specific physical terminal (the control
operator may move around).
Also consider giving high priority to those transactions that cause enqueues on
system resources, thus locking out other transactions; with high priority, they
can complete quickly and release those resources sooner. Examples of these are:
v Using intrapartition transient data with logical recovery
v Updating frequently used records
v Automatic logging
v Tasks needing fast application response time, for example, data entry.
PRTYAGE should usually be left to its default value, unless certain transactions get
stuck behind higher priority transactions during very busy periods.
How implemented
You specify the priority of a transaction in the CEDA TRANSACTION definition
with the PRIORITY attribute. You specify the priority for a terminal in the CEDA
terminal definition with the TERMPRIORITY attribute. You specify the priority for
an operator with the OPPRTY operand in the user segment of the external security
manager (ESM).
How monitored
There is no direct measurement of transaction priority. Indirect measurement can
be made from:
v Task priorities
v Observed transaction responses
v Overall processor, storage, and data set I/O usage.
In situations where one of the EDSAs attempts to acquire an additional extent and
there are no free extents, empty extents belonging to other EDSAs are used.
Program compression may be triggered when EDSALIM is approached and there
are few free or empty extents available. The EUDSA no longer contains programs,
and so program compression does not occur in it. The other EDSAs are evaluated
individually to determine if program compression is required.
Estimating EDSALIM
Specify EDSALIM so that there is sufficient space to accommodate all the EDSAs.
v The EDSAs (ECDSA, ESDSA, EUDSA and ERDSA) are managed by CICS as part
of EDSALIM. Because the EDSAs are managed in 1 megabyte increments
(extents), it is important to allow for fragmentation and partially used extents by
rounding up the value of EDSALIM accordingly. Because there are 4 extended
DSAs, consider rounding up each EDSA’s requirement to a megabyte boundary.
v If TRANISO=NO, you must allow 64K per concurrent active task for the
EUDSA. The safest estimate is to assume MXT as the number of concurrent
active tasks. If your applications use more than 64K per task, you must adjust
the formulas accordingly (use multiples of 64K increments if adjusting the
formula).
v If TRANISO=YES, you must allow 1 megabyte per concurrent active task for the
EUDSA. Again, the safest estimate would be to assume MXT as the number of
concurrent active tasks. If your applications use more than 1MB per task, you
must adjust the formulas accordingly (use multiples of 1MB increments if
adjusting the formula).
Kernel stack storage is allocated out of EDSA, and for more information about
kernel storage see “CICS kernel storage” on page 639.
Note: In each of the components of the calculations that follow, remember to round
their values up to a megabyte boundary. A sketch after the formulas illustrates
the calculation.
1. If you would like to specify a generous EDSA limit:
For TRANISO=NO:
ECDSA + ERDSA + EUDSA + (64K * MXT)
For TRANISO=YES:
ECDSA + ERDSA + EUDSA + (1MB * MXT)
2. If your current installation EDSALIM and MXT values are set to values larger
than necessary:
For TRANISO=NO:
Peak ECDSA Used + Peak ERDSA Used + (Peak EUDSA Used) -
(EUDSA Peak Page Storage in Task Subpools) + (64K * (Peak number of
tasks))
For TRANISO=YES:
Peak ECDSA Used + Peak ERDSA Used + (Peak EUDSA Used) -
(EUDSA Peak Page Storage in Task Subpools) + (1M * (Peak number of
tasks))
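For illustration, a minimal Python sketch of the “generous” estimate above,
rounding each component to a megabyte boundary (the function names and sizes are
invented for this example):

MB = 1024 * 1024

def round_up_mb(nbytes):
    return ((nbytes + MB - 1) // MB) * MB

def edsalim(ecdsa, erdsa, eudsa, mxt, traniso):
    # 1MB per task with TRANISO=YES, 64K per task with TRANISO=NO.
    per_task = MB if traniso else 64 * 1024
    parts = (ecdsa, erdsa, eudsa, mxt * per_task)
    return sum(round_up_mb(p) for p in parts)

# Hypothetical sizes with TRANISO=YES and MXT=60:
print(edsalim(30 * MB, 10 * MB, 8 * MB, 60, True) // MB)   # 108 (MB)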
The minimum EDSALIM is 10MB and the default value is 20MB. The maximum
EDSALIM size is (2 gigabytes - 1 megabyte).
These are guidelines for specifying initial values for the EDSA limit. The EDSALIM
can be dynamically adjusted using the CEMT command without having to stop
and restart your CICS system. The safest approach is to:
v Slightly over-specify EDSALIM initially.
v Monitor each EDSA’s usage while your system is running near peak loads.
v Tune your EDSALIM size using CEMT SET SYSTEM commands.
If you under-specify EDSALIM, your system can go short on storage, and you
may not be able to issue CEMT commands to increase the limit. If this happens,
you can use CPSM to increase the EDSA limit.
You may find that there is slightly more storage available below the line for DSA
storage. CICS pre-allocates approximately 3KB or less of kernel stack storage below
the line per task. The majority of kernel stack storage is allocated out of CICS
DSAs instead of MVS storage.
Estimating DSALIM
If you have sufficient virtual storage to adjust your DSA limit to a value greater
than the sum of your current CDSA + UDSA, the following formulas may be used.
Note: In each of the components of the calculations that follow, remember to round
their values up to a 256KB boundary. A sketch after the formulas illustrates
the calculation.
1. If you can afford to specify a generous DSA limit:
CDSA + UDSA + 256K (if both RDSA and SDSA used)
2. If your current installation DSALIM and MXT values are set to values larger
than necessary:
Peak CDSA Used + Peak UDSA Used + 256K (if both RDSA and SDSA used)
The minimum DSALIM is 2MB and the default value is 5MB. (The maximum
DSALIM size is 16MB).
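For illustration, a minimal Python sketch of the DSA limit estimate above,
rounding each component to a 256KB boundary (the function names and sizes are
invented for this example):

KB = 1024

def round_up_256k(nbytes):
    unit = 256 * KB
    return ((nbytes + unit - 1) // unit) * unit

def dsalim(cdsa, udsa, rdsa_and_sdsa_used=True):
    extra = 256 * KB if rdsa_and_sdsa_used else 0
    return round_up_256k(cdsa) + round_up_256k(udsa) + extra

# Hypothetical peaks: 2MB CDSA and 1.5MB UDSA:
print(dsalim(2048 * KB, 1536 * KB) // KB)   # 3840 (KB)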
A reduction of DSALIM or EDSALIM cannot take place if there are no DSA extents
free to MVS FREEMAIN. The storage manager will MVS FREEMAIN extents as
they become available until the new DSALIM or EDSALIM value is reached. A
short-on-storage condition may occur when reducing DSALIM or EDSALIM.
Effects
The benefits of placing code in the LPA or ELPA are:
v The code is protected from possible corruption by user applications. Because the
LPA or ELPA is in protected storage, it is virtually impossible to modify the
contents of these programs.
v Performance can be improved and the demand for real storage reduced if you
use the LPA or ELPA for program modules. If more than one copy of the same
release of CICS is running in multiple address spaces of the same processor, each
address space requires access to the CICS nucleus modules. These modules may
either be loaded into each of the address spaces or shared in the LPA or ELPA. If
they are shared in the LPA or ELPA, this can reduce the working set and
therefore, the demand for real storage (paging).
v You can decrease the storage requirement in the private area by judicious
allocation of the unused storage in the LPA or ELPA created by rounding to the
next segment.
Limitations
Putting modules in the LPA or ELPA requires an IPL of the operating system.
Maintenance requirements should also be considered. If test and production
systems are sharing LPA or ELPA modules, it may be desirable to run the test
system without the LPA or ELPA modules when new maintenance is being tested.
The disadvantage of placing too many modules in the LPA (but not the ELPA) is
that it may become excessively large. Because the boundary between the CSA and
the private area is on a segment boundary, this means that the boundary may
move down one megabyte. The size of the ELPA is not usually a problem.
Recommendations
Use the SMP/E USERMOD called LPAUMOD to select those modules that you
want to use for the LPA. This indicates the modules that are eligible for LPA or
ELPA. You can use this USERMOD to move the modules into your LPA library.
The objective is to use the LPA wisely to derive the maximum benefit from placing
modules in the LPA.
All users with multiple CICS address spaces should put all eligible modules in the
ELPA.
For information on installing modules in the LPA, see the CICS Transaction Server
for OS/390 Installation Guide.
Map alignment
CICS maps that are used by basic mapping support (BMS) can be defined as
aligned or unaligned. In aligned maps, the length field associated with a BMS data
field in the BMS DSECT is always aligned on a halfword boundary. In unaligned
maps, the length field follows on immediately from the preceding data field in the
map DSECT.
Effects
In unaligned maps, there is no guarantee that the length fields in the BMS DSECT
are halfword-aligned. Some COBOL and PL/I compilers, in this case, generate
extra code in the program, copying the contents of any such length field to, or
from, a halfword-aligned work area when its contents are referenced or changed.
Specifying map alignment removes this overhead in the application program but
increases the size of the BMS DSECT, at worst by one padding byte per map data
field, and marginally increases the internal pathlength of BMS in processing the
map. The best approach, therefore, is to use unaligned maps, except where the
compiler being used would generate inefficient application program code.
Some of the VS COBOL compilers have an option that does not generate the extra
copy statements associated with an unsynchronized structure, but other COBOL
compilers do. If this option is available, it should be specified because you do not
then need aligned maps.
Limitations
In CICS, BMS maps are always generated in groups (“map sets”). An entire map
set must be defined as aligned or unaligned. Also, maps may be used by
application programs written in a variety of languages. In these cases, it is
necessary to choose the alignment option that suits all the programs that use
the map set.
How implemented
Map alignment is defined when maps are assembled. Aligned maps use the
SYSPARM(A) option. The BMS=ALIGN/UNALIGN system initialization parameter
defines which type of map is being used.
The map and map set alignment option can also be specified when maps and map
sets are defined using the screen definition facility (SDF II) licensed program
product. For more information, see the Screen Definition Facility II Primer for
CICS/BMS Programs.
How monitored
The importance of map alignment may be found by inspecting programs that
handle screens with a large number of fields. Try recompiling the program when
the BMS DSECT is generated first without, and then with, the map alignment
option. If the program size, as indicated in the linkage edit map, drops
significantly in the second case, it is reasonable to assume there is high overhead
for the unaligned maps, and aligned maps should be used if possible.
Effects
Any program defined in the CSD is loaded into the CDSA, RDSA, SDSA, ECDSA,
ERDSA, or ESDSA on first usage. RELOAD(YES) programs cannot be shared or
reused. A program with RELOAD(YES) defined is only removed following an
explicit EXEC CICS FREEMAIN. USAGE(TRANSIENT) programs can be shared,
but are deleted when the use count falls to zero. RESIDENT(NO) programs become
eligible for deletion when the use count falls to zero. The CICS loader domain
progressively deletes these programs as DSA storage becomes shorter, on a
least-recently-used basis.
On a CICS warm start, an initial free area for the various resident program
subpools is allocated. The size of this area is based on the total lengths of all
currently loaded resident programs as recorded during the preceding CICS
shutdown. When a resident program is loaded, CICS attempts to fit it into the
initial free area. If it does not fit, it is loaded outside the initial free area, and the
space inside the initial free area remains unallocated until other (smaller) resident
programs are loaded into it.
Recommendations
Because programs that are not in use are deleted on a least-recently-used (LRU)
basis, they should be defined as RESIDENT(NO) unless there are particular reasons
to favor particular programs by keeping them permanently resident. Variations in
program usage over time are automatically taken account of by the LRU algorithm.
For programs written to run above the 16MB line, you should be able to specify
EDSALIM large enough such that virtual storage is not a constraint.
If a program is very large or frequently updated such that its size increases,
consider defining it as non-resident and issuing a LOAD with the HOLD option as
part of PLTPI processing. The program is not then released during program
compression, and this also ensures that a significant amount of initial free
storage is not reserved for resident programs only to go unused because the new
(larger) program will not fit into it.
How monitored
The tuning objective is to optimize throughput at an acceptable response time by
minimizing virtual storage constraint. There are specific loader domain statistics
for each program.
Effects
It is possible to LINK or XCTL between 31-bit mode programs and 24-bit mode
programs. You can convert programs to 31-bit mode and move them above the
16MB line.
See the CICS Operations and Utilities Guide for information on using programs from
the LPA or extended link pack area (ELPA).
Using the ELPA is usually better than using the extended private area when
multiple address spaces are employed, because the program is already loaded
when CICS needs it, and real-storage usage is minimized.
Where useful
This facility is useful where there is demand for virtual storage up to the 16MB
line and there is sufficient real storage.
Limitations
Because the purpose of using virtual storage above the 16MB line is to make the
space below this available for other purposes, there is an overall increase in the
demand for real storage when programs are moved above the 16MB line.
How implemented
Programs that are to reside above the 16MB line must be link-edited with the
AMODE(31),RMODE(ANY) options on the MODE statement of the link-edit. See
the CICS Operations and Utilities Guide for further information.
Since the page size of the EUDSA is 1MB, EDSALIM is likely to be very large
for a CICS system which has transaction isolation active. Since this virtual storage
needs to be mapped with page and segment tables using real storage, an increase
in the real storage usage can occur. In addition to the real storage used to map the
virtual storage for the EDSALIM, subspaces also require real storage. For example:
v Each subspace requires 2.5 pages.
v Assuming each transaction in the system requires a unique subspace,
(transaction definition TASKDATAKEY(USER) and ISOLATE(YES)), real storage
required is MXT * 2.5 pages.
These figures for real storage usage are in addition to the real storage
required for a CICS system that does not have transaction isolation active.
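For illustration, a minimal Python sketch of the extra real storage needed for
subspaces, based on the figures above (a 4KB page size is an assumption made for
this sketch):

PAGE = 4096   # bytes per page; an assumption for this sketch

def subspace_real_storage(mxt):
    # One unique subspace per transaction, at 2.5 pages each.
    return mxt * 2.5 * PAGE

# MXT=100: about 1000KB of additional real storage.
print(subspace_real_storage(100) / 1024)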
PACING controls the flow of traffic from the network control program (NCP) to
the terminal and does not affect the processor activity as such. VPACING on the
other hand controls the flow of traffic between the host and the NCP.
The VPACING parameter of the CICS APPL statement determines how many
messages can be sent in a session to the VTAM application program by another
VTAM logical unit without requiring that an acknowledgment (called a “pacing
response”) be returned. The host sends data path information units (PIUs)
according to the definition of VPACING. The first PIU in a group carries a pacing
indicator in the RH. When this PIU is processed by the NCP, the NCP sends a
response to the host with the same pacing indicator set to request a new pacing
group. This means that, for every x PIUs to a terminal and every y PIUs to a
printer, the pacing response traffic must flow from the NCP to the host which,
based on the volume of traffic, could cause a significant increase in host activity.
The PACING parameter is required for most printers, to match the buffer capacity
with the speed of printing the received data. Terminals do not normally require
pacing unless there is a requirement to limit huge amounts of data to one LU, as is
the case with some graphics applications. Use of pacing to terminals causes
response time degradation. The combination of PACING and VPACING causes
both response time degradation and increased processor activity, and increased
network traffic.
Recommendations
PACING and VPACING should be specified for all terminals to prevent a
“runaway” transaction from flooding the VTAM network with messages and
requiring large amounts of buffer storage. If a transaction loops while issuing
SENDs to a terminal, IOBUF (CSA storage) and NCP buffers may fill up causing
slowdowns and CSA shortage conditions.
How implemented
For secondary to primary pacing, you must code:
v SSNDPAC=nonzero value in the LOGMODE entry pointed to by the secondary
application program
v VPACING=nonzero value on the APPL definition for the secondary application.
The value used is coded on the VPACING parameter. If either of these values is
zero, no pacing occurs.
Specify VPACING on the APPL statement defining the CICS region, and any
nonzero value for the SSNDPAC parameter on the LU statement defining the batch
device. You should ensure that the device supports this form of pacing by referring
to the component description manual for that device.
For further information on the selection criteria for values for the PACING and
VPACING parameters, see the ACF/VTAM Version 2 Planning and Installation
Reference manual.
Transaction routing, in most cases, involves one input and one output between
systems, and the overhead is minimal.
For situations where ISC is used across MVS images, consider using XCF/MRO.
XCF/MRO consumes less processor overhead than ISC.
Your sysplex configuration may offer you a choice of XCF connectivity for
XCF/MRO interregion communication. You should check whether your sysplex
allows a choice of channel-to-channel connectivity, or whether you can use
coupling facility channel links only. If you have a choice, channel-to-channel links
generally offer better performance for XCF/MRO operations.
ISC mirror transactions can be prioritized. The CSMI transaction is for data set
requests, CSM1 is for communication with IMS/ESA systems, CSM2 is for interval
control, CSM3 is for transient data and temporary storage, and CSM5 is for
IMS/ESA DB requests. If one of these functions is particularly important, it can be
prioritized above the rest. This prioritization is not effective with MRO because
any attached mirror transaction services any MRO request while it is attached.
If ISC facilities tend to flood a system, this can be controlled with the VTAM
VPACING facility. Specifying multiple sessions (VTAM parallel sessions) increases
throughput by allowing multiple paths between the systems.
CICS also allows you to specify a VTAM class of service (COS) table with LU6.2
sessions, which can prioritize ISC traffic in a network. Compare the performance of
CICS function shipping with that of IMS/ESA data sharing.
Limitations
v Use of intercommunication entails trade-offs as described in “Splitting online
systems: virtual storage” on page 284 and “Splitting online systems: availability”
on page 189.
v Increased numbers of sessions can minimally increase real and virtual storage
but reduce task life. The probable overall effect is to save storage.
v MVS cross-memory services reduce CSA and cycle requirements.
v MRO high performance facilities reduce processing requirements.
v IMS/ESA data sharing usually reduces processor requirements.
v Accessing DL/I databases via the IMS DBCTL facility reduces processor
requirements relative to function shipping.
v For MRO considerations, read about the secondary effects of the region exit
interval (ICV) on page “Region exit interval (ICV)” on page 194.
How implemented
See the CICS Transaction Server for OS/390 Installation Guide for information about
resetting the system for MRO or ISC. See also “Splitting online systems: virtual
storage” on page 284.
Both mechanisms produce the same effect on the application program which
issued the allocate; a SYSIDERR condition is returned. Return codes are also
provided to the dynamic routing program to indicate the state of the queue of
allocate requests.
The CICS Resource Definition Guide contains more description of the CEDA
commands; and the CICS Customization Guide gives programming information
about the XZIQUE exit and its relationship with the rest of CICS, including
application programs and the dynamic routing program.
Relevant statistics
For each of the queue control mechanisms CICS records the following statistics for
each connection:
v The number of allocates which were rejected due to the queue becoming too
large
v The number of times the queue was purged because the throughput was too
slow
v The number of allocates purged due to slow throughput.
“ISC/IRC system and mode entry statistics” on page 57 also contains an
explanation of these, and other connection statistics.
You should allow sufficient intersystem sessions to ensure that they are freely
available during normal running. Session definitions do not occupy excessive
storage, and the storage held by queued transactions probably outweighs the
extra storage for the sessions. The number of sessions should correspond to the peak number of
transactions in the system which are likely to use the connection—you can see the
maximum number of sessions being used from the terminal statistics for the
connection. If all sessions were used, the connections statistics show the number of
times allocates were queued compared with the total number of requests.
Even in a system that has no problems, there are significant variations in the
numbers of transactions that are active at any time, and the actual peak number
may be larger than the average over a few minutes at the peak time for your
system. You should use the average rather than the actual peak; the queueing
mechanism is intended to cope with short-term variations, and the existence of a
queue for a short time is not a cause for concern.
The start of a queue is used by the queue limiting mechanism as a signal to start
monitoring the response rate of the connection. If queues never form until there is
a big problem, the detection mechanism is insensitive. If there are always queues
in the system, it will be prone to false diagnosis.
You should set the queue limit to a number that is roughly the same size as the
number of sessions—within the limits imposed by MXT if there are many
connections whose cumulative queue capacity would reach MXT. In this latter case,
you might need to design your own method—using XZIQUE—of controlling
queue lengths so that the allocation of queue slots to connections is more dynamic.
The number of times the queue is purged should indicate the number of times a
serious problem occurred on the remote system. If the purges do not happen when
the remote system fails to respond, examine the setting of the MAXQTIME
parameter—it may be too high, and insensitive. If the indication of a problem is
too frequent and causes false alarms simply due to variations in response time of
the remote system, the parameter may be too low, or the QUEUELIMIT value too
low.
Effects
These tasks execute the activities needed to acquire an APPC conversation
(CLS1/2), and to resynchronize units of work for MRO and APPC connections
(CLQ2). Usually there are not many tasks, and they need no control. However, if
your CICS system has many connection definitions, these may be acquired
simultaneously as a result of initializing the system at startup, or as a result of a
SET VTAM OPEN, or SET IRC OPEN command.
How implemented
The system definitions are optional. Install resource group DFHISCT to activate
them. As supplied, the MAXACTIVE parameter in the DFHTCLSX and
DFHTCLQ2 is 25. This should give sufficient control to prevent the system
reaching a short-on-storage situation. (Tasks CLS1 and CLS2 each require 12K of
dynamic storage, and CLQ2 tasks require up to 17K.) The purge threshold should
not be set to a non-zero number, and MAXACTIVE should not be set to 0; either
setting prevents CICS from executing tasks necessary to intersystems functions.
It is not advisable to set the MAXACTIVE value too low because network delays
or errors may cause one of the tasks in the TCLASS to wait and block the use of
the TCLASS by succeeding transactions. Setting a low value can also extend
shutdown time in a system with a large number of connections.
Effects
The IOAREALEN value controls the length of the TIOA which is used to build a
message transmitted to the other CICS system (that is, an outgoing message).
Two values (value1 and value2) can be specified. Value1 specifies the initial size of
the TIOA to be used in each session defined for the MRO connection. If the size of
the message exceeds value1, CICS acquires a larger TIOA to accommodate the
message.
Only one value is required; however, if value2 is specified, CICS uses value2
whenever the message cannot be accommodated by value1.
A value of zero causes CICS to get a storage area exactly the size of the outgoing
message, plus 24 bytes for CICS requirements.
Where useful
The IOAREALEN attribute can be used in the definition of sessions for either MRO
transaction routing or function shipping. In the case of MRO transaction routing,
the value determines the initial size of the TIOA, whereas the value presents some
tuning opportunities in the MRO function shipping environment.
Limitations
Real and virtual storage can be wasted if the IOAREALEN value is too large for
most messages transmitted on your MRO link. If IOAREALEN is smaller than
most messages, or zero, excessive FREEMAIN and GETMAIN requests can occur,
resulting in additional processor requirements.
Recommendations
For optimum storage and processor utilization, IOAREALEN should be made
slightly larger than the length of the most commonly encountered formatted
application data transmitted across the MRO link for which the sessions are
defined. For efficient operating system paging, add 24 bytes for CICS requirements
and round the total up to a multiple of 64 bytes; the value to specify is then
that multiple of 64 bytes minus the 24 bytes for CICS requirements. This ensures
a good use of operating system pages.
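For illustration, a minimal Python sketch of this sizing rule (the function
name is invented for this example):

def ioarealen(common_msg_len):
    total = common_msg_len + 24            # add CICS requirements
    rounded = ((total + 63) // 64) * 64    # next 64-byte multiple
    return rounded - 24                    # the value to specify

# A commonly encountered 500-byte message suggests
# IOAREALEN(552), since 552 + 24 = 576, a multiple of 64.
print(ioarealen(500))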
How implemented
The TIOA size can be specified in the IOAREALEN attribute of the SESSIONS
definition.
Effects
Compared to no batching (MROBTCH=1, that is, the default), setting
MROBTCH=n has the following effects:
v Up to [(n−1)*100/n]% saving in the processor usage for waiting and posting of
that TCB. Thus, for n=2, 50% savings may be achieved, for n=3, 66% savings, for
n=6, 83% savings, and so on. (A sketch after this list illustrates the arithmetic.)
v An average cost of (n+1)/2 times the average arrival time for each request
actually batched.
v Increased response time may cause an increase in overall virtual storage usage
as the average number of concurrent transactions increases.
v In heavily loaded systems at peak usage, some batching can happen as a natural
consequence of queueing for a busy resource. Using a low MROBTCH value
greater than one may then decrease any difference between peak and off-peak
response times.
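For illustration, a minimal Python sketch of the batching arithmetic above (the
function names are invented for this example):

def wait_post_saving_pct(n):
    # Up to [(n-1)*100/n]% saving in wait/post processor usage.
    return (n - 1) * 100 / n

def avg_added_delay(n, avg_interarrival_secs):
    # Average cost of (n+1)/2 times the average arrival time.
    return (n + 1) / 2 * avg_interarrival_secs

print(wait_post_saving_pct(6))    # about 83%
print(avg_added_delay(6, 0.02))   # 0.07 seconds at 50 requests/second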
A relatively low ICV value is also needed with MROBTCH to maintain reasonable
response time during periods of low utilization.
Recommendations
Depending on the amount of response time degradation you can afford, you can
set MROBTCH to different values using either CEMT or EXEC CICS SET SYSTEM
MROBATCH(value).
During slow periods, the ICV unconditionally dispatches the region even if the
batch is not complete, and so limits the batching delay. In this case, set ICV
to 500 milliseconds in each region.
Setting MROLRM=NO causes the mirror to be attached and detached for each
function-shipped request until the first request for a recoverable resource or a file
control start browse is received. After such a request is received, the mirror
remains attached to the session until the calling transaction reaches syncpoint.
Effects
The DSHIPIDL system initialization parameter determines the period of time for
which a shipped terminal definition is allowed to remain inactive before it may be
flagged for deletion. The DSHIPINT system initialization parameter determines the
time interval between invocations of the CRMF transaction. CRMF examines all
shipped terminal definitions to determine which of them have been idle for longer
than the time interval specified by DSHIPIDL. If CRMF identifies any redundant
terminal definitions, it invokes CRMD to delete them.
Where useful
The CRMF/CRMD processing is most effective in a transaction routing
environment in which there may be shipped terminal definitions in an AOR which
remain idle for considerable lengths of time.
Limitations
After CRMF/CRMD processing has deleted a shipped terminal definition, the
terminal definition must be re-shipped when the terminal user next routes a
transaction from the TOR to the AOR. Take care, therefore, not to set DSHIPIDL to
a value that is low enough to cause shipped terminal definitions to be frequently
deleted between transactions. Such processing could incur CPU processing costs,
not just for the deletion of the shipped terminal definition, but also for the
subsequent re-installation when the next transaction is routed.
Consider that a large value chosen for DSHIPINT influences the length of time
that a shipped terminal definition survives. The period of time for which a shipped
terminal definition remains idle before deletion is extended by an average of half
of the DSHIPINT value. This occurs because a terminal, after it has exceeded the
limit for idle terminals set by the DSHIPIDL parameter, has to wait (for half of the
DSHIPINT interval) before CRMF is scheduled to identify the terminal definition
as idle and flag it for CRMD to delete. When the DSHIPINT interval is
significantly longer than the DSHIPIDL interval (which is the case if the default
values of 120000 for DSHIPINT and 020000 for DSHIPIDL are accepted),
DSHIPINT becomes the dominant factor in determining how long an idle shipped
terminal definition survives before being deleted.
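For illustration, a minimal Python sketch of the average survival time of an
idle shipped terminal definition, per the description above (the function name
is invented for this example):

def avg_survival_hours(dshipidl_hours, dshipint_hours):
    # The idle time limit plus, on average, half a CRMF interval.
    return dshipidl_hours + dshipint_hours / 2

# Defaults: DSHIPIDL=020000 (2 hours), DSHIPINT=120000 (12 hours):
print(avg_survival_hours(2, 12))   # 8.0 hours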
Recommendations
Do not assign too low a value to DSHIPIDL. The storage occupied by the shipped
terminal definitions is not normally a concern, so the default value, which specifies
a maximum idle time of 2 hours, is reasonable, unless other concerns (such as
security) suggest that it should be shorter.
Decide whether you wish to delete idle shipped terminal definitions incrementally
or altogether. CRMF processing in itself causes negligible CPU overhead, so a low
value for DSHIPINT may be specified at little cost, provided that a sensible value
for DSHIPIDL has been chosen. Specifying a low value for DSHIPINT, so that
CRMF runs frequently, deletes idle definitions incrementally rather than in larger,
less frequent batches.
How implemented
The maximum length of time for which a shipped terminal definition may remain
idle before it can be flagged for deletion is specified by the CICS system
initialization parameter DSHIPIDL. The interval between scans to test for idle
definitions is specified by the CICS system initialization parameter DSHIPINT.
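For example, to code the default values explicitly (a two-hour idle limit, with the
CRMF scan every twelve hours), the SIT entries, in hhmmss form, would be:
   DSHIPIDL=020000,
   DSHIPINT=120000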
How monitored
The CICS terminal autoinstall statistics provide information on the current setting
of the DSHIPINT and DSHIPIDL parameters, the number of shipped terminal
definitions built and deleted, and the idle time of the shipped terminal definitions.
Where only one version of a map is involved, it is optional whether the device
type suffix is coded. If the DDS option is being used, it is more efficient to use the
device suffixes than to leave the suffix blank. This is because, if the DDS option
applies, CICS first looks for a map set with a suffix name and then searches again
for a map with a blank suffix. Processor cycle requirements are reduced by
eliminating the second table lookup.
Effects
If only one device type is used with all maps in a CICS system and all devices
have the same screen size, CICS can be initialized to look for a blank suffix, thus
eliminating the second lookup.
If the map is to be used with multiple device types, multiple suffixed maps
generated from the same basic source are needed, because the device type must be
specified; suffixing is required in this case.
Recommendation
If you decide that you need device-dependent suffixing, you should suffix all your
map sets. If you do not need it, use blank suffixes (no suffix at all) and specify the
NODDS option in BMS.
How implemented
Maps are named in the link-edit process. These names are defined in the MAPSET
definition. Specifying NODDS in the BMS system initialization parameter means
that device-dependent suffixing is not used in CICS.
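As a sketch, a region whose maps are all unsuffixed might specify:
   BMS=(STANDARD,,NODDS)
The first operand and the positional comma are shown only to indicate where
NODDS appears in the BMS operand list; check the full syntax in the CICS System
Definition Guide.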
How monitored
No direct measurement of map suffixing is given.
COBOL: RESIDENT option
Effects
The CICS translator automatically inserts a CBL card with the RESIDENT option.
NODYNAM must be specified.
Under the RESIDENT option, any library routines required by the COBOL
program are not link-edited. Instead, COBOL initialization code tries to locate them
when the program (or subprogram) is first invoked, by issuing MVS LOAD
macros. If the library routines have been set up in the link pack area (LPA), MVS
simply passes the routine addresses to COBOL, which then inserts them into the
COBOL program, without any extra I/O. But if the routines are not in the LPA,
they have to be loaded from the disk libraries (STEPLIB, JOBLIB, and LINKLIB).
These modules are loaded from the LINKLIB, any LNKLST data set, JOBLIB,
STEPLIB, or perhaps from the LPA, but not from DFHRPL (COBOL does not know
anything about DFHRPL). The MVS LOAD is not issued with any DCB parameter,
so only the standard MVS LOAD hierarchy can be used. The routines should not
be defined in the CSD, because the CSD is not known to COBOL either.
RES,NODYNAM is a COBOL feature, not a CICS feature.
In the CICS environment, many of the required COBOL library routines can be
placed in the LPA; this gives an environment similar to that of the PL/I shared
library. This reduces the size of each COBOL program according to its subroutine
requirement. The saving in space normally offsets any extra space needed for one
copy of each library subroutine in the LPA (assuming the modules have not
already been put in the LPA for use by batch COBOL programs).
ILBCBL00 must not be in the LPA, however. It is loaded into the user’s region and
has a set of vector addresses for the other modules that may be used by the
COBOL programs within that region. When a new module is requested, its address
is not known and the COBOL interface routine loads that module and places its
address in the list to be used again, as explained above.
If the COBOL RESIDENT option is used under CICS and the desired COBOL
subroutines are not LPA-resident, an MVS LOAD is issued for each such
subroutine the first time it is referenced in the region.
Using the RESIDENT option saves real and virtual storage. Approximately 3.5KB
of storage can be saved per program, depending on the release of COBOL used
and what subroutines are referenced. These subroutines are loaded the first time
they are referenced, or can reside in the LPA and be shared by all programs that
reference them.
Note that the requirement is that the ILBxxxxx routines be reentrant, not the
application code. This feature is purely a COBOL feature and CICS does not have
any specific code to support it.
The ability of CICS to share code between multiple concurrent users is based on
pseudoreentrance. This means that the program must be reentrant at the time you
pass control to CICS with a command but not between times.
Limitations
Recompilation of programs is required for programs not using these options.
Recommendations
Ensure that the resident subroutines are placed in the link pack area so that an
MVS LOAD is not incurred the first time they are referenced.
How implemented
Resident subroutines are implemented by specifying RESIDENT and NODYNAM
when the program is compiled.
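For example, where the options are not inserted for you, a CBL (PROCESS) card
at the front of the source program can supply them:
   CBL RES,NODYNAM
The exact option spelling varies with the COBOL compiler release, so treat this as
illustrative and check your compiler documentation.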
How monitored
A link-edit map shows storage savings. RMF shows overall real and virtual storage
usage.
PL/I: shared library
PL/I resident library routines can be shared between multiple CICS PL/I
programs, rather than being compiled into each separate PL/I application
program. This can save real and virtual storage, the amount depending on the
number of resident library routines that each program uses.
How implemented
To run PL/I application programs with the PL/I shared library, ensure that you
generate the PL/I shared library modules. CICS looks for the presence of the
significant shared library interface routines at startup time.
How monitored
A link-edit map shows storage savings. RMF shows overall real and virtual storage
usage.
VS COBOL II
VS COBOL II programs can be loaded above the 16MB line and can also use most
working storage above the 16MB line.
VS COBOL II has library modules that are grouped together in COBPACKs that
need to be defined to CICS. COBPACKs can be tailored by the installation.
How implemented
One item of tailoring recommended is to have COBPACKs IGZCPPC and
IGZCPAC contain only AMODE(31), RMODE(ANY) modules so that they may be
loaded above the 16MB line.
VS COBOL II also has a tuning mechanism in IGZTUNE which specifies the initial
amount of storage to be GETMAINed below (and above) the 16MB line.
This mechanism should be used, defining an optimum value that does not:
v Waste space below the 16MB line
v Incur unnecessary additional GETMAINs because the initial amount was too
small.
How monitored
Use of IGZOPT allows VS COBOL II to report on the effect of IGZTUNE options.
| Production systems should ensure that such reporting is turned off.
|
| Language Environment (LE)
| Language Environment (LE) conforming CICS applications issuing EXEC CICS
| LINK requests cause an increase in system pathlength. Repeated EXEC CICS LINK
| calls to the same LE-conforming program result in multiple
| GETMAIN/FREEMAIN requests for run-time unit work areas (RUWAs).
| RUWAPOOL(YES) results in the creation of a run-unit work area pool during task
| initialization. This pool is used to allocate RUWAs required by LE-conforming
| programs, and so reduces the number of GETMAIN and FREEMAIN requests in
| tasks that issue repeated EXEC CICS LINK requests.
| For more information about the RUWAPOOL system initialization parameter, see
| the CICS System Definition Guide.
| If LE/370 is active in an address space, the runtime libraries of the native
| languages, such as COBOL and PL/I, are not needed. This means that CICS has a
| single interface to all the language run times.
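| A minimal sketch of the corresponding SIT entry:
|    RUWAPOOL=YES
| As noted above, this is most likely to help applications that repeatedly issue
| EXEC CICS LINK to the same LE-conforming programs.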
Effects
If main temporary storage is used, requests to a TS queue are serialized, and the
storage is allocated from the ECDSA.
Limitations
Increasing the use of main temporary storage, using a larger CI size, or increasing
the number of buffers increases the virtual storage requirements of the ECDSA and
the corresponding real storage requirements.
If you use auxiliary temporary storage, a smaller CI size can reduce the real
storage requirements.
Recommendations
Main temporary storage
Temporary storage items are stored in the ECDSA above the 16MB line. No
recovery is available. Queues are locked for the duration of the TS request.
The fact that temporary storage items are stored in main storage also means that
there is no associated I/O, so we recommend main temporary storage for
short-duration tasks with small amounts of data.
Auxiliary temporary storage
Temporary storage I/O occurs only when a record is not in the buffer, when a
new buffer is required, or when dictated by recovery requirements.
The use of secondary extents allows more efficient use of DASD space. You can
define a temporary storage data set with a primary extent large enough for normal
activity, and with secondary extents for exceptional circumstances, such as
unexpected peaks in activity.
It follows that you can reduce or eliminate the channel and arm contention that is
likely to occur because of heavy use of temporary storage data.
The use of multiple buffers also increases the likelihood that the control interval
required by a particular request is already available in a buffer. This can lead to a
significant reduction in the number of input/output requests (VSAM requests) that
have to be performed. (However, VSAM requests are always executed whenever
their use is dictated by recovery requirements.) Note that although the use of a
large number of buffers may greatly improve performance for non-recoverable TS
queues, the associated buffers still have to be flushed sequentially at CICS
shutdown, and that might take a long time.
The number of buffers that CICS allocates for temporary storage is specified by the
system initialization parameter, TS.
In general, you should aim to minimize the number of times that a task has to
wait either because no space in buffers is available to hold the required data or
because no string is available to accomplish the required I/O. The trade-off here is
between improvement of temporary storage performance and increased storage
requirements. Specifying a large number of buffers may decrease temporary
storage I/O but lead to inefficient usage of real storage and increased paging.
VSAM requests are queued whenever the number of concurrent requests exceeds
the number of available strings. Constraints caused by this can thus be relieved by
increasing the number of available strings, up to a maximum equal to the number
of buffers.
The number of VSAM strings that CICS allocates for temporary storage is specified
by the system initialization parameter, TS.
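For example, a region making moderate use of auxiliary temporary storage might
specify:
   TS=(,8,4)
requesting eight buffers and four strings. The figures are illustrative; refine them
using the temporary storage statistics described under “How monitored”.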
Because temporary storage can use records larger than the control interval size, the
size of the control intervals is not a major concern, but there is a performance
overhead in using temporary storage records that are larger than the CI size.
The control interval size should be large enough to hold at least one (rounded up)
temporary storage record, including 64 bytes of VSAM control information for
control interval sizes less than, or equal to, 16 384, or 128 bytes of control
information for larger control interval sizes. For further information about the
effect of the control interval size for CICS temporary storage, see the CICS System
Definition Guide.
How implemented
Temporary storage items can be stored either in main storage or in auxiliary
storage on DASD. Main-only support can be forced by specifying TS=(,0) (zero
temporary storage buffers) in the SIT.
How monitored
The CICS temporary storage statistics show records used in main and auxiliary
temporary storage. These statistics also give buffer and string information and data
on I/O activity. RMF or the VSAM catalog gives additional information on data set
performance.
| If recovery is used for auxiliary temporary storage, PREFIX (called QUEUE name
| by the application programmer) is enqueued for DELETEQ TS and WRITEQ TS
| requests but not READQ TS. In a high-activity system, PREFIX should be
| monitored to ensure that a given PREFIX identifier is not a resource that is
| constraining your transaction throughput.
Note: If the NOSPACE condition is not handled, the task is suspended until
temporary storage becomes available. If the NOSPACE condition is
handled (through the use of the HANDLE CONDITION NOSPACE
command, the use of RESP on the WRITEQ TS command, or the
WRITEQ TS NOSUSPEND command), the user receives control when the
condition occurs, and can then decide whether to end the transaction
normally, abend, or wait.
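The following COBOL fragment is a sketch of testing for NOSPACE with RESP;
the queue name and the WS- data items are hypothetical:
   EXEC CICS WRITEQ TS QUEUE('MYTSQ')
        FROM(WS-DATA) LENGTH(WS-LEN)
        RESP(WS-RESP)
   END-EXEC.
   IF WS-RESP = DFHRESP(NOSPACE)
      PERFORM QUEUE-FULL-ACTION
   END-IF.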
Number of TS buffers
This is controlled by the second parameter of the TS system initialization
parameter.
Number of TS strings
This is controlled by the third parameter of the TS system initialization
parameter.
When the end of the data set is reached, WRITEQ requests are directed back to
the start of the data set. DFHTSP maintains a bytemap representing the free space
available within each control interval in the data set at any time. DFHTSP now
starts interrogating the bytemap to find a control interval that can accommodate
new data at, or near to, the start of DFHTEMP. The reasoning behind this is that,
by now, queues written earlier in the CICS run could have been deleted. Such
deleted data remains in control intervals but is no longer required. If the bytemap
shows that a control interval contains enough space, DFHTSP reads it into a
temporary storage buffer, compresses it to move all valid records to the start of
the control interval, and uses the remaining contiguous space to store the data
from the new request.
Local TS queues offer less performance overhead than a QOR. However, local
queues can cause intertransaction affinities, forcing affected transactions to run in
the same AOR so that they can access the local queue. This affects performance by
inhibiting dynamic routing and preventing workload balancing across the AORs in
the sysplex. Intertransaction affinities can be managed by a workload management
function provided by CICSPlex SM, but you must provide intertransaction
affinities definitions for the affected transactions. The CICS/ESA 3.3 XRF Guide
gives guidance about determining where the affinities are in your application
programs. Temporary storage data sharing removes the need for the time and
effort that this systems management demands by avoiding intertransaction affinity.
In general, the overall workload balancing benefits provided by being able to use
dynamic transaction routing to any AOR should outweigh any overhead incurred
by the temporary storage servers.
Recovery options
Recovery can affect the length of time for which a transient data record is
enqueued. You can specify one of three options:
1. No recovery. If you specify no recovery, there is no logging and no enqueuing
to protect resources.
2. Physical recovery. Specify physical recovery when you need to restore the
intrapartition queue to the status that it had immediately before a system
failure. The main performance consideration is that there is no deferred
transient data processing, which means that automatic task initiation may occur
instantaneously. Records that have been written may be read by another task
immediately. CIs are released as soon as they have been exhausted. For every
WRITEQ TD request, the CI buffer is written to the VSAM data set.
In summary, physical recovery ensures that records are restored in the case of a
system failure, while logical recovery also ensures integrity of records in the case
of a task failure, and ties up the applicable transient data records for the length of
a task that enqueues on them.
Up to 32767 buffers and 255 strings can be specified for a transient data set, with
serial processing only through a destination.
The use of multiple buffers also increases the likelihood that the control interval
required by a particular request is already available in a buffer. This can lead to a
significant reduction in the number of real input/output requests (VSAM requests)
that have to be performed. (However, VSAM requests are always executed
whenever their use is dictated by the requirements of physical and logical
recovery.)
The number of buffers that CICS allocates for transient data is specified by the TD
system initialization parameter. The default is three.
The provision of multiple buffers allows CICS to retain copies (or potential copies)
of several VSAM CIs in storage. Several transient data requests to different queues
can then be serviced concurrently using different buffers. Requests are serialized by
queue name, not globally. Multiple buffers also allow the number of VSAM
requests to the transient data data set to be reduced, by increasing the likelihood
that the control interval required by a request is already in a buffer.
The benefits of multiple buffers depend on the pattern and extent of usage of
intrapartition transient data in an installation. For most installations, the default
specification (three buffers) should be sufficient. Where the usage of transient data
is extensive, it is worthwhile to experiment with larger numbers of buffers. The
buffer statistics give sufficient information to help you determine a suitable
allocation. In general, the aim of the tuning should be to minimize the number of
times a task must wait because no buffers are available to hold the required data.
VSAM requests are queued whenever the number of concurrent requests exceeds
the number of available strings. Constraints caused by this can be relieved by
increasing the number of available strings, up to a maximum of 255. The limit of
255 on the number of strings should be taken into consideration when choosing
the number of buffers. If the number of buffers is more than the number of strings,
the potential for string waits increases.
The number of VSAM strings that CICS allocates for transient data is specified by
the TD system initialization parameter. The CICS default is three.
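For example, a region making heavy use of intrapartition transient data might
specify:
   TD=(8,5)
requesting eight buffers and five strings. The defaults for both are three; the
transient data statistics show whether larger values are justified.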
Logical recovery
Logging and enqueuing occur with logical recovery transactions (including
dynamic backout of the failing task’s activity on the transient data queue). Logical
recovery would generally be used when a group of records have to be processed
together for any reason, or when other recoverable resources are to be processed in
the same task.
During processing of the transient data request, the destination queue entry is
enqueued from the first request, for either input or output, or both (if the queue is
to be deleted), until the end of the UOW. This means that none of the other tasks
can access the queue for the same purpose during that period of time, thus
maintaining the integrity of the queue’s status.
At the end of the UOW (syncpoint or task completion), syncpoint processing takes
place and the queue entry is logged. Any purge requests are processed (during the
UOW, a purge only marks the queue ready for purging). The empty CIs are
released for general transient data use. Any trigger levels reached during the UOW
now take effect, so that any deferred automatic task initiation occurs at this point.
The DEQueue on the queue entry occurs, releasing the queue for either input or
output processing by other tasks. Records written by a task can then be read by
another task.
Logging activity
With physical recovery, the queue entry is logged after each READQ, WRITEQ, and
DELETEQ, and at an activity keypoint time (including the warm keypoint).
With logical recovery, the queue entry is logged at syncpoint and at activity
keypoint time (including the warm keypoint).
The use of secondary extents allows more efficient use of DASD space. You can
define an intrapartition data set with primary extents large enough for normal
activity, and with secondary extents for exceptional circumstances, such as
unexpected peaks in activity.
It follows that you can reduce or eliminate the channel and arm contention that is
likely to occur because of heavy use of intrapartition transient data.
For extrapartition transient data sets, which CICS accesses using QSAM, try to
eliminate or minimize the occurrences of CICS region waits by:
v Having sufficient buffering and blocking of the output data set
v Avoiding volume switching by initially allocating sufficient space
v Avoiding dynamic OPEN/CLOSE during peak periods.
Indirect destinations
To avoid specifying extrapartition data sets for the CICS-required entries (such as
CSMT and CSSL) in CSD definitions for TDQUEUES, you are recommended to use
indirect destinations for combining the output of several destinations to a single
destination. This saves storage space and internal management overheads.
Limitations
Application requirements may dictate a lower trigger level, or physical or logical
recovery, but these facilities increase processor requirements. Real and virtual
storage requirements may be increased, particularly if several buffers are specified.
How implemented
Transient data performance is affected by the TRIGGERLEVEL and RECOVSTATUS
operands in the transient data resource definitions that have been installed.
Recommendations
Suggestions for reducing WAITS during QSAM processing (a sample DD
statement follows this list) are to:
v Avoid specifying a physical printer.
v Use single extent data sets whenever possible to eliminate WAITS resulting from
the end of extent processing.
v Avoid placing data sets on volumes subject to frequent or long duration
RESERVE activity.
v Avoid placing many heavily-used data sets on the same volume.
v Choose BUFNO and BLKSIZE such that the rate at which CICS writes or reads
data is less than the rate at which data can be transferred to or from the volume,
for example, avoid BUFNO=1 for unblocked records whenever possible.
v Choose an efficient BLKSIZE for the device employed such that at least 3 blocks
can be accommodated on each track.
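A hypothetical DD statement for an extrapartition output data set, following the
suggestions above (all names and figures are illustrative):
   //EXTRAOUT DD DSN=CICSTS.EXTRA.REPORT,DISP=(NEW,CATLG),
   //            UNIT=SYSDA,SPACE=(CYL,(10,5)),
   //            DCB=(RECFM=FB,LRECL=132,BLKSIZE=13200,BUFNO=5)
Here each block holds 100 records and five buffers are available, reducing the
chance that CICS must wait on a single buffer.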
| How implemented
| Global ENQ/DEQ uses OS/390 global resource serialization (GRS) services to
| achieve locking that is unique across multiple MVS images in a sysplex. GRS can
| be configured as either GRS=STAR or GRS=RING.
| Recommendations
| When GRS is initialized as a star configuration, all the information about resource
| serialization is held in the ISGLOCK coupling facility structure. GRS accesses the
| coupling facility when a requestor issues an ENQ or DEQ on a global resource
| name.
Monitoring data is useful for performance, tuning, and for charging your users for
the resources they use. See “Chapter 6. The CICS monitoring facility” on page 65
for further information.
Limitations
Performance class monitoring can be a significant overhead. The overhead is likely
to be about 5 to 10%, but is dependent on the workload.
Recording of the above information incurs overhead, but, to tune a system, both
performance and exception information may be required. If this is not a daily
process, the CICS monitoring facility may not need to be run all the time. When
tuning, it is necessary to run the CICS monitoring facility during peak volume
times because this is when performance problems occur.
Consider excluding fields from monitoring records if overuse of the SMF data set
is a potential problem.
How implemented
To implement CICS monitoring, you can set the system initialization
parameters (MN, MNEXC, and MNPER); see the CICS System Definition Guide.
You can change the settings dynamically using either CEMT INQUIRE|SET
MONITOR or EXEC CICS INQUIRE|SET MONITOR. See “Controlling CICS
monitoring” on page 72 for more information. Alternatively see the CICS Supplied
Transactions manual for details of CEMT, and the CICS System Programming
Reference manual for programming information about INQUIRE and SET
commands.
For further information about using the CICS monitoring facility, see “Chapter 6.
The CICS monitoring facility” on page 65.
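For example, to collect both performance class and exception class data from
startup, the SIT might include:
   MN=ON,
   MNPER=ON,
   MNEXC=ON
Monitoring can later be switched off dynamically, for example with CEMT SET
MONITOR OFF. This is a sketch; see the manuals cited above for the full syntax.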
How monitored
CICS Monitoring Domain statistics show the number of records produced of each
type. These statistics monitor CMF activity.
| MVS address space or RMF data can be gathered whether or not the CICS
| monitoring facility is active to give an indication of the performance overhead
| incurred when using the CICS monitoring facility.
CICS trace
CICS trace is used to record requests made by application programs to CICS for
various services. Because this involves the recording of these requests each time
they occur, the overhead depends on the frequency of the requests.
The CICS internal trace table resides in MVS virtual storage above the 16MB line
(but not in the EDSAs).
A trace table always exists and is used for recording exception conditions useful
for any first failure data capture. Other levels of trace are under the control of the
user; a large number of system initialization parameters, together with the CETR
and CEMT transactions, allow dynamic control over tracing.
Buffer allocation may also take place at execution time in response to a CETR or
CEMT transaction request to set auxiliary trace to START (CEMT SET AUXTRACE
START) or simply to open the auxiliary trace data set. For more information, see
the CEMT SET AUXTRACE section in CICS Supplied Transactions manual.
Limitations
Running trace increases processing requirements considerably. Not running trace,
however, reduces the amount of problem determination information that is
available.
The additional cost of auxiliary trace is mainly due to the I/O operations.
Auxiliary trace entries vary in size, and they are written out in blocks of 4KB. Twin
buffers are used but, even if the I/O can be overlapped, the I/O rate is quite large
for a busy system.
When you use CICS auxiliary trace, you may need to decrease the relevant
DSALIM system initialization parameter by 8KB to ensure that adequate address
space is given up to the operating system to allow for the allocation of the two
4KB auxiliary trace buffers.
Recommendations
The trace table should be large enough to contain the entries needed for debugging
purposes.
CICS always produces some trace entries for first failure data capture, regardless
of the trace settings. Beyond this, most of the tracing overhead can be reduced by
running with the following options:
v Internal tracing off
v Auxiliary tracing on
v Print auxiliary trace data only when required.
CICS allows tracing on a transaction basis rather than a system basis, so the trace
table requirements can be reduced.
How implemented
Trace activation is specified with the INTTR system initialization parameter or as a
startup override.
The size of the trace table is specified by the TRTABSZ system initialization
parameter or as a startup override. The minimum size is 16KB.
With CICS initialized and running, internal trace and auxiliary trace can be turned
on or off, independently and in either order, with one of the following: CETR,
CEMT SET INTTRACE START, or CEMT SET AUXTRACE START commands.
Auxiliary trace entries are recorded only when internal trace is active.
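As a sketch, a region could reserve a larger-than-minimum internal table for first
failure data capture and have auxiliary trace active from startup, with automatic
switching of the auxiliary trace data sets:
   TRTABSZ=64,
   AUXTR=ON,
   AUXTRSW=NEXT
TRTABSZ is specified in kilobytes; the values are illustrative only.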
How monitored
No direct measurement of trace is given. RMF can show processing and storage
requirements.
CICS recovery
Some types of recoverable resources, when they are accessed for update, cause
logging. Do not define more resources as recoverable than you need for application
programming requirements, because the extra logging incurs extra I/O and
processor overheads. If the resource in question does not require recovery, these
overheads are unproductive.
Limitations
Specifying recovery increases processor time, real and virtual storage, and I/O
requirements. It also increases task waits arising from enqueues on recoverable
resources and system log I/O, and increases restart time.
Recommendation
Do not specify recovery if you do not need it. If the overhead is acceptable,
logging can be useful for auditing, or if a data set has to be rebuilt.
How implemented
See the CICS Recovery and Restart Guide for information on each resource to be
specified as recoverable.
How monitored
CICS auxiliary trace shows task wait time due to enqueues. RMF shows overall
processor usage. CICS monitoring data shows task wait time due to journaling.
CICS security
CICS provides an interface for an external security manager (ESM), such as RACF,
for three types of security: transaction, resource, and command security.
Limitations
Protecting transactions, resources, or commands unnecessarily increases both
processor cycles and real and virtual storage requirements.
Recommendations
Because transaction security is enforced by CICS, it is suggested that the use of
both resource security and command security should be kept to the minimum. The
assumption is that, if operators have access to a particular transaction, they
therefore have access to the appropriate resources.
How implemented
Resource security is defined with the RESSEC(YES) attribute in the
TRANSACTION definition.
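For example, an RDO sketch for one transaction that does need resource security
checking (the transaction, group, and program names are hypothetical):
   CEDA DEFINE TRANSACTION(TRN1) GROUP(MYGRP)
        PROGRAM(PROG1) RESSEC(YES)
Transactions left with the default RESSEC(NO) incur no resource security
checking.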
How monitored
No direct measurement of the overhead of CICS security is given. RMF shows
overall processor usage.
Storage protect
Protects CICS code and control blocks from being accidentally overwritten by user
applications.
Command protection
Ensures that an application program cannot pass storage to CICS, on the EXEC
CICS interface, that CICS is asked to update but that the application itself cannot
update.
Recommendation
Storage protection, transaction isolation, and command protection protect storage
from user application code. They add no benefit to a region where no user code is
executed; that is, a pure TOR or a pure FOR (where no DPL requests are
function-shipped).
Storage below the 16MB line is activated in multiples of 4KB. Storage above the
line is activated in multiples of 1MB. A user task rarely requires more than 1MB of
storage. So a user task that executes completely above the line only requires one
activate.
| Recommendations
| Since a BTS transaction may comprise many separate CICS transactions and may
| also span a considerable execution time, there are no specific performance
| recommendations for BTS transactions. There are, however, some useful general
| observations.
| How implemented
| To support BTS function, CICS keeps data in new types of data sets: the local
| request queue (DFHLRQ) and the BTS repository. The local request queue data set is
| used to store pending BTS requests. Each CICS region has its own data set. It is a
| recoverable VSAM KSDS and should be tuned for best performance like a VSAM
| KSDS. You may have one or more BTS repositories. A BTS repository is a VSAM
| KSDS and is used to hold state data for processes, activities, containers, events and
| timers. A BTS repository is associated with a process through the PROCESSTYPE
| definition. If a BTS process executes on more than one CICS region, then the BTS
| repository needs to be shared between those regions. It would need to be a VSAM
| RLS file. It too should be tuned for best performance as a VSAM RLS file.
| To support the execution of the BTS processes, CICS runs one or many
| transactions. A BTS transaction comprises a process which itself consists of one or
| many activities. As each activity is run, so a CICS transaction is executed. If an
| activity becomes dormant, waiting for an event for example, the activity restarts
| after that event occurs, and a new CICS transaction is started, even if this is a
| continuation of the business transaction. You may see many executions of the
| transaction specified in a process definition in the CICS statistics for a single BTS
| transaction. The number of transactions executed and the number and type of file
| accesses to the BTS repository depend on how you have chosen to use BTS
| services. Examination of CICS statistics reports will give you this information for
| your applications. You should be aware that containers are stored on the BTS
| repository. You need to ensure that the repository is large enough to contain all the
| active BTS data. This is probably best done by scaling it based on a test system.
Note: All terminals are installed, even surrogate TCT entries for MRO.
You must ensure that the DFHVTAM group precedes any TERMINAL or
TYPETERM definition in your GRPLIST. It is contained in the DFHLIST
GRPLIST, so adding DFHLIST first to your GRPLIST ensures this. If you do
not do this, the programs used to build the TCT are loaded for each terminal,
thus slowing initial and cold starts.
3. You should not have more than about 100 entries in any group defined in the
CSD. Larger groups may cause unnecessary overhead during processing, as
well as making maintenance of the group more difficult.
4. Make sure that changing the START= parameter does not change the default
for any facilities that your users do not want to have AUTO-started. Any
facility that you may want to override may be specifically coded in the
PARM= on the EXEC statement, or all of them may be overridden by
specifying START=(...,ALL).
| 5. If you do not intend to make use of the CICS Web Interface, you should make
| sure that WEB=NO is specified in the SIT. If WEB=YES is specified, the Web
| domain is activated, and there is an extra read from the CICS catalog during
| the setup of the CICS Web Interface.
Free space has no effect, so do not spend time trying to tune this.
8. On cold and initial starts, CICS normally has to delete all the resource
definition records from the global catalog. You can save the time taken to do
this by using the recovery manager utility program, DFHRMUTL, described in
the CICS Operations and Utilities Guide.
v Before a cold start, run DFHRMUTL with
SET_AUTO_START=AUTOCOLD,COLD_COPY as input parameters. This
creates a copy of the global catalog data set that contains only those records
needed for a cold start. If the return code from this job step is normal, you
can replace the original global catalog with the new copy (taking an archive
of the original catalog if you wish). An example of the JCL to do this is
contained in the CICS Operations and Utilities Guide.
v Before an initial start, run DFHRMUTL with
SET_AUTO_START=AUTOINIT,COLD_COPY as input parameters, and
follow the same procedure to use the resulting catalog.
9. Allocate your DATA and INDEX data sets on different units, if possible.
10. Consider the use of autoinstalled terminals as a way of improving cold start,
even if you do not expect any storage savings. On startup, fewer terminals are
installed, thereby reducing the startup time.
11. The RAPOOL system initialization parameter should be set to a value that
allows faster autoinstall rates. For a discussion of this, see “Receive-any pool
(RAPOOL)” on page 204.
12. Specify the buffer, string, and key length parameters in the LSR pool
definition. This reduces the time taken to build the LSR pool, and also reduces
the open time for the first file to use the pool; a sketch follows this list.
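For example, an RDO sketch of an explicitly sized pool (the names and figures
are illustrative; check the attributes and their limits in the CICS Resource Definition
Guide):
   CEDA DEFINE LSRPOOL(POOL2) GROUP(MYGRP) LSRPOOLID(2)
        STRINGS(50) MAXKEYLENGTH(22)
        DATA2K(60) DATA4K(120) INDEX2K(80)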
You can use the MVS automatic restart manager to implement a sysplex-wide
integrated automatic restart mechanism. A sysplex can use ARM and VTAM
persistent sessions spread across many TORs in a generic resource set. ARM and
VTAM persistent sessions provide good recovery times in the event of a TOR
failure, and the TOR restart time is reduced because only a fraction of the network has
to be rebuilt. You can log on to the generic resource while the failed TOR restarts.
ARM provides faster restart by providing surveillance and automatic restart. The
need for operator-initiated restarts, or other automatic restart packages, is
eliminated. For more information about MVS automatic restart management, see
the CICS Transaction Server for OS/390 Installation Guide, and the OS/390 MVS
Setting up a Sysplex manual, GC28-1779.
Buffer considerations
The number of index levels can be obtained by using the IDCAMS LISTCAT
command against a GCD after CICS has been shut down. Because cold start
mainly uses sequential processing, it should not require any extra buffers over
those automatically allocated when CICS opens the file.
Note that if you have a large number of terminals autoinstalled, shutdown can fail
due to the MXT system initialization parameter being reached or CICS becoming
short on storage. To prevent this possible cause of shutdown failure, you should
consider putting the CATD transaction in a class of its own to limit the number of
concurrent CATD transactions. Also, AIQMAX can be specified to limit the number
of devices that can be queued for autoinstall. This protects against abnormal
consumption of virtual storage by the autoinstall/delete process, caused as a result
of some other abnormal event.
The AIQMAX value affects the LOGON, LOGOFF, and BIND processing
performed by CICS: if this limit is reached, CICS requests VTAM to stop
passing such requests to CICS. VTAM holds the requests until CICS indicates that
it can accept further commands (this occurs when CICS has processed a queued
autoinstall request).
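For example, the SIT might specify AIQMAX=100 (the default), and a sketch of
fencing CATD into its own class might look like this, using a copy of the supplied
transaction definition in your own group (names and values are illustrative):
   CEDA DEFINE TRANCLASS(CATDCLAS) GROUP(MYGRP) MAXACTIVE(10)
   CEDA ALTER TRANSACTION(CATD) GROUP(MYGRP) TRANCLASS(CATDCLAS)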
All five types of CICS statistics record (interval, end-of-day, requested, requested
reset, and unsolicited) present information as SMF records. The numbers used to
identify each SMF statistics record are given in the DFHSTIDS copybook.
Programming information about the formats of CICS statistics records is given in
the CICS Customization Guide.
Summary report
The statistics utility program, DFHSTUP, provides a summary report facility that
can be selected using a DFHSTUP control parameter. Information on how to run
DFHSTUP is given in the CICS Operations and Utilities Guide. When selected, the
summary report is placed after all other reports.
The summary report feature uses all of the appropriate statistic collections
contained on the SMF data set. Therefore, depending on when the summary report
feature is executed and when the SMF data set was last cleared, summary reports
may be produced covering an hour, a week, or any desired period of time. Note
that due to the potential magnitude of the summary data, it is not recommended
that a summary period extend beyond one year.
Within each of the following sections, the meaning of the summary statistics is
given. Because the summary statistics are computed offline by the DFHSTUP
utility, the summary statistics are not available to online users. Due to the potential
magnitude of the summary data, and due to limited page width, summary data
may be represented as a scaled value. For example, if the total number of terminal
input messages is 1234567890, this value is shown as 1234M, where ‘M’ represents
millions. Other scaling factors used are ‘B’ for billions and ‘T’ for trillions. Scaling
is only performed when the value exceeds 99999999, and only then when page
width is limited, for example in terminal statistics.
Table 14. Statistics listed in this appendix
Statistic type DSECT Page
Monitoring
–Global DFHMNGDS 428
Program autoinstall DFHPGGDS 430
Loader domain
–Global DFHLDGDS 431
Program
–Resource DFHLDRDS 442
Recovery manager DFHRMGDS 445
Statistics domain DFHSTGDS 451
Storage manager
–Domain DFHSMDDS 259
–DSA DFHSMSDS 456
–Task subpools DFHSMTDS 462
Table manager DFHA16DS 464
| TCP/IP Services
| –Resource DFHSORDS 465
Temporary storage DFHTSGDS 468
Terminal control DFHA06DS 474
Transaction class DFHXMCDS 478
Transaction manager
–Global DFHXMGDS 482
–Resource DFHXMRDS 484
Transient data
–Global DFHTQGDS 491
–Resource DFHTQRDS 494
User domain DFHUSGDS 499
VTAM DFHA03DS 500
Autoinstall attempts is the total number of eligible autoinstall attempts made during the
entire CICS session to create terminal entries as users logged on. For
an attempt to be considered eligible, CICS and VTAM must not be
terminating, autoinstall must be enabled, and the terminal type
must be valid for autoinstall (not pipeline, LU6.1, or LU6.2 parallel
sessions).
Rejected attempts is the total number of eligible autoinstall attempts that were
subsequently rejected during the entire CICS session. Reasons for
rejection can be maximum concurrency value exceeded, invalid
bind, the user program has rejected the logon, and so on. If this
number is unduly high, check the reasons for rejection.
Deleted attempts is the total number of deletions of terminal entries as users logged
off during the entire session.
Peak concurrent attempts is the highest number of attempts made during the
entire CICS session to create terminal entries as users logged on at the same time.
Times the peak was reached is the number of times that the “peak concurrent
attempts” value was reached during the entire CICS session.
Times SETLOGON HOLD issued is the number of times that the SETLOGON
HOLD command was issued during the entire run of CICS. CICS issues the VTAM
SETLOGON HOLD command when the maximum number of concurrent
autoinstall requests allowed (the AIQMAX= system initialization parameter) is
exceeded.
Queued logons is the total number of attempts that were queued for logon due to
delete in progress of the TCTTE for the previous session with the
same LU.
Peak of queued logons is the highest number of logons that were queued waiting for
TCTTE deletion at any one time. If this is unduly high, consider
increasing the delete delay interval parameter of the AILDELAY
system initialization parameter.
Times queued peak reached is the number of times that the “peak of queued
logons” value was reached.
Remote delete interval is the currently-specified time delay, in the form hhmmss, between
invocations of the timeout delete transaction that removes
redundant shipped terminal definitions. The value is set either by
the DSHIPINT system initialization parameter, or by a subsequent
SET DELETSHIPPED command.
Remote delete idle time is the currently-specified minimum time, in the form
hhmmss, that an inactive shipped terminal definition must remain installed in this
region before it becomes eligible for removal by the CICS timeout delete
transaction. The value is set either by the DSHIPIDL system initialization
parameter, or by a subsequent SET DELETSHIPPED command.
Shipped terminals built is the number of shipped remote terminal definitions
installed at the start of the recording period, plus the number built during the
recording period. This value equates to the sum of “Shipped terminals installed”
and “Shipped terminals deleted”.
CICS DB2
CICS DB2: global statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS DB2CONN command, and are mapped by the DFHD2GDS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
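For example, a COBOL fragment might retrieve these statistics as follows.
WS-STATS-PTR is a hypothetical POINTER item, and D2G-STATS-AREA stands
for a LINKAGE SECTION area laid out to match DFHD2GDS:
   EXEC CICS COLLECT STATISTICS DB2CONN
        SET(WS-STATS-PTR)
   END-EXEC.
   SET ADDRESS OF D2G-STATS-AREA TO WS-STATS-PTR.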
Table 18. CICS DB2: global statistics
DFHSTUP name Field name Description
Pool Thread Plan name D2G_POOL_PLAN_NAME is the name of the plan used
for the pool. If the plan name has changed, it is the last setting of plan name. If a
dynamic plan exit is being used for the pool, this DSECT field will be nulls.
Peak number of tasks on the TCB Readyq is the peak number of CICS tasks
queued waiting for a DB2 subtask TCB to become available.
Pool Thread Dynamic Planexit name is the name of the dynamic plan exit to be
used for the pool. If the dynamic planexit name has changed, it is the last setting
of dynamic planexit name. If a static plan is being used for the pool, this DSECT
field will be nulls.
Pool Thread Authtype is the type of id to be used for DB2 security checking for pool
threads. If the pool thread authtype has changed, it is the last
setting of pool thread authtype. If an Authid is being used for pool
threads this DSECT field contains nulls.
Pool Thread Authid is the static id to be used for DB2 security checking for pool
threads. If the pool thread authid has changed, it is the last setting
of pool thread authid. If an Authtype is being used for pool threads
this DSECT field contains nulls.
Pool Thread Accountrec setting is the frequency of DB2 accounting records to be
produced for transactions using pool threads. If the pool thread accountrec setting
has changed, it is the last setting of pool thread accountrec.
Pool Thread Threadwait setting is the setting for whether transactions should wait
for a pool thread or be abended if the number of active pool threads reaches the
pool thread limit. If the pool thread threadwait setting has changed, it is the last
setting of pool thread threadwait.
Pool Thread Priority is the priority of the pool thread subtasks relative to the CICS main
task (QR TCB). If the pool thread priority has changed, it is the last
setting of pool thread priority.
Total number of calls using Pool Threads is the total number of SQL calls made
using pool threads.
Total number of Pool Thread Sign-ons is the total number of DB2 sign-ons
performed for pool threads.
Total number of Pool Thread Commits is the total number of two-phase commits
performed for units of work using pool threads.
Total number of Pool Thread Aborts is the total number of units of work using
pool threads that were rolled back.
Total number of Pool Thread Single Phases is the total number of units of work
using pool threads that used single-phase commit, either because they were
read-only UOWs, or because DB2 was the only recoverable resource updated in
the UOW.
Total number of Pool Thread Reuses is the total number of times CICS
transactions using the pool were able to reuse an already created DB2 thread. This
count includes transactions that overflow to the pool to acquire a thread and reuse
an existing thread.
Total number of Pool Thread Terminates is the total number of terminate thread
requests made to DB2 for pool threads. This includes pool threads used by
transactions that overflow to the pool.
Total number of Pool Thread Waits is the total number of times all available
threads in the pool were busy and a transaction had to wait for a thread to
become available. This count includes transactions that overflow to the pool to
acquire a thread and have to wait for a pool thread.
Pool Thread Limit is the thread limit value for the pool. If the pool thread limit
has changed, it is the last setting of pool thread limit.
Peak number of Pool Threads in use is the peak number of active pool threads.
Peak number of Pool tasks is the peak number of CICS tasks that have used a
pool thread.
Total number of Pool tasks is the total number of completed tasks that have used
a pool thread.
Peak number of tasks on the Pool Readyq is the peak number of CICS tasks that
waited for a pool thread to become available.
Command Thread Authtype is the type of id to be used for DB2 security checking
for command threads. If the command thread authtype has changed, it is the last
setting of command thread authtype. If an Authid is being used for command
threads, this DSECT field contains nulls.
Command Thread Authid is the static id to be used for DB2 security checking for
command threads. If the command thread authid has changed, it is the last setting
of command thread authid. If an Authtype is being used for command threads,
this DSECT field contains nulls.
Total number of calls using Command Threads is the total number of DB2
commands issued through the DSNC transaction.
Total number of Command Thread Sign-ons is the total number of DB2 sign-ons
performed for command threads.
Total number of Command Thread Terminates is the total number of terminate
thread requests made to DB2 for command threads.
Total Number of Command Thread Overflows is the total number of times a
DSNC DB2 command resulted in a pool thread being used because the number of
active command threads exceeded the command thread limit.
Command Thread Limit is the maximum number of command threads allowed. If
the command thread limit has changed, it is the last setting of command thread
limit.
Peak number of Command Threads is the peak number of active command
threads.
There are three sections in the DFHSTUP report for CICS DB2 (resource) statistics:
v Resource information (see Resource statistics: resource information)
v Request information (see “Resource statistics: request information” on page 360)
v Performance information (see “Resource statistics: performance information” on
page 361)
Commit Count is the total number of two-phase commits performed for units of
work using this DB2ENTRY.
Abort Count is the total number of units of work using this DB2ENTRY that
were rolled back.
Single Phase is the total number of units of work using the DB2ENTRY that used
single phase commit, either because they were read-only UOWs, or
because DB2 was the only recoverable resource updated in the
UOW.
Thread Reuse is the total number of times CICS transactions using the DB2ENTRY
were able to reuse an already created DB2 thread.
Thread Terms is the total number of terminate thread requests made to DB2 for
threads of this DB2ENTRY.
Thread Waits/Overflows is the total number of times all available threads in the
DB2ENTRY were busy and a transaction had to wait for a thread to become
available, or overflow to the pool and use a pool thread instead.
The DBCTL statistics exit DFHDBSTX is invoked by the CICS adapter (DFHDBAT),
and CICS statistics information is collected by the statistics domain whenever
DBCTL is disconnected as a result of:
v An orderly or immediate disconnection of the DBCTL using the menu
transaction CDBC
v An orderly termination of CICS.
For more information about CICS-DBCTL statistics, see the CICS IMS Database
Control Guide.
CICS DBCTL session number STADSENO is the number of the CICS-DBCTL session
and is incremented every time you connect
and disconnect.
Maximum number of threads STAMATHD is the maximum value specified in the DRA
startup parameter table.
| Dispatcher domain start date and time DSGLSTRT is the date and time at which
| the dispatcher started. This value can be used as an approximate time at which
CICS started. The DFHSTUP report expresses this time as
hours:minutes:seconds.decimals; however, the DSECT field contains the time as a
store clock (STCK) value in local time.
| Current ICVR time (msec) DSGICVRT is the ICVR time value (expressed in
| milliseconds) specified in the SIT, or as an override, or changed dynamically
| using the CEMT SET SYSTEM RUNAWAY(value) or EXEC CICS SET SYSTEM
| RUNAWAY(fullword binary data-value) commands.
| Total MAXOPENTCBS delay time DSGTOTWL is the total time that open mode
| TCBs were delayed because the system had reached the MAXOPENTCBS limit.
Accum Time Dispatched DSGTDT is the accumulated real time that this TCB
has been dispatched by MVS, that is, the
total time used between an MVS wait issued
by the dispatcher and the subsequent wait
issued by the dispatcher. The DFHSTUP
report expresses this time as
hours:minutes:seconds.decimals; however, the
DSECT field contains the time as a store
clock (STCK) value.
| Detached Unclean DSGTCBDU is the number of MVS TCBs that have been
| or are in the process of being detached from
| this CICS dispatcher TCB mode because the
| CICS transaction that was associated with
| the TCB has abended.
| Dispatcher start date and time is the date and time at which the CICS dispatcher
| started. This value can be used as an approximate date and time at which CICS
| started. The DFHSTUP report expresses this time as
hours:minutes:seconds.decimals (local time); however, the DSECT field contains
the time as a local store clock (STCK) value.
| Address space CPU time is the total CPU time taken by the CICS address space.
| The DFHSTUP report expresses this as hours:minutes:seconds.decimals.
| Address space SRB time is the total SRB time taken by the CICS address space.
| The DFHSTUP report expresses this as hours:minutes:seconds.decimals.
Peak number of tasks is the peak number of tasks concurrently in the system.
Peak ICV time (msec) is the peak ICV time value (expressed in milliseconds) specified in
the SIT, or as an override, or changed dynamically using CEMT SET
SYSTEM TIME(value) or EXEC CICS SET SYSTEM TIME(fullword
binary data-value) commands.
| Peak ICVR time (msec) is the peak ICVR time value (expressed in milliseconds)
| specified in the SIT, or as an override, or changed dynamically using CEMT SET
| SYSTEM RUNAWAY(value) or EXEC CICS SET SYSTEM RUNAWAY(fullword
| binary data-value) commands.
Peak ICVTSD time (msec) is the peak ICVTSD time value (expressed in
milliseconds) specified in the SIT, or as an override, or changed dynamically using
CEMT SET SYSTEM SCANDELAY(value) or EXEC CICS SET SYSTEM
SCANDELAY(fullword binary data-value) commands.
| Peak PRTYAGE time (msec) is the peak PRTYAGE time value (expressed in
| milliseconds) specified in the SIT, or as an override, or changed dynamically using
| CEMT SET SYSTEM AGING(value) or EXEC CICS SET SYSTEM AGING(fullword
| binary data-value) commands.
| Max open TCB limit (MAXOPENTCBS) is the last MAXOPENTCBS value
| (expressed as the number of open TCBs) that was specified in the SIT, or as an
| override, or changed dynamically using the CEMT SET SYSTEM
| MAXOPENTCBS(value) or EXEC CICS SET SYSTEM MAXOPENTCBS(fullword
| binary data-value) commands.
| Peak open TCBs in use is the peak number of open TCBs in use reached in the
| system.
| Times at max open TCB limit is the total number of times the MAXOPENTCBS
| limit has been reached.
| Total TCB attaches delayed by MAXOPENTCBS is the total number of TCB
| attaches that have been delayed due to the MAXOPENTCBS limit being reached.
| Total MAXOPENTCBS delay time is the total time spent waiting by those tasks
| that were delayed due to the MAXOPENTCBS limit being reached.
| Average MAXOPENTCBS delay time is the average time spent waiting by those
| tasks that were delayed due to the MAXOPENTCBS limit being reached.
| Mode is the name of the CICS dispatcher TCB mode, either QR, RO, CO, SZ, RP,
| FO, SL, SO, J8, L8, or S8.
| Peak TCBs is the peak number of MVS TCBs attached in this CICS dispatcher
| TCB mode.
MVS Waits is the total number of MVS waits which occurred on this TCB
mode.
Total Time in MVS wait is the total real time that the TCBs in this mode were in
an MVS wait. The DFHSTUP report expresses this time as
days-hours:minutes:seconds.decimals.
Total Time Dispatched is the total real time that the TCBs in this mode were
dispatched by MVS. The DFHSTUP report expresses this time as
days-hours:minutes:seconds.decimals.
Total CPU Time / TCB is the total CPU time taken for the TCBs in this mode. The
DFHSTUP report expresses this time as days-hours:minutes:seconds.decimals.
| Dump domain
The dump domain collects global and resource statistics for both system and
transaction dumps which occur during the CICS run.
System dumps
Dump domain: system dump global statistics
These statistics fields contain the global data collected by the dump domain for
system dumps.
Dumps taken is the total number of system dumps taken by the whole system
during the entire run of CICS. This number does not include
suppressed dumps. A set of related dumps may be taken across the
sysplex if the dump code includes the RELATED option. In this
case, the count is incremented by one for the CICS system which
initiated the dump. The number is unchanged for all other CICS
systems even if they have issued a dump as part of the related
request.
Dumps suppressed is the total number of system dumps, requested from the dump
domain by CICS or by a user, which were suppressed by one of:
v A user exit
v The dump table
v A global system dump suppression.
Dumpcode is the system dump code. This code is a CICS message number with
the DFH prefix and the action code suffix (if any) removed. For
guidance information about CICS messages, see the CICS Messages
and Codes manual.
Dumps is the total number of system dumps taken for the dump code
identified in the Dumpcode field. A set of related dumps may be
taken across the sysplex if the dump code includes the RELATED
option. In this case, the count is incremented by one for the CICS
system which initiated the dump. The number is unchanged for all
other CICS systems even if they have issued a dump as part of the
related request.
Dumps suppressed is the total number of system dumps, for the dump code identified
in the Dumpcode field, which were suppressed by one of:
v A user exit
v The dump table
v A global system dump suppression.
Transaction dumps
Dump domain: transaction dump global statistics
These statistics fields contain the global data collected by the dump domain for
transaction dumps.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS TRANDUMP command and are mapped by the DFHTDGDS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
Table 38. Dump domain: transaction dump global statistics
DFHSTUP name Field name Description
Dumps taken is the total number of transaction dumps taken by the whole system
during the entire run of CICS. This number does not include
suppressed dumps.
Dumps suppressed is the total number of transaction dumps, requested from the dump
domain by CICS or by a user, which were suppressed by one of:
v A user exit
v The dump table.
Enqueue domain
The enqueue domain collects global statistics for enqueue requests.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS ENQUEUE command, and are mapped by the DFHNQGDS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
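A corresponding COBOL sketch (STATS-PTR is an illustrative name, as in
the earlier transaction dump example):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS ENQUEUE
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHNQGDS DSECT.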
Table 42. Enqueue domain: enqueue requests global statistics
DFHSTUP name Field name Description
Enqueue Retention time NQGTNQRT is the total retention time for the
enqueues that were retained due to the owning UOW being shunted.
File control
There are four sections in the DFHSTUP report for file statistics:
v Files: resource information (“Files: resource information statistics” on page 386).
v Files: requests information (“Files: requests information statistics” on page 389).
v Files: data table requests information (“Files: data table requests information
statistics” on page 391).
v Files: performance information (“Files: performance information statistics” on
page 395).
Unsolicited file statistics are printed in a statistics report separate from other types
of CICS statistics.
Remote sysid When operating in an ISC or MRO environment, and the file is held
by a remote system, this field specifies the system upon which the
file is resident.
LSR Pool ID The identity of the local shared resource pool. This value is that
specified via:
v The LSRPOOLID operand of the resource definition online
DEFINE FILE command.
v The LSRPOOL operand of the DFHFCT TYPE=FILE macro.
These statistics list the number of service requests processed against
the data set. These are dependent on the type of requests that are
allowed on the data set.
Table 52. Files: requests information statistics
DFHSTUP name Field name Description
Highest table size A17DTSHI is the peak number of records present in the
table.
When the shared data tables feature is in use, the statistics records
contain the following additional information:
NOT IN THE DFHSTUP REPORT A17DTSIZ is the current number of records in
the data table.
Adds from reads is the total number of records placed in the table by the loading
process or as a result of API READ requests issued while loading
was in progress.
Add requests is the total number of attempts to add records to the table as a
result of WRITE requests.
Adds rejected
–Exit is the total number of records CICS attempted to add to the table
which were rejected by the global user exit.
–Table full is the total number of records CICS attempted to add to the table
but was unable to do so because the table already contained the
maximum number of records specified.
Rewrite requests is the total number of attempts to update records in the table as a
result of REWRITE requests.
Delete requests is the total number of attempts to delete records from the table as a
result of DELETE requests.
Highest table size is the peak number of records present in the table.
| Chng Resp/Lock Waits is the total number of CHANGED responses
| encountered during the data table loading process.
| Load Resp/Lock Waits is the total number of LOADING responses
| encountered during the data table loading process.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS CONNECTION command, and are mapped by the DFHA14DS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
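For example (a sketch; the connection name CONA is hypothetical):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS CONNECTION('CONA')
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHA14DS DSECT.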
CICS always allocates a SEND session when sending an IRC request to another
region. Either a SEND or RECEIVE session can be allocated when sending requests
using LU6.1 ISC, and either a contention loser or a contention winner session can
be allocated when sending requests using APPC.
In LU6.1, SEND sessions are identified as secondaries, and RECEIVE sessions are
identified as primaries.
Table 59. ISC/IRC system entry: resource statistics
DFHSTUP name Field name Description
Peak bids in progress A14EBHWM is the peak number of bids that were in
progress at any one time. A bid is sent on an
LU6.1 RECEIVE session only.
Failed allocates due to sessions in use A14ESTAO is the number of
allocate requests that failed due to a session not being currently
available for use. These requests get SYSBUSY responses to the allocate.
This field is incremented for allocates failing with an AAL1 abend code.
Connection name is the system entry defined by the CONNECTION definition in the
CSD or by autoinstall.
Connection netname is the name by which the remote system is known in the
network—that is, its applid.
Access Method / Protocol is the combined communication access method and
protocol used for the connection.
Average autoinstalled connection time is the average autoinstalled
connection time. This field applies to autoinstalled connections and is
summarized from the unsolicited system entry statistics records only.
Send session count is the last value encountered for the SENDCOUNT specified on the
CONNECTION definition. This field applies to MRO and LU6.1
connections only.
Receive session count is the last value encountered for the RECEIVECOUNT specified on
the CONNECTION definition. This field applies to MRO, LU6.1,
and EXCI connections only.
Average number of AIDs in chain is the average number of automatic
initiate descriptors (AIDs) in the AID chain.
Average number of generic AIDs in chain is the average number of AIDs
waiting for a session to become available to satisfy an allocate request.
ATIs satisfied by contention losers is the total number of ATI requests
(queued allocates) that have been satisfied by contention loser sessions
(primaries for LU6.1). This is always zero for IRC system entries.
ATIs satisfied by contention winners is the total number of ATI requests
(queued allocates) that have been satisfied by contention winner sessions
(secondaries for LU6.1). This field is the total ATIs when the system
entry is for IRC.
Peak contention losers is the peak number of contention loser sessions
(primaries for LU6.1) that were in use at any one time. For APPC, this
field is zero.
Peak contention winners is the peak number of contention winner sessions
(secondaries for LU6.1) that were in use at any one time. For APPC, this
field is zero.
Total bids sent is the total number of bids that were sent. A bid is sent
on an LU6.1 RECEIVE session only. This field is always zero for IRC and
APPC system entries.
Average bids in progress is the average number of bids in progress. A bid
is sent on an LU6.1 RECEIVE session only. This field is always zero for
IRC and APPC system entries.
Peak bids in progress is the peak number of bids that were in progress at
any one time. A bid is sent on an LU6.1 RECEIVE session only. This field
is always zero for IRC and APPC system entries.
Note for the following five fields: For APPC only, if an allocate request does not specify a
mode group, CICS takes the first mode group within the sessions available, and the statistics
for these allocates are reported against the system entry. If an allocate specifically requests a
mode entry, the statistics for these allocates go into that mode entry.
Peak outstanding allocates is the peak number of allocation requests that
were queued for this system. For APPC this field contains only generic
allocate requests.
Total number of allocates is the total number of allocate requests
against this system. For APPC this field contains only generic allocate
requests.
Average number of queued allocates is the average number of queued
allocate requests against this system. For APPC this field is incremented
only for generic allocate requests.
Failed link allocates is the total number of allocate requests that
failed due to the connection being released, out of service, or with a
closed mode group. For APPC this field is incremented only for generic
allocate requests.
Failed allocates due to sessions in use is the total number of allocate
requests that failed due to a session not being currently available for
use. These requests get SYSBUSY responses to the allocate. This field is
incremented for allocates failing with an AAL1 abend code. For APPC this
field is incremented only for generic allocate requests.
Maximum queue time (seconds) is the last non-zero value encountered for
the MAXQTIME specified on the CONNECTION definition. This value
represents the maximum time you require to process an allocate queue on
this connection. If the allocate queue would take longer than this time
to process, the entire queue is purged. This value takes effect only if
the QUEUELIMIT value has been reached.
Allocate queue limit is the last non-zero value encountered for the
QUEUELIMIT parameter specified on the CONNECTION definition. If this
value is reached, allocates are rejected.
Number of QUEUELIMIT allocates rejected is the total number of allocates
rejected due to the QUEUELIMIT value being reached.
Number of MAXQTIME allocate queue purges is the total number of times an
allocate queue has been purged due to the MAXQTIME value. A queue is
purged when the total time it would take to process the queue exceeds the
MAXQTIME value.
Number of MAXQTIME allocates purged is the total number of allocates
purged due to the queue processing time exceeding the MAXQTIME value. If
sessions have not been freed after this mechanism has been invoked, any
subsequent allocate requests are also purged and included in this
statistic, as the MAXQTIME purging mechanism is still in operation.
Number of XZIQUE allocates rejected is the total number of allocates
rejected by the XZIQUE exit.
Number of XZIQUE allocate queue purges is the total number of allocate
queue purges that have occurred at XZIQUE request for this connection.
Number of XZIQUE allocates purged is the total number of allocates purged
due to XZIQUE requesting that queues should be purged for this
connection.
Mode entry
ISC mode entry: Resource statistics
These statistics are collected only if you have an APPC connection defined in your
CICS region, and they are then produced for each mode group defined in that
connection. These statistics cannot be accessed online using the EXEC CICS
COLLECT STATISTICS command. They are only produced for offline processing
(written to SMF).
These statistics are mapped by the DFHA20DS DSECT. This DSECT is also used to
map the mode entry totals records.
Current bids in progress A20EBID is the number of bids that are in progress on
the sessions defined to this mode group. A
bid is sent on an APPC “contention loser”
session when there are no “contention
winner” sessions available to allocate.
Connection name is the name of the APPC connection/system that owns this mode
entry. It corresponds to the system entry in the terminal definition.
Mode name is the mode group name related to the intersystem connection name
above (A20SYSN). It corresponds to the modename in the sessions
definition.
ATIs satisfied by contention losers is the total number of ATI requests
(queued allocates) that have been satisfied by “contention loser”
sessions belonging to this mode group.
ATIs satisfied by contention winners is the total number of ATI requests
(queued allocates) that have been satisfied by “contention winner”
sessions belonging to this mode group.
Peak contention losers is the peak number of “contention loser” sessions
belonging to this mode group that were in use at any one time. There can
be sessions not defined (by the MAXIMUM= parameter of CEDA) as
“contention winners” or “contention losers”, and their states are
dynamically decided at bind time.
Peak contention winners is the peak number of “contention winner”
sessions belonging to this mode group that were in use at any one time.
There can be sessions not defined (by the MAXIMUM= parameter of CEDA) as
“contention winners” or “contention losers”, and their states are
dynamically decided at bind time.
Total bids sent is the total number of bids that were sent on the
sessions defined to this mode group. A bid is sent on an APPC “contention
loser” session when there are no “contention winner” sessions available
to allocate.
Average bids in progress is the average number of bids in progress.
Peak bids in progress is the peak number of bids that were in progress at
any one time, on the sessions defined to this mode group. A bid is sent
on an APPC “contention loser” session when there are no “contention
winner” sessions available to allocate.
The next three fields only contain allocates against specific mode groups. Generic allocate
requests are contained in the equivalent system entry statistics.
Peak outstanding allocates is the peak number of allocation requests that
were queued for this mode group.
Total specific allocate requests is the total number of specific allocate
requests against this mode group.
Total specific allocates satisfied is the total number of specific
allocates satisfied by this mode group.
Total generic allocates satisfied is the total number of generic
allocates satisfied from this mode group.
The next three fields only contain allocates against specific mode groups. Generic allocate
requests are contained in the equivalent system entry statistics.
Average number of queued allocates is the average number of queued
specific allocate requests against this mode group. An allocate is queued
when no session in this mode group is available at that moment; this
includes waiting for a bind, waiting for a bid, or all sessions being
currently in use.
Failed link allocates is the total number of specific allocate requests
that failed due to the connection being released, out of service, or with
a closed mode group.
Failed allocates due to sessions in use is the total number of specific
allocate requests that failed due to a session not being currently
available for use in this mode group. These requests get SYSBUSY
responses to the allocate. This field is incremented for allocates
failing with an AAL1 abend code.
Number of XZIQUE allocate queue purges is the total number of allocate
queue purges that have occurred at XZIQUE request for this mode entry.
Number of XZIQUE allocates purged is the total number of allocates purged
due to XZIQUE requesting that queues should be purged (A20EQPCT) for this
mode entry.
These statistics are collected only if you have either an LU6.2 connection or IRC
defined in your CICS region, and they are then produced globally, one per system.
Table 64. ISC/IRC attach time: Summary resource statistics
DFHSTUP name Description
Persistent verification refresh time is the time in minutes set by the
PVDELAY parameter of the SIT. It specifies how long entries are allowed
to remain unused in the PV 'signed on from' list of a remote system.
Entries reused refers to the number of times that users' entries in the
PV 'signed on from' list were reused without referencing the ESM of the
remote system.
Entries timed out refers to the number of users' entries in the PV
'signed on from' list that were timed out after a period of inactivity.
Average reuse time between entries refers to the average amount of time
that has elapsed between each reuse of a user's entry in the PV 'signed
on from' list.
Journalname
Journalname resource statistics
These statistics fields contain the resource data collected by the log manager
domain. For more information on logging and journaling, see “Chapter 22. Logging
and journaling” on page 271, and “Journalname and log stream statistics” on
page 55.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS JOURNALNAME command, and are mapped by the DFHLGRDS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
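For example (a sketch; the journal name DFHJ01 is hypothetical):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS JOURNALNAME('DFHJ01')
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHLGRDS DSECT.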
Table 65. Journalname: resource statistics
DFHSTUP name Field name Description
Write Requests is the total number of times that a journal record was written to the
journal.
Bytes Written is the total number of bytes written.
Buffer Flushes is the total number of times that a journal block was written to the
log stream (in the case of a journal defined as type MVS), or to the
System Management Facility (in the case of a journal defined as
type SMF).
Log stream
Log stream resource statistics
These statistics fields contain the resource data collected by the log manager
domain. For more information on logging and journaling, see “Chapter 22. Logging
and journaling” on page 271, and “Journalname and log stream statistics” on
page 55.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS STREAMNAME command and are mapped by the DFHLGSDS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
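For example (a sketch; the log stream name is hypothetical):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       77  WS-STREAM            PIC X(26)
                                VALUE 'CICSTS.CICS1.DFHLOG'.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS STREAMNAME(WS-STREAM)
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHLGSDS DSECT.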
Table 67. Log stream: resource statistics
DFHSTUP name Field name Description
Auto Delete LGSAUTOD The log data auto delete indicator. If set to
'YES' the MVS Logger automatically deletes
the data as it matures beyond the retention
period, irrespective of any logstream delete
calls. If set to 'NO' the data is only deleted
when a logstream delete call is issued and
the data has matured beyond the retention
period.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS STREAMNAME command and are mapped by the DFHLGSDS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
Table 68. Log stream: request statistics
DFHSTUP name Field name Description
These statistics fields contain the log stream summary resource data.
Table 69. Log stream: Summary resource statistics
DFHSTUP name Description
These statistics fields contain the log stream summary request data.
Table 70. Log stream: Summary request statistics
DFHSTUP name Description
LSRpool
CICS supports the use of up to eight LSRpools, and produces two sets of statistics
for LSRpool activity.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS LSRPOOL command, and are mapped by the DFHA08DS DSECT. For
programming information about the EXEC CICS COLLECT STATISTICS command,
see the CICS System Programming Reference manual.
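For example (a sketch; pool number 1 is illustrative):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       77  WS-POOL              PIC S9(8) COMP VALUE 1.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS LSRPOOL(WS-POOL)
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHA08DS DSECT.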
Table 71. LSRpool: resource statistics for each LSR pool
DFHSTUP name Field name Description
Time Created A08LKCTD is the time when this LSR pool was created.
The DFHSTUP report expresses this time as
hours:minutes:seconds.decimals in local time.
Maximum key length A08BKKYL is the length of the largest key of a VSAM
data set which may use the LSR pool. The
value is obtained from one of:
v The MAXKEYLENGTH option of the
DEFINE LSRPOOL command in resource
definition online, if it has been coded
v A CICS calculation at the time the LSR
pool is built.
Size A08BKBSZ is the size of the buffers that are available to CICS.
Buffer sizes may be specified through:
v The DEFINE LSRPOOL command of resource
definition online
v A CICS calculation, at the time the LSR pool is
built, of the buffer sizes to use.
Hiperspace failed reads A08TOCRF_DATA is the number of CREAD requests
that failed. MVS had withdrawn the space and VSAM had to read data from
DASD.
The following group of statistics fields describes the characteristics and usage of
the different buffer sizes available for use by the pool. These statistics are available
online, and are mapped by the A08BSSDS DSECT defined in the DFHA08DS
DSECT. This DSECT is repeated for each of the 11 CISIZEs available.
Buffer Size A08BKBSZ is the size of the buffers that are available to
CICS. Buffers may be specified through:
v The DEFINE LSRPOOL command of
resource definition online
v A CICS calculation, at the time the
LSRPOOL is built, of the buffers to use.
Total number of pools built is the total number of LSR pools that were
built during the entire CICS run.
Peak requests that waited for string is the highest number of requests
that were queued at one time because all the strings in the pool were in
use.
Total requests that waited for string is the total number of requests
that were queued because all the strings in the pool were in use. This
number reflects the number of requests that were delayed during CICS
execution due to a restriction in LSR pool string resources.
Peak concurrently active strings is the peak number of strings that were
active during CICS execution. If you have coded a value for the number of
strings the pool is to use, this statistic is always less than or equal
to the value you have coded. If the peak shown in the statistics is
consistently lower than the value you have coded, you could consider
reducing the coded value so that your pool of VSAM strings is not bigger
than you need.
The group of statistics fields below summarizes the usage of each of the eight
LSRPOOLS during the entire CICS run.
Table 79. LSRpool: summary data buffer statistics
DFHSTUP name Description
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Lookasides is the total number of successful lookasides to data buffers for the
pool.
Reads is the total number of read I/Os to the data buffers for the pool.
User writes is the total number of user-initiated buffer WRITEs from data
buffers for the pool.
Non-user writes is the total number of non-user-initiated buffer WRITEs from data
buffers for the pool.
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Hiperspace reads is the total number of successful CREAD requests issued
to transfer data from Hiperspace data buffers to virtual data buffers.
Hiperspace writes is the total number of successful CWRITE requests issued to transfer
data from virtual data buffers to Hiperspace data buffers.
Hiperspace failed reads is the total number of CREAD requests that
failed. MVS had withdrawn the space and VSAM had to read data from DASD.
Hiperspace failed writes is the total number of CWRITE requests that
failed. There was insufficient Hiperspace and VSAM had to write data to
DASD.
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Lookasides is the total number of successful lookasides to index buffers for the
pool.
Reads is the total number of read I/Os to the index buffers for the pool.
User writes is the total number of user-initiated buffer WRITEs from index
buffers for the pool.
Non-user writes is the total number of non-user-initiated buffer WRITEs from index
buffers for the pool.
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Hiperspace reads is the total number of successful CREAD requests issued to transfer
data from Hiperspace index buffers to virtual index buffers.
Hiperspace writes is the total number of successful CWRITE requests issued to transfer
data from virtual index buffers to Hiperspace index buffers.
Hiperspace failed reads is the total number of CREAD requests that
failed. MVS had withdrawn the space and VSAM had to read data from DASD.
Hiperspace failed writes is the total number of CWRITE requests that
failed. There was insufficient Hiperspace and VSAM had to write data to
DASD.
If LSRpool buffers are shared, the statistics that follow refer to those shared data
and index buffers.
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Lookasides is the total number of read requests that VSAM was able to satisfy
without initiating an I/O operation; that is, the requested record,
whether index or data, was already present in one of the buffer
resident CIs. This means that no physical I/O had to be done to put
the control interval in the buffer.
These statistics are obtained from VSAM and represent the activity
after the pool was created. Note that these statistics are not reset by
CICS under any circumstances.
Reads is the total number of I/O operations to the buffers that VSAM was
required to initiate to satisfy the CICS application’s activity. This
figure represents failures to find the control interval in the buffers.
These statistics are obtained from VSAM and represent the activity
after the pool was created. Note that these statistics are not reset by
CICS under any circumstances.
User writes is the total number of user-initiated I/O WRITE operations from the
buffers that VSAM was required to initiate to satisfy the CICS
application’s activity.
These statistics are obtained from VSAM and represent the activity
after the pool was created. Note that these statistics are not reset by
CICS under any circumstances.
Non-user writes is the total number of non-user initiated I/O WRITE operations
from the buffers that VSAM was forced to initiate due to no buffers
being available for reading the contents of a CI.
These statistics are obtained from VSAM and represent the activity
after the pool was created. Note that these statistics are not reset by
CICS under any circumstances.
Pool Number is the identifying number of the pool. This value may be in the
range 1 through 8.
Hiperspace reads is the total number of successful CREAD requests issued to transfer
data from Hiperspace buffers to virtual buffers.
Hiperspace writes is the total number of successful CWRITE requests issued to transfer
data from virtual buffers to Hiperspace buffers.
Hiperspace failed reads is the total number of CREAD requests that
failed. MVS had withdrawn the space and VSAM had to read data from DASD.
Hiperspace failed writes is the total number of CWRITE requests that
failed. There was insufficient Hiperspace and VSAM had to write data to
DASD.
The following information describes the buffer usage for each file that was
specified to use the LSR pool at the time the statistics were printed. Note that this
section is not printed for unsolicited statistics output.
If the allocation of files to the LSR pool is changed during the period
that the statistics cover, no history of this is available, and only the
current list of files sharing the pool is printed in this section. The
activity of all files that have used the pool is, however, included in
all the preceding sections of these statistics.
Pool Number is the LSR pool number, in the range 1 through 8, associated with
this file.
File Name is the CICS file identifier you specified through resource definition
online.
Data Buff Size is the last non-zero value encountered for the buffer size used for
the file’s data records. This value is one of the eleven possible
VSAM buffer sizes ranging from 512 bytes to 32 KB. The value is
zero if the file has not been opened yet. The last non-zero value is
produced only if it has been opened.
Index Buff Size is the last non-zero value encountered for the buffer size used for
the file’s index records. This is printed, even if the file has
subsequently been dynamically allocated to a VSAM ESDS or
RRDS. The values this field may take are the same as for the data
buffer size statistic.
Total Buff Waits is the total number of requests that had to wait because all buffers
of the size used by the data set for data (or index) in the LSR pool
were in use.
Peak Buff Waits is the peak number of requests that had to wait because all buffers
of the size used by the data set for data (or index) in the LSR pool
were in use.
If the data sets are waiting for buffers you should examine the
numbers of buffers defined for the data and index buffer sizes used
by the data set. The buffer size used by VSAM depends on the
control interval size in the VSAM definition of the data set. If no
buffer size exists for the specified control interval size, the next
largest buffer size available is used.
Monitoring domain
Monitoring data is made up of a combination of performance class data, exception
class data, and SYSEVENT data.
Performance records is the total number of performance records scheduled for output to
SMF.
Program autoinstall
Program autoinstall: global statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS PROGAUTO command, and are mapped by the DFHPGGDS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
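A sketch using the command named above:

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS PROGAUTO
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHPGGDS DSECT.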
Table 89. Program autoinstall: global statistics
DFHSTUP name Field name Description
Loader
Loader domain: global statistics
These statistics fields contain the global data collected by the loader domain. The
loader domain maintains global statistics to assist the user in tuning and
accounting.
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS PROGRAM command, and are mapped by the DFHLDGDS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
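For example, a sketch of the global form; because no program name is
specified, the loader global statistics described here are returned:

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS PROGRAM
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHLDGDS DSECT.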
Table 91. Loader domain: global statistics
DFHSTUP name Field name Description
Library load requests LDGLLR is the number of times the loader has issued
an MVS LOAD request to load programs
from the DFHRPL library concatenation into
CICS managed storage. Modules in the LPA
are not included in this figure.
Times DFHRPL re-opened LDGDREBS is the number of times the loader received
an end-of-extent condition during a LOAD
and successfully closed and re-opened the
DFHRPL library and retried the LOAD.
SDSA
Programs removed by compression LDGDPSCR is the number of program
instances removed from storage by the Dynamic Program Storage Compression
(DPSC) mechanism.
Total Not In Use queue membership time LDGDPSCT is the program Not-In-Use
(NIU) queue membership time. For each program that becomes eligible for
removal from storage by the DPSC mechanism, the time between the program
becoming eligible and the actual time of its removal from storage is
calculated. This field is the sum of these times for all programs removed
by the DPSC mechanism, and as such can be greater than the elapsed CICS
run time. This field does not include the wait time for those programs
reclaimed from the Not-In-Use queue.
Library load requests is the total number of times the loader has issued an MVS LOAD
request to load programs from the DFHRPL library concatenation
into CICS managed storage. Modules in the LPA are not included in
this figure.
Total loading time is the total time taken for the number of library loads indicated by
LDGLLR. The DFHSTUP report expresses this time as
days-hours:minutes:seconds.decimals.
Average loading time is the average time to load a program from the DFHRPL library
concatenation into CICS managed storage. This value is expressed
as minutes:seconds.decimals.
Program uses is the total number of uses of any program by the CICS system.
Requests that waited is the total number of loader domain requests that were forced to
suspend due to the loader domain performing an operation on that
program on behalf of another task. These operations could be:
v A NEWCOPY request
v Searching the LPA
v A physical load in progress.
Peak waiting Loader requests is the peak number of tasks suspended at one
time.
Times at peak is the total number of times the peak level indicated by the previous
statistic was reached.
Programs loaded but Not In Use is the total number of programs on the
Not-In-Use (NIU) queue.
ECDSA
Programs removed by compression is the total number of program instances
removed from storage by the Dynamic Program Storage Compression (DPSC)
mechanism.
Total Not In Use queue membership time is the total program Not-In-Use
(NIU) queue membership time. For each program that becomes eligible for
removal from storage by the DPSC mechanism, the time between the program
becoming eligible and the actual time of its removal from storage is
calculated. This field is the sum of these times for all programs removed
by the DPSC mechanism, and as such can be greater than the elapsed CICS
run time. This field does not include the wait time for those programs
reclaimed from the Not-In-Use queue.
Program
Program: resource statistics
These statistics fields contain the resource data collected by the loader for each
program. They are available online, and are mapped by the DFHLDRDS DSECT.
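The same command can name a single program (a sketch; the program name
PAYROLL is hypothetical):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS PROGRAM('PAYROLL')
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHLDRDS DSECT.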
These statistics fields contain the summary resource data statistics for the loader
for each program.
Table 94. Programs: summary resource statistics
DFHSTUP name Description
Fetch count is the total number of times the loader domain has issued an MVS
LOAD request to load a copy of the program from the DFHRPL
library concatenation into CICS managed storage.
Average fetch time is the average time taken to perform a fetch of the program. The
DFHSTUP report expresses this time as minutes:seconds.decimals.
NEWCOPY count is the total number of times a NEWCOPY has been requested
against this program.
Times removed is the total number of times an instance of this program has been
removed from CICS managed storage due to the actions of the
Dynamic Program Storage Compression (DPSC) mechanism.
Recovery manager
Recovery manager: global statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS RECOVERY command, and are mapped by the DFHRMGDS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
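A minimal sketch using the command named above:

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS RECOVERY
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHRMGDS DSECT.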
Table 95. Recovery manager: global statistics
DFHSTUP name Field name Description
Total UOWs shunted for commit/backout failure RMGTSHRO is the total
number of units of work that had to be shunted for commit/backout failure
because a local resource manager could not perform commit/backout
processing at this time on behalf of the UOW during syncpoint, but have
now completed.
The following fields further detail the reasons why a UOW did not have the ability to wait
indoubt (shunt) at the time of indoubt failure (lost coordinator), and are breakdowns of the
field RMGIAFNW. This is because the UOW uses either recoverable local resources,
recoverable resources across intersystem links, or external resource managers (RMI), which
do not have the ability to wait indoubt. As a result of a resolution of a UOW being forced
for this reason, integrity exposures may occur.
–Indoubt action forced by TD queues RMGNWTD is the number of UOW forces
that occurred because the UOW uses a recoverable transient data queue
defined with an indoubt attribute of WAIT=NO.
Total time in shunts for indoubt failure is the total time (STCK) that
the UOWs shunted for indoubt failure (RMGTSHIN) spent waiting in this
condition.
Total UOWs shunted for commit/backout failure is the total number of
UOWs that had to be shunted for commit/backout failure because a local
resource manager was not able to perform commit/backout processing at
that time, but have now completed.
Total time shunted for commit/backout failure is the total time (STCK)
that the UOWs shunted for commit/backout failures (RMGTSHRO) waited in
this condition, but have now completed.
Outstanding indoubt failure shunted UOWs is the current number of UOWs
that have been shunted for indoubt failure because the connection to
their recovery coordinator was lost during syncpoint processing.
Outstanding shunted UOWs time in shunts is the total time (STCK) that the
UOWs currently shunted for indoubt failure (RMGTSHIN) have spent waiting
in this condition so far.
Outstanding shunted UOWs for resource failure is the current number of
UOWs that have been shunted for commit/backout failure because a local
resource manager was unable to perform commit/backout processing at that
time on behalf of the UOW.
Outstanding shunt time for resource failure is the total time (STCK) that
the UOWs currently shunted for commit/backout failures (RMGCHSIR) have
been waiting in this condition so far.
Total forces of indoubt action by trandef is the total number of UOWs
that were forced to complete syncpoint processing, despite losing the
connection to the recovery coordinator, because their transaction
definition specified that they could not wait indoubt.
Total forces of indoubt action by timeout is the total number of shunted
indoubt UOWs that were forced to complete syncpoint processing, although
still unconnected to the recovery coordinator, because the wait for
indoubt timeout value in their transaction definition was exceeded.
Total forces of indoubt action by operator is the total number of shunted
indoubt UOWs that were forced to complete syncpoint processing, although
still unconnected to the recovery coordinator, because the operator
(CEMT) forced a resolution.
Total forces of indoubt action by no wait is the total number of UOWs
that were forced to complete syncpoint processing, despite having the
ability to wait indoubt, because a local resource owner or connected
resource manager that the UOW used was unable to wait indoubt.
Total forces of indoubt action by other is the total number of UOWs that
were forced to complete syncpoint processing, although still unconnected
to the recovery coordinator, for reasons other than those described
above.
No support for indoubt waiting breakdown
–Indoubt action forced by TD queues is the number of UOW forces that
occurred because the UOW was using a recoverable transient data queue
defined with an indoubt attribute of WAIT=NO.
–Indoubt action forced by LU61 connections is the number of UOW forces
that occurred because the UOW was using an LU6.1 intersystem link, which
cannot support indoubt waiting.
–Indoubt action forced by MRO connections is the number of UOW forces
that occurred because the UOW was using an MRO intersystem link to a
downlevel CICS region, which cannot support indoubt waiting.
–Indoubt action forced by RMI exits (TRUEs) is the number of UOW forces
that occurred because the UOW was using an RMI that declared an interest
in syncpoint but could not support indoubt waiting.
–Indoubt action forced by others is the number of UOW forces that
occurred because the UOW was using recoverable facilities other than the
above (for example, terminal RDO), which invalidate the ability to
support indoubt waiting.
Total number of indoubt action mismatches is the total number of UOWs
that were forced to resolve using an indoubt action attribute, whether by
definition, option, or operator override (as detailed in the above
fields), and that detected an indoubt action attribute mismatch with a
participating system or RMI. For example, a participating system in a
distributed UOW resolves its work forward while other systems back out
theirs. The opposite also applies.
Statistics domain
Statistics domain: global statistics
These statistics are available online, and are mapped by the DFHSTGDS DSECT.
Table 97. Statistics domain: global statistics
DFHSTUP name Field name Description
Interval, end-of-day, and requested statistics all contain the same items.
Storage manager
These statistics are produced to aid all aspects of storage management.
NOT IN THE DFHSTUP REPORT SMDIFREE is the size of the initial free area
for the subpool (which may be zero), expressed in bytes. For further
information about the initial free area, see “Appendix F. MVS and CICS
virtual storage” on page 615.
Subpool Name is the unique 8-character name of the domain subpool. The values
of the domain subpool field are described in “Appendix F. MVS
and CICS virtual storage” on page 615.
Location is the indicator of the subpool location (CDSA, SDSA, RDSA,
ECDSA, ESDSA, or ERDSA).
Access is the type of access of the subpool. It will be either CICS,
USER, or READONLY. If storage protection is not active, all storage areas
revert to CICS except those in the ERDSA.
Getmain Requests is the total number of GETMAIN requests for the subpool.
Freemain Requests is the total number of FREEMAIN requests for the subpool.
Peak Elements is the peak number of storage elements in the subpool.
Peak Elem Stg is the peak amount of element storage in the subpool, expressed in
bytes.
Peak Page Stg is the peak amount of page storage in the subpool, expressed in
bytes.
These statistics are collected for each pagepool. They are available online, and are
mapped by the DFHSMSDS DSECT.
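A sketch of online access, assuming the STORAGE keyword of the COLLECT
STATISTICS command is used (the keyword is an assumption here; the text
above names only the mapping DSECT):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS STORAGE
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHSMSDS DSECT.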
Table 101. Storage manager: global statistics
DFHSTUP name Field name Description
These statistics are collected for each pagepool, and are mapped by the
DFHSMSDS DSECT.
These statistics are collected for each pagepool. They are available online, and are
mapped by the DFHSMSDS DSECT.
Table 103. Storage manager statistics: dynamic storage areas
DFHSTUP name Field name Description
Note: The following fields are mapped by the SMSBODY DSECT within the DFHSMSDS
DSECT. The SMSBODY DSECT is repeated for each pagepool in the CICS region
(SMSNPAGP).
Free storage (inc. cushion) SMSFSTG is the amount of free storage in this
pagepool, that is the number of free
pages multiplied by the page size
(4K), expressed in bytes.
Storage protection indicates whether CICS is running with storage
protection active (the STGPROT system initialization parameter).
Transaction isolation indicates whether CICS is running with transaction
isolation active (the TRANISO system initialization parameter).
Reentrant programs indicates whether read-only programs are loaded into
key-0 protected storage (the RENTPGM system initialization parameter).
Current DSA limit is the current limit of the CICS dynamic storage areas,
as defined by the DSALIM system initialization parameter.
Current DSA total is the total amount of storage currently in use in the
CICS dynamic storage areas.
Peak DSA total is the peak amount of storage used in the CICS dynamic
storage areas since the last recorded statistics.
Current EDSA limit is the current limit of the CICS extended dynamic
storage areas, as defined by the EDSALIM system initialization parameter.
Current EDSA total is the total amount of storage currently in use in the
CICS extended dynamic storage areas.
Peak EDSA total is the peak amount of storage used in the CICS extended
dynamic storage areas since the last recorded statistics.
DSA size is the total size of the CDSA, UDSA, SDSA, RDSA, ECDSA,
EUDSA, ESDSA, or ERDSA, expressed in bytes.
Cushion size is the size of the cushion, expressed in bytes. The cushion forms
part of the DSA or the EDSA, and is the amount of storage below
which CICS goes SOS.
Getmain requests is the total number of GETMAIN requests from the CDSA, UDSA,
SDSA, RDSA, ECDSA, EUDSA, ESDSA, or ERDSA.
Freemain requests is the total number of FREEMAIN requests from the CDSA, UDSA,
SDSA, RDSA, ECDSA, EUDSA, ESDSA, or ERDSA.
Times no storage returned is the total number of times a GETMAIN request
with SUSPEND(NO) returned the condition INSUFFICIENT_STORAGE.
Times request suspended is the total number of times a GETMAIN request
with SUSPEND(YES) was suspended because of insufficient storage to
satisfy the request at the moment.
Peak requests suspended is the peak number of GETMAIN requests suspended
for storage.
Purged while waiting is the total number of requests which were purged while suspended
for storage.
Times cushion released is the total number of times a GETMAIN request
caused the storage cushion to be released. The cushion is said to be
released when the number of free pages drops below the number of pages in
the cushion.
Times went short on storage is the total number of times CICS went SOS in
this pagepool (CDSA, UDSA, ECDSA, EUDSA, or ERDSA), where SOS means that
the cushion is currently in use, that at least one task is suspended for
storage, or both.
Total time SOS is the accumulated time that CICS has been SOS in this DSA. The
DFHSTUP report expresses this time as
hours:minutes:seconds.decimals; however, the DSECT field contains the
time as a store clock (STCK) value.
Storage violations is the total number of storage violations recorded in the CDSA,
UDSA, ECDSA, EUDSA, and the ERDSA.
Access is the type of access of the page subpool. It will be either CICS,
USER, or READONLY. If storage protection is not active, all storage areas
revert to CICS except those in the ERDSA.
These statistics are collected for each pagepool. They are mapped by the
DFHSMTDS DSECT.
Although task subpools are dynamically created and deleted for each task in the
system, these statistics are the sum of all task subpool figures for the task related
pagepools (CDSA, UDSA, ECDSA, and EUDSA). If further granularity of task
storage usage is required, use the performance class data of the CICS monitoring
facility.
Table 107. Storage manager statistics: Task subpools
DFHSTUP name Field name Description
NOT IN THE DFHSTUP SMTNTASK is the number of task subpools in the CICS
REPORT region.
Note: The following fields are mapped by the SMTBODY DSECT within the DFHSMTDS
DSECT. The SMTBODY DSECT is repeated for each task subpool in the CICS region
(SMTNTASK).
NOT IN THE DFHSTUP SMTDSAINDEX A unique identifier for the dynamic storage
REPORT area that these statistics refer to. Values can
be:
v SMTCDSA (X'01') indicating that the task
storage is obtained from the CDSA
v SMTUDSA (X'02') indicating that the task
storage is obtained from the UDSA
v SMTECDSA (X'05') indicating that the task
storage is obtained from the ECDSA
v SMTEUDSA (X'06') indicating that the
task storage is obtained from the EUDSA
DSA Name tells you whether the dynamic storage area is in the CDSA, UDSA,
ECDSA, or EUDSA.
Access is the type of access of the subpool. It will be either CICS, or USER.
Getmain Requests is the total number of task subpool GETMAIN requests from this
dynamic storage area.
Freemain Requests is the total number of task subpool FREEMAIN requests from this
dynamic storage area.
Peak Elements is the peak number of elements in all the task subpools in this
dynamic storage area.
Peak Elem Storage is the peak amount of storage occupied by all elements in task
subpools within this dynamic storage area, expressed in bytes.
Peak Page Storage is the peak amount of storage in all pages allocated to task subpools
within this dynamic storage area, expressed in bytes.
Table manager
Table manager: global statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS TABLEMGR command, and are mapped by the DFHA16DS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
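A sketch using the command named above:

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS TABLEMGR
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHA16DS DSECT.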
Table 109. Table manager: global statistics
DFHSTUP name Field name Description
NOT IN THE DFHSTUP A16NTAB is the number of tables defined to the table
REPORT manager.
Table Name is the name of a CICS table supported by the table manager.
Average Table Size (bytes) is the average amount of storage, expressed in
bytes, used by the table manager to support the table named in the field
above (for example, for scatter tables and directory segments). This does
not include storage used by the tables themselves.
Peak Table Size (bytes) is the peak amount of storage, expressed in bytes, used by the table
manager to support the table named in the field above (for
example, for scatter tables and directory segments). This does not
include storage used by the tables themselves.
Temporary storage
Temporary storage statistics are produced for the data that is written into a
temporary storage queue.
Times queues created TSGSTA3F is the number of times that CICS created
individual temporary storage queues.
Available bytes per control interval TSGNAVB is the number of bytes
available for use in the TS data set control interval.
Times string wait occurred TSGVWTN is the number of I/O requests that were
queued because no strings were available.
This is zero if the number of strings is the
same as the number of buffers. If this is a
high percentage (over 30%) of the number of
I/O requests, consider increasing the
number of strings initially allocated.
Put/Putq main storage requests is the total number of records that
application programs wrote to main temporary storage.
Get/Getq main storage requests is the total number of records that
application programs obtained from main temporary storage.
Peak storage for temp. storage (main) is the peak value, expressed in
bytes, of the amount of virtual storage used for temporary storage
records.
Put/Putq auxiliary storage requests is the total number of records that
application programs wrote to auxiliary temporary storage.
Get/Getq auxiliary storage requests is the total number of records that
application programs obtained from auxiliary temporary storage.
Peak temporary storage names in use is the peak number of temporary
storage queue names in use at any one time.
Number of entries in longest queue is the peak number of items in any one
queue, up to a maximum of 32767.
Times queues created is the total number of times that CICS created individual temporary
storage queues.
Control interval size is the size of VSAM’s unit of transmission between DASD and main
storage, specified in the CONTROLINTERVALSIZE parameter in the
VSAM CLUSTER definition for the temporary storage data set (for
guidance information about this, see the CICS Operations and
Utilities Guide ). In general, using large CIs permits more data to be
transferred at one time, resulting in less system overhead.
Available bytes per control interval is the number of bytes available for
use in each TS data set control interval.
Segments per control interval is the number of segments in each TS data
set control interval.
Bytes per segment is the number of bytes per segment.
Writes more than control interval is the total number of writes of
records whose length was greater than the control interval (CI) size. If
the reported value is large, increase the CI size. If the value is zero,
consider reducing the CI size until a small value is reported.
Longest auxiliary temporary storage record is the size, expressed in
bytes, of the longest record written to the temporary storage data set.
Number of control intervals available is the number of control intervals
(CIs) available for auxiliary storage. This is the total available space
on the temporary storage data set expressed as a number of control
intervals. This is not the space remaining at termination.
Peak control intervals available is the peak number of CIs containing
active data.
Times aux. storage exhausted is the total number of situations where one
or more transactions may have been suspended because of a NOSPACE
condition, or (using a HANDLE CONDITION NOSPACE command) may have been
forced to abend. If this item appears in the statistics, increase the
size of the temporary storage data set.
Number of temp. storage compressions is the total number of times that
temporary storage buffers were compressed.
Note: The following statistics are produced for buffer usage:
Temporary storage buffers is the total number of temporary storage
buffers specified in the TS= system initialization parameter or in the
overrides.
Buffer waits is the total number of times a request was queued because
all buffers were allocated to other tasks. A buffer wait occurs if the
required control interval is already in a locked buffer, and therefore
unavailable, even if there are other buffers available.
Peak users waiting on buffers is the peak number of requests queued
because no buffers were available.
Buffer writes is the total number of WRITEs to the temporary storage data
set. This includes both WRITEs necessitated by recovery requirements (see
next item) and WRITEs forced by the buffer being needed to accommodate
another CI. I/O activity caused by the latter reason can be minimized by
increasing buffer allocation.
Forced writes for recovery is the subset of the total number of WRITEs
caused by recovery being specified for queues. This I/O activity is not
affected by buffer allocation.
Buffer reads is the total number of times a CI has to be read from disk.
Increasing the buffer allocation decreases this activity.
Format writes is the total number of times a new CI was successfully written at
the end of the data set to increase the amount of available space in
the data set. A formatted write is attempted only if the current
number of CIs available in the auxiliary data set have all been used.
Note: The following statistics are produced for string usage:
Temporary storage strings is the total number of temporary storage
strings specified in the TS= system initialization parameter or in the
overrides.
Peak number of strings in use is the peak number of concurrent I/O
operations. If this is significantly less than the number specified in
the SIT, consider reducing the SIT value to approach this number.
Times string wait occurred is the total number of I/O requests that were
queued because no strings were available. This is zero if the number of
strings is the same as the number of buffers. If this is a high
percentage (over 30%) of the number of I/O requests, consider increasing
the number of strings initially allocated.
Peak number of users waiting on string is the peak number of I/O requests
that were queued at any one time because all strings were in use.
I/O errors on TS data set is the total number of input/output errors
which occurred on the temporary storage data set. This should normally be
zero. If it is not, inspect the CICS and VSAM messages to determine the
cause.
Terminal control
This is the DFHSTUP listing for terminal statistics.
There are a number of ways in which terminal statistics are important for
performance analysis. From them, you can get the number of inputs and outputs,
that is, the loading of the system by end users. Line-transmission faults and
transaction faults are shown (these both have a negative influence on performance
behavior).
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS TERMINAL command, and are mapped by the DFHA06DS DSECT.
For programming information about the EXEC CICS COLLECT STATISTICS
command, see the CICS System Programming Reference manual.
In addition to this, this DSECT should be used to map the terminal totals record.
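For example (a sketch; the terminal identifier T001 is hypothetical):

       WORKING-STORAGE SECTION.
       77  STATS-PTR            USAGE POINTER.
       PROCEDURE DIVISION.
           EXEC CICS COLLECT STATISTICS TERMINAL('T001')
                SET(STATS-PTR)
           END-EXEC.
      *    The returned area is mapped by the DFHA06DS DSECT.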
Table 117. Terminal control: resource statistics
DFHSTUP name Field name Description
Line Id (TCAM and BSAM A06TETI is the line number for TCAM and BSAM
only) (sequential device support) lines. The line ID
is blank for all other access methods.
Xmission Errors A06TETE is the number of errors for this terminal, or
the number of disconnects for this session.
Line Id (TCAM and BSAM only) is the line number for TCAM and BSAM
(sequential device support) lines. The line ID is blank for all other
access methods.
Term Id is the identifier of each terminal as stated in the TERMINAL
attribute in CEDA or in the TRMIDNT= operand in the TCT.
LUname is the terminal LU name.
The remainder of the information should be used for tracking terminal activity.
Polls (TCAM and BSAM only) is the total number of polls that have been sent to the terminal.
This field is for TCAM and BSAM only.
Terminal Type is the terminal type as defined in the TCT. For information about
terminal types and their codes, see the DFHTCTTE DSECT.
Acc Meth is the terminal access method as defined in the TCT. For
information about access methods and their codes, see the
DFHTCTTE DSECT.
Conn ID is the last value found for the owning connection name for this
terminal/session.
No. of Xactions is the number of transactions, both nonconversational and
pseudoconversational, that were started at this terminal. The
transaction count is less than input messages if conversational
transactions are being used.
When the operator signs off, neither the transaction count nor the
transaction error count is reset. At sign-off, message DFHSN1200 is
issued containing the transaction count and the transaction error
count for that operator.
Storage Viols is the number of storage violations that have occurred on this
terminal.
Input Messages See note.
Output Messages See note.
Note: Input messages (A06TENI) and output messages (A06TENO) are the amount of
message activity per terminal. Input and output messages should represent the message
traffic between CICS and the terminal. Input traffic should be the result of operator initiated
input: that is, initial transaction input or input as a result of a conversational read to the
terminal. Output messages should be output written by the application program or
messages sent by CICS.
Input and output messages can vary because of differences in the application program being
used on different terminals. ATI-initiated transactions would typically not have terminal
input but could result in one or many output messages. A batch oriented terminal could
initiate a single transaction that did multiple reads to the terminal resulting in multiple
input messages. The differences between the remote and local terminal counts may be a
result of different applications that run on them. Otherwise, they should be similar.
Xmission Errors is the number of errors for this terminal, or the number of
disconnects for this session.
Pipeline messages
–Totals is the total throwaway count.
–Groups is the number of consecutive throwaways.
–Max Csec is the maximum throwaway count.
TIOA Storage is the TIOA storage allowed at this terminal.
Avg logged on time is the average logged on time for an autoinstalled terminal/session.
This field is blank if the terminal/session is not autoinstalled.
Peak Queued The total highest number of transactions queued waiting for
admittance to the transaction class.
Times Max Act The total number of separate times that the number of active
transactions in the transaction class was equal to the maximum
value.
Times PurgeThr The total number of separate times that the purge threshold has
been reached.
Average Queuing-Time The average time spent waiting by those transactions that were queued.
Transaction manager
Transaction manager: global statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS TRANSACTION command, and are mapped by the DFHXMGDS
DSECT. For programming information about the EXEC CICS COLLECT
STATISTICS command, see the CICS System Programming Reference manual.
Table 121. Transaction manager: global statistics
DFHSTUP name Field name Description
Peak number of active user XMGPAT is the peak number of active user
transactions transactions reached in the system.
Total number of transactions (user and system) is the total number of tasks that have run in the system.
MAXTASK limit is the last MXT value (expressed as a number of tasks) that was
specified in the SIT, or as an override, or changed dynamically
using CEMT SET SYSTEM MAXTASKS(value) or EXEC CICS SET
SYSTEM MAXTASKS(fullword binary data-value) commands.
Times the MAXTASK limit reached is the total number of times MXT has been reached.
Peak number of active user transactions is the peak number of active user transactions reached in the
system.
Total number of active user transactions is the total number of user transactions that have become active.
Total number of MAXTASK delayed user transactions is the total number of transactions that had to queue for MXT
reasons.
Total MAXTASK queuing time is the total time spent waiting by those user transactions that had to
queue for MXT reasons.
Average MAXTASK queuing time of queued transactions is the average time spent waiting by those user
transactions that had to queue for MXT reasons.
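The MAXTASK limit entry above notes that MXT can be changed dynamically. As a
hedged illustration (not taken from this manual), such a change might look like
this in a command-level C program; the value 60 and the function name are
assumptions:

   /* Sketch: set MXT dynamically using the command quoted in the table
      above; MAXTASKS takes a fullword binary data-value. */
   void raise_mxt(void)
   {
       long new_mxt = 60;   /* illustrative value only */
       long resp    = 0;

       EXEC CICS SET SYSTEM MAXTASKS(new_mxt) RESP(resp);

       if (resp != DFHRESP(NORMAL)) {
           /* handle error conditions (for example, NOTAUTH) here */
       }
   }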
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS command and are mapped by the DFHXMRDS DSECT. For
programming information about the EXEC CICS COLLECT STATISTICS command, see the
CICS System Programming Reference manual.
The following statistics are further breakdowns of the recovery manager global
statistics, to aid isolation of potential integrity exposures.
Indoubt Waits is the number of indoubt waits (shunts) that have occurred for
UOWs executing on behalf of this transaction.
Indoubt action forced
–Trandefn is the number of times this transaction id had a UOW that could
not be shunted when an indoubt failure occurred, because the
transaction definition for this transaction id specified that it could
not support indoubt waiting (that is, XMRIWTOP = XMRIWTN). The
UOW would have been forced to resolve in the direction specified
by XMRIACTN, regardless of the actions taken by any other
participating region in this distributed UOW.
–Timeout is the number of times this transaction id had a UOW that,
although shunted because of an indoubt failure, had the wait for
resynchronization with its recovery coordinator terminated
prematurely, because the indoubt wait timeout value (XMRITOV)
had been exceeded. The UOW would have been forced to resolve in
the direction specified by XMRIACTN, regardless of the actions
taken by any other participating region in this distributed UOW.
–Operator is the number of times this transaction id had a UOW that,
although shunted because of an indoubt failure, had the wait for
resynchronization with its recovery coordinator terminated
prematurely, because an operator (CEMT) or SPI command forced a
resolution. The UOW would have been forced to resolve in the
direction specified by XMRIACTN by default, or in the direction
specified by the operator, regardless of the actions taken by any
other participating region in this distributed UOW.
–No waiting is the number of times this transaction id had a UOW that could
not be shunted when an indoubt failure occurred, even though the
transaction definition specified that it could (XMRIWTOP =
XMRIWTY), because the resource managers (RMIs), CICS
resources, or CICS connections used by the UOW could not support
indoubt waiting (shunting). The UOW would have been forced to
resolve in the direction specified by XMRIACTN, regardless of the
actions taken by any other participating region in this distributed
UOW.
–Other is the number of times this transaction id had a UOW that,
although shunted because of an indoubt failure, had the wait for
resynchronization with its recovery coordinator terminated
prematurely for reasons other than those stated above. Such reasons
include a cold-started recovery coordinator, a resynchronization
protocol violation or failure, or a change in the level of a resource
manager (RMI) adapter. The UOW would have been forced to resolve
in the direction specified by XMRIACTN by default, or in the
direction specified by the operator, regardless of the actions taken
by any other participating region in this distributed UOW.
Action mismatch is the number of times this transaction id had a UOW that was
forced to resolve using the indoubt action attribute, whether by
definition, option or operator override (as detailed in the above
fields), and on doing so detected an indoubt action attribute
mismatch with a participating system or resource manager (RMI).
For example, a participating system in a distributed UOW resolves
its work forward while other systems back out theirs. The opposite
also applies.
Peak intrapartition buffer waits is the peak number of requests queued because no buffers were
available.
In the statistics produced for the intrapartition data set, all of the
intrapartition data set statistics above are printed, even if the values
reported are zero.
CICS produces the following statistics for multiple strings:
Times strings accessed is the total number of times a string was accessed.
Peak concurrent string accesses is the peak number of strings concurrently accessed in the system.
Intrapartition string waits is the total number of times that tasks had to wait because no
strings were available.
Peak string waits is the peak number of concurrent string waits in the system.
The TQRQTYPE field is not displayed in the DFHSTUP report. It signifies the
queue type (decoded in the C sketch after this list), which can be one of:
v TQRQTEXT (X'01') for extrapartition queues
v TQRQTINT (X'02') for intrapartition queues
v TQRQTIND (X'03') for indirect queues
v TQRQTREM (X'04') for remote queues.
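For illustration only, these queue-type codes could be given symbolic names and
decoded as in the following C sketch; the helper function is hypothetical:

   /* Sketch: symbolic names for the TQRQTYPE values listed above. */
   enum tqrqtype {
       TQRQTEXT = 0x01,   /* extrapartition queue */
       TQRQTINT = 0x02,   /* intrapartition queue */
       TQRQTIND = 0x03,   /* indirect queue       */
       TQRQTREM = 0x04    /* remote queue         */
   };

   static const char *queue_type_name(unsigned char type)
   {
       switch (type) {
       case TQRQTEXT: return "extrapartition";
       case TQRQTINT: return "intrapartition";
       case TQRQTIND: return "indirect";
       case TQRQTREM: return "remote";
       default:       return "unknown";
       }
   }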
Timeout mean reuse time USGTOMRT is the average time user instances remain on
the timeout queue until they are removed.
| Directory reuse count USGDRRC is the number of times a directory entry was reused.
Average timeout reuse time is the average time user instances remain on the timeout queue
until they are removed.
Timeout reuse count is the number of times a user instance is reused while being timed
out.
Timeout expiry count is the number of times a user instance remains on the timeout
queue for a full USRDELAY interval without being reused.
Directory reuse count records how many times an existing user instance is reused.
Directory not found count records the number of times a user instance needs to be added
because it does not already exist in the directory.
VTAM statistics
These statistics can be accessed online using the EXEC CICS COLLECT
STATISTICS VTAM command, and are mapped by the DFHA03DS DSECT. For
programming information about the EXEC CICS COLLECT STATISTICS command,
see the CICS System Programming Reference manual.
Times at RPL maximum A03RPLXT is the number of times the peak RPL posted
value (A03RPLX) was reached.
Dynamic opens count A03DOC is the number of times the VTAM access
method control block (ACB) was opened
through the control terminal. If VTAM is
started before CICS and stays active for the
whole CICS run, this value is zero.
Times at RPL maximum is the total number of times the maximum RPL posted value
(A03RPLX) was reached.
Peak RPLs posted is the peak number of receive-any request parameter lists (RPLs)
that are posted by VTAM on any one dispatch of terminal control.
Short on storage count is a counter that is incremented in the VTAM SYNAD exit in the
CICS terminal control program each time VTAM indicates that there
is a temporary VTAM storage problem.
Dynamic opens count is the total number of times that the VTAM access method control
block (ACB) was opened through the control terminal. If VTAM is
started before CICS and stays active for the whole CICS run, this
value is 0.
Current LUs in session is the average value for the number of LUs logged on.
HWM LUs in session is the highest value of the number of LUs logged on.
PS inquire count is the total number of times CICS issued INQUIRE
OPTCD=PERSESS.
PS nib count is the total number of VTAM sessions that persisted.
PS opndst count is the total number of persisting sessions that were successfully
restored.
PS unbind count is the total number of persisting sessions that were terminated.
PS error count is the total number of persisting sessions that were already
unbound when CICS tried to restore them.
Structure
S1PREF First part of structure name
S1POOL Poolname part of structure name
S1CNPREF Prefix for connection name
S1CNSYSN Own MVS system name from CVTSNAME
Size S1SIZE Current allocated size of the list structure.
Elem size S1ELEMLN Data element size, fullword, used for the
structure.
Max size S1SIZEMX Maximum size to which this structure could
be altered.
Lists
Total S1HDRS Maximum number of list headers
Control S1HDRSCT Headers used for control lists
Data S1HDRSQD Headers available for queue data
In use S1USEDCT Number of entries on used list
Max used S1USEDHI Highest number of entries on used list
Entries
In Use S1ENTRCT Number of entries currently in use.
Max Used S1ENTRHI Maximum number in use (since last reset).
Min Free S1ENTRLO Minimum number of free entries (since last
reset).
Total S1ENTRMX Total data entries in the currently allocated
structure. (Obtained at connection time, may
be updated by ALTER).
S1FREECT Number of entries on free list
S1ENTRRT Entry side of entry:element ratio
S1FREEHI Highest number of entries on free list
Elements
In use S1ELEMCT Number of elements currently in use.
Max used S1ELEMHI Maximum number in use (since last reset).
The statistics are described in detail in the DFHXQS2D data area. The individual
fields have the following meanings:
Table 142. Shared TS queue server:queue pool statistics
Statistic name Field Description
Buffers
Total S2BFQTY Number of buffers in the pool.
Max used S2BFENTH Highest number ever used (not affected by
reset).
Active S2BFACTS Buffers currently in use.
On LRU S2BFLRUS Buffers with valid contents on LRU chain to
allow reuse.
Empty S2BFEMPS Buffers previously used but now empty.
Requests
Gets S2BFGETS Requests to get a buffer.
Puts S2BFPUTS Put back buffer with valid contents
Keep S2BFKEPS Keeps (put back buffer with modified
contents).
Free S2BFFRES Requests to put back a buffer as empty.
Purges S2BFPURS Request to discard contents of a previously
valid buffer.
Results (Get)
Got hit S2BFHITS Get requests that found a buffer already
containing the required valid contents.
Got free S2BFGFRS Get requests satisfied by an empty (free)
buffer. (This function is not currently used
by the queue server).
Got new S2BFGNWS Get requests that obtained a buffer not
previously used.
Got LRU S2BFGLRS Request discarded and reused the oldest
valid buffer.
No buf S2BFGNBS GETs which returned no buffer.
Errors
Not freed S2BFFNOS A request tried to release a buffer it did not
own. (This can occur during error recovery).
These statistics are for the named storage page pool produced since the most
recent statistics (if any). Each of the storage statistics is shown in kilobytes and as a
percentage of the total size.
|
| Coupling facility data tables: list structure statistics
| The statistics are described in detail in the DFHCFS6K data area.
| Structure
| S6NAME Full name of list structure
| S6PREF First part of structure name
| S6POOL Pool name part of structure name
| S6CNNAME Name of connection to structure
| S6CNPREF Prefix for connection name
| S6CNSYSN Own MVS system name from CVTSNAME
| Size S6SIZE Current allocated size of the list structure.
| Max size S6SIZEMX Maximum size to which this structure could
| be altered.
| Lists
| Total S6HDRS Maximum number of list headers in the
| structure.
| Control S6HDRSCT Number of lists in use for control
| information.
| Data S6HDRSTD Number of lists in use for table data.
| Structure
| Elem size S6ELEMLN Data element size used for the structure.
| S6ELEMPW Data element size as a power of 2
| S6ELEMRT Element side of entry:element ratio
| S6ENTRRT Entry side of entry:element ratio
| Entries
| In use S6ENTRCT Number of entries currently in use.
| Max used S6ENTRHI Maximum number in use (since last reset).
| Min free S6ENTRLO Minimum number of free entries (since last
| reset).
| Total S6ENTRMX Total entries in the currently allocated
| structure (initially set at structure connection
| time and updated on completion of any
| structure alter request).
| Elements
| In Use S6ELEMCT Number of elements currently in use.
| Access
| S7TABLE Table name padded with spaces
| Vector
| S7STATS Statistics vector
| Table requests
| Open S7OCOPEN Number of successful OPEN requests for the
| table.
| Close S7OCCLOS Number of successful CLOSE requests for
| the table.
| Set Attr S7OCSET Number of times new table status was set.
| Delete S7OCDELE Number of times the table of that name was
| deleted.
| Stats S7OCSTAT Extract table statistics.
| Record requests
| Point S7RQPOIN Number of POINT requests.
| Highest S7RQHIGH Number of requests for current highest key.
| Read S7RQREAD Number of READ requests (including those
| for UPDATE)
| Vector
| S8STATS Statistics vector
| Table
| Open S8OCOPEN Number of successful OPEN requests for the
| table
| Close S8OCCLOS Number of successful CLOSE requests for
| the table.
| Set Attr S8OCSET Number of times new table status was set.
| Delete S8OCDELE Number of times the table of that name was
| deleted.
| Stats S8OCSTAT Number of times table access statistics were
| extracted.
| Record
| Point S8RQPOIN Number of POINT requests.
| Highest S8RQHIGH Number of requests for current highest key
| Read S8RQREAD Number of READ requests (including those
| for UPDATE)
| Read Del S8RQRDDL Number of combined READ and DELETE
| requests
| Unlock S8RQUNLK Number of UNLOCK requests.
| Loads S8RQLOAD Number of records written by initial load
| requests.
| Write S8RQWRIT Number of WRITE requests for new records
| Rewrite S8RQREWR Number of REWRITE requests.
| Delete S8RQDELE Number of DELETE requests.
| Del Mult S8RQDELM Number of multiple (generic) delete requests
| Table
| Inquire S8IQINQU Number of INQUIRE table requests.
| UOW
| Prepare S8SPPREP Number of units of work prepared.
| Retain S8SPRETA Number of units of work whose locks were
| retained.
| Commit S8SPCOMM Number of units of work committed.
| Backout S8SPBACK Number of units of work backed out.
| Inquire S8SPINQU Number of units of work INQUIRE requests.
| Storage is initially allocated from the pool using a bit map. For faster allocation,
| free areas are not normally returned to the pool but are added to a vector of free
| chains depending on the size of the free area (1 to 32 pages). When storage is
| being acquired, this vector is checked before going to the pool bit map.
| If there are no free areas of the right size and there is not enough storage left in
| the pool, free areas in the vector are put back into the pool, starting from the
| smallest end, until a large enough area has been created. This action appears as a
| compress attempt in the statistics. If there is still insufficient storage to satisfy the
| request, the request fails.
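The following is a minimal sketch, in C, of the allocation scheme just
described. All names, the page size, and the pool size are illustrative
assumptions; the server's actual implementation is not shown in this manual.

   #include <string.h>

   #define POOL_PAGES 256                  /* illustrative pool size      */
   #define PAGE_SIZE  4096                 /* illustrative page size      */
   #define MAX_CHAIN  32                   /* free areas of 1 to 32 pages */

   static unsigned char page_map[POOL_PAGES];      /* the pool "bit map"  */
   static char pool[POOL_PAGES][PAGE_SIZE];        /* the page pool       */

   struct free_area { struct free_area *next; };
   static struct free_area *chain[MAX_CHAIN + 1];  /* free-chain vector   */
   static long compress_attempts;                  /* shown in statistics */

   static int page_of(void *a) { return (int)((char (*)[PAGE_SIZE])a - pool); }

   /* First-fit allocation from the bit map. */
   static void *map_alloc(int pages)
   {
       for (int i = 0; i + pages <= POOL_PAGES; i++) {
           int j = 0;
           while (j < pages && !page_map[i + j]) j++;
           if (j == pages) { memset(&page_map[i], 1, j); return pool[i]; }
           i += j;                         /* skip past the in-use page   */
       }
       return NULL;
   }

   void *pool_get(int pages)
   {
       /* 1. Check the free-chain vector before going to the bit map. */
       if (pages <= MAX_CHAIN && chain[pages] != NULL) {
           struct free_area *a = chain[pages];
           chain[pages] = a->next;
           return a;
       }
       /* 2. No chained area of the right size: allocate from the bit map. */
       void *area = map_alloc(pages);
       if (area != NULL) return area;

       /* 3. "Compress attempt": put chained free areas back into the pool,
             starting from the smallest end, until the request can be met. */
       compress_attempts++;
       for (int n = 1; n <= MAX_CHAIN; n++)
           while (chain[n] != NULL) {
               struct free_area *a = chain[n];
               chain[n] = a->next;
               memset(&page_map[page_of(a)], 0, n);
               if ((area = map_alloc(pages)) != NULL) return area;
           }
       return NULL;                        /* still insufficient: fail    */
   }

   /* Freed areas normally go onto the vector, not back to the pool. */
   void pool_free(void *area, int pages)
   {
       if (pages > MAX_CHAIN) {            /* oversized: return to the map */
           memset(&page_map[page_of(area)], 0, pages);
           return;
       }
       struct free_area *a = area;
       a->next = chain[pages];
       chain[pages] = a;
   }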
|
| Named counter sequence number server statistics
| The statistics are described in detail in the DFHNCS4K data area.
| Structure:
| Lists
| S4NAME Full name of list structure
| S4PREF First part of structure name
| S4POOL Pool name part of structure name
| S4CNNAME Name for connection to structure
| S4CNPREF Prefix for connection name
| S4CNSYSN Own MVS system name from CVTSNAME
| Size S4SIZE Current allocated size for the list structure.
| Max size S4SIZEMX Maximum size to which this structure could
| be altered.
| Entries
| In Use S4ENTRCT Number of entries currently in use.
| Max Used S4ENTRHI Maximum number of entries in use (since
| last reset).
| Min Free S4ENTRLO Minimum number of free entries (since last
| reset).
| Total S4ENTRMX Total entries in the currently allocated
| structure (initially set at structure connection
| time and updated on completion of any
| structure alter request).
| Requests
| Create S4CRECT Create counter
| Get S4GETCT Get and increment counter
| Set S4SETCT Set counter
| Delete S4DELCT Delete counter
| Inquire S4KEQCT Inquire KEQ
| Browse S4KGECT Inquire KGE
| Responses
| Asynch S4ASYCT Number of requests for which completion
| was asynchronous.
| Normal S4RSP1CT Number of normal responses.
| Not Fnd S4RSP2CT The specified entry (table or item) was not
| found.
| Storage is initially allocated from the pool using a bit map. For faster allocation,
| free areas are not normally returned to the pool but are added to a vector of free
| chains depending on the size of the free area (1 to 32 pages). When storage is
| being acquired, this vector is checked before going to the pool bit map.
| If there are no free areas of the right size and there is not enough storage left in
| the pool, free areas in the vector are put back into the pool, starting from the
| smallest end, until a large enough area has been created. This action appears as a
| compress attempt in the statistics. If there is still insufficient storage to satisfy the
| request, the request fails.
| These statistics are for the named storage page pool produced since the most
| recent statistics (if any). Each of the storage statistics is shown in kilobytes and as a
| percentage of the total size.
All programs are command level and run above the 16MB line.
DFH0STAT can be invoked from the PLT at PLTPI (second phase) or PLTSD (first
phase), or as a CICS transaction (either from a console or as a conversational
transaction from a terminal). The output is sent via the CICS JES SPOOL
interface, for which a number of default parameters can be changed by the user
to specify the distribution of the report(s). These defaults are defined in the
working-storage section of this program under the 01 level “OUTPUT-DEFAULTS”.
| In addition, the statistics report selection mapset allows the user to select the
| required statistics reports. Figure 33 shows an example of the statistics report
| selection mapset with the default reports selected.
|
Figure 33. Statistics report selection screen: “Sample Program - CICS Statistics Print Report Selection”, 08/24/1998 12:02:55 (F3=Return to print)
| The heading of each report includes the generic applid, sysid, jobname, date and
| time, and the CICS version and release information.
|
| System Status Report
| Figure 34 on page 522 shows the format of the System status report. The field
| headings and contents are described in Table 150 on page 522.
|
System Status
Monitoring Statistics
| CICS writes exception class SMF records as soon as the monitor domain is
| notified of the exception completion, so there is one exception record per
| SMF record. The performance class, however, has many performance class
| records per SMF record. The SMF record for the performance class is written
| when the buffer is full, performance class has been deactivated, or CICS is
| quiescing.
| Source field:
| Statistics Interval The current statistics recording interval.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 P
Transaction Manager
Dispatcher
Force Quasi-Reentrant. . . : No
Dispatcher TCBs
Dispatcher Start Time and Date . . . . . : 08:00:23.71167 12/02/1998
TCB Current Peak Op. System Op. System Total TCB Total TCB DS TCB TCB CPU/Disp
Mode TCBs TCBs Waits Wait Time Dispatch Time CPU Time CPU Time Ratio
_______________________________________________________________________________________________________________
QR 1 1 3,492 00:51:51.09861 00:00:06.61393 00:00:03.69497 00:00:00.47251 55.8%
RO 1 1 123 00:51:44.37901 00:00:11.83684 00:00:01.46320 00:00:00.00512
CO 1 1 484 00:29:28.25675 00:00:00.71757 00:00:00.28578 00:00:00.16471
SZ 1 1 436 00:29:08.30509 00:00:01.37113 00:00:00.59021 00:00:00.21527
RP 0 0 0 00:00:00.00000 00:00:00.00000 00:00:00.00000 00:00:00.00000
FO 1 1 87 00:00:23.01983 00:00:03.76582 00:00:00.29225 00:00:00.00414
SL 1 1 4 00:31:28.23371 00:00:00.00676 00:00:00.00593 00:00:00.00009
SO 1 1 2 00:00:15.28525 00:00:01.20830 00:00:00.00000 00:00:00.00000
J8 0 0 0 00:00:00.00000 00:00:00.00000 00:00:00.00000 00:00:00.00000
L8 0 0 0 00:00:00.00000 00:00:00.00000 00:00:00.00000 00:00:00.00000
S8 0 0 0 00:00:00.00000 00:00:00.00000 00:00:00.00000 00:00:00.00000
____________________________________________________________________________________________________
Totals 00:00:06.33236 00:00:00.86185
TCB Mode The name of the TCB mode that the statistics refer to. The names
of the TCB modes are 'QR', 'RO', 'CO', 'SZ', 'RP', 'FO', 'SL', 'SO',
'J8', 'L8', and 'S8'.
Storage Reports
The Storage below 16MB report provides information on the use of MVS and CICS
virtual storage. It contains the information you need to understand your current
use of virtual storage below 16MB and helps you to verify the size values used for
the CDSA, UDSA, SDSA, and RDSA and the value set for the DSA limit. Figure 38
| on page 534 shows the format of the Storage Below 16MB Report. This report is
| produced using the EXEC CICS COLLECT STATISTICS STORAGE command. The
| statistics data is mapped by the DFHSMSDS DSECT. The field headings and
| contents are described in Table 154 on page 534.
Getmain Requests. . . . . . . : 19 3 0 0
Freemain Requests . . . . . . : 17 3 0 0
Current number of Subpools. . : 27 9 7 3 46
Add Subpool Requests. . . . . : 10 10 0 0
Delete Subpool Requests . . . : 9 9 0 0
The Storage Above 16MB Report provides information on the use of MVS and
CICS virtual storage. It contains the information you need to understand your
current use of virtual storage above 16MB and helps you to verify the size values
used for the ECDSA, EUDSA, ESDSA, and ERDSA and the value set for the EDSA
limit. Figure 39 on page 539 shows the format of the Storage Above 16MB Report.
| This report is produced using the EXEC CICS COLLECT STATISTICS STORAGE
| command. The statistics data is mapped by the DFHSMSDS DSECT. The field
| headings and contents are described in Table 155 on page 539.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 7
Loader
Library Load requests. . . . . . . . . . . . . : 15 Total Program Uses . . . . . . . . . . . . . . : 541
Total Library Load time. . . . . . . . . . . . : 00:00:00.34448 Program Use to Load Ratio. . . . . . . . . . . : 36.06
Average Library Load time. . . . . . . . . . . : 00:00:00.02296
Times DFHRPL secondary extents detected. . . . : 0
Library Load requests that waited. . . . . . . : 0
Total Library Load request wait time . . . . . : 00:00:00.00000
Average Library Load request wait time . . . . : 00:00:00.00000
Current Waiting Library Load requests. . . . . : 0
Peak Waiting Library Load requests . . . . . . : 0
Times at Peak. . . . . . . . . . . . . . . . . : 0 Average Not-In-Use program size. . . . . . . . : 14K
CDSA ECDSA
Programs Removed by compression. . . . . . . . : 0 Programs Removed by compression. . . . . . . . : 0
Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000 Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000
Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000 Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000
Programs Reclaimed from the Not-In-Use Queue . : 0 Programs Reclaimed from the Not-In-Use Queue . : 435
Programs Loaded - now on the Not-In-Use Queue. : 0 Programs Loaded - now on the Not-In-Use Queue. : 15
SDSA ESDSA
Programs Removed by compression. . . . . . . . : 0 Programs Removed by compression. . . . . . . . : 0
Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000 Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000
Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000 Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000
Programs Reclaimed from the Not-In-Use Queue . : 0 Programs Reclaimed from the Not-In-Use Queue . : 2
Programs Loaded - now on the Not-In-Use Queue. : 0 Programs Loaded - now on the Not-In-Use Queue. : 3
RDSA ERDSA
Programs Removed by compression. . . . . . . . : 0 Programs Removed by compression. . . . . . . . : 0
Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000 Time on the Not-In-Use Queue . . . . . . . . . : 00:00:00.00000
Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000 Average Time on the Not-In-Use Queue . . . . . : 00:00:00.00000
Programs Reclaimed from the Not-In-Use Queue . : 0 Programs Reclaimed from the Not-In-Use Queue . : 75
Programs Loaded - now on the Not-In-Use Queue. : 1 Programs Loaded - now on the Not-In-Use Queue. : 24
Program Storage
Nucleus Program Storage (CDSA) . . . . . . . . : 36K Nucleus Program Storage (ECDSA). . . . . . . . : 104K
Program Storage (SDSA) . . . . . . . . . . . . : 0K Program Storage (ESDSA). . . . . . . . . . . . : 12K
Resident Program Storage (SDSA). . . . . . . . : 0K Resident Program Storage (ESDSA) . . . . . . . : 0K
Read-Only Nucleus Program Storage (RDSA) . . . : 156K Read-Only Nucleus Program Storage (ERDSA). . . : 5,988K
Read-Only Program Storage (RDSA) . . . . . . . : 52K Read-Only Program Storage (ERDSA). . . . . . . : 3,660K
Read-Only Resident Program Storage (RDSA). . . : 0K Read-Only Resident Program Storage (ERDSA) . . : 0K
CDSA used by Not-In-Use programs. : 0K 0.00% of CDSA ECDSA used by Not-In-Use programs : 55K 1.08% of ECDSA
SDSA used by Not-In-Use programs. : 0K 0.00% of SDSA ESDSA used by Not-In-Use programs : 10K 1.00% of ESDSA
RDSA used by Not-In-Use programs. : 1K 0.22% of RDSA ERDSA used by Not-In-Use programs : 573K 5.60% of ERDSA
Storage Subpools
SMSHARED CDSA 4K 4K
SMSHRC24 CDSA 0K 0K
SMSHRU24 SDSA 84K 84K
SMSHRC31 ECDSA 4K 4K
SMSHRU31 ESDSA 0K 0K
TSMAIN ECDSA 0K 0K
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 9
Transaction Classes
Tclass Trans Attach Class At Cur Peak Purge At Cur Peak Accept Accept Purged Purge Total Avg. Avg. Cur
Name in Tcl in Tcl Limit Limit Active Active Thresh Thresh Queued Queued Immed Queued Immed Queued Queued Que Time Que Time
__________________________________________________________________________________________________________________________________
DFHCOMCL 2 0 10 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHEDFTC 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCIND 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL01 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL02 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL03 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL04 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL05 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL06 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL07 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL08 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL09 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
DFHTCL10 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 00:00.00 00:00.00
__________________________________________________________________________________________________________________________________
Totals 2 0
Transaction Classes . : 13
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 10
Transactions
Tran Tran Program Task Data Attach Restart Dynamic --- Counts Remote Storage
id Class Name Dynamic Isolate Location/Key Count Count Local - Remote Starts Viols
__________________________________________________________________________________________________________________________________
-ALT DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-ARC DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-CAN DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-DIS DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-MOD DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-REC DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-RES DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-SET DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-STA DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-STO DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
-TER DFHD2CM1 Static Yes Any/CICS 0 0 0 0 0 0
AADD DFH$AALL Static Yes Below/USER 0 0 0 0 0 0
ABRW DFH$ABRW Static Yes Below/USER 0 0 0 0 0 0
ADMA ADMIVPC Static Yes Below/USER 0 0 0 0 0 0
ADMC ADMPSTBC Static Yes Below/USER 0 0 0 0 0 0
ADMI ADMISSEC Static Yes Below/USER 0 0 0 0 0 0
ADMM ADM1IMDC Static Yes Below/USER 0 0 0 0 0 0
ADMP ADMOPUC Static Yes Below/USER 0 0 0 0 0 0
ADMU ADM5IVUC Static Yes Below/USER 0 0 0 0 0 0
ADMV ADMVSSEC Static Yes Below/USER 0 0 0 0 0 0
ADM4 ADM4CDUC Static Yes Below/USER 0 0 0 0 0 0
ADYN DFH99 Static Yes Below/CICS 0 0 0 0 0 0
AINQ DFH$AALL Static Yes Below/USER 0 0 0 0 0 0
AMNU DFH$AMNU Static Yes Below/USER 0 0 0 0 0 0
AORD DFH$AREN Static Yes Below/USER 0 0 0 0 0 0
AORQ DFH$ACOM Static Yes Below/USER 0 0 0 0 0 0
APPA APPCP05 Static Yes Any/USER 0 0 0 0 0 0
APPC APPCP00 Static Yes Any/USER 0 0 0 0 0 0
AREP DFH$AREP Static Yes Below/USER 0 0 0 0 0 0
AUPD DFH$AALL Static Yes Below/USER 0 0 0 0 0 0
BACK DPLBACK Static Yes Below/USER 0 0 0 0 0 0
BRAS BRASSIGN Static Yes Below/USER 0 0 0 0 0 0
BRA1 BRA009BS Static Yes Below/USER 0 0 0 0 0 0
BRA2 BRA010BS Static Yes Below/USER 0 0 0 0 0 0
BRA5 BRA005BS Static Yes Below/USER 0 0 0 0 0 0
BRLT BRSTLTBS Static Yes Below/USER 0 0 0 0 0 0
BRU1 BRU001BS Static Yes Below/USER 0 0 0 0 0 0
BRU2 BRU002BS Static Yes Below/USER 0 0 0 0 0 0
BRU3 BRU003BS Static Yes Below/USER 0 0 0 0 0 0
BRU4 BRU004BS Static Yes Below/USER 0 0 0 0 0 0
BRU5 BRU005BS Static Yes Below/USER 0 0 0 0 0 0
BRU6 BRU006BS Static Yes Below/USER 0 0 0 0 0 0
CAFB CAUCAFB1 Static Yes Any/CICS 0 0 0 0 0 0
CAFF CAUCAFF1 Static Yes Any/CICS 0 0 0 0 0 0
CALL CALLJT1 Static Yes Any/USER 0 0 0 0 0 0
CATA DFHZATA Static Yes Any/CICS 0 0 0 0 0 0
CATD DFHZATD Static Yes Any/CICS 0 0 0 0 0 0
CATR DFHZATR Static Yes Any/CICS 0 0 0 0 0 0
CBAM DFHECBAM Static Yes Below/CICS 0 0 0 0 0 0
CBLT DFHDUMMY Static Yes Any/CICS 0 0 0 0 0 0
CCIN DFHCOMCL DFHZCN1 Static Yes Any/CICS 0 0 0 0 0 0
Transaction Totals
Task Data Subspace Transaction Attach
Isolate Location/Key Usage Count Count
No Below/CICS Common 0 0
No Any/CICS Common 1 0
No Below/USER Common 3 0
No Any/USER Common 0 0
Totals 242 9
Subspace Statistics
Current Unique Subspace Users (Isolate=Yes). . . : 0
Peak Unique Subspace Users (Isolate=Yes) . . . . : 1
Total Unique Subspace Users (Isolate=Yes). . . . : 2
Current Common Subspace Users (Isolate=No) . . . : 0
Peak Common Subspace Users (Isolate=No). . . . . : 0
Total Common Subspace Users (Isolate=No) . . . . : 0
Programs Report
| Figure 45 on page 555 shows the format of the Programs Report. This report is
| produced using a combination of the EXEC CICS INQUIRE PROGRAM and EXEC
| CICS COLLECT STATISTICS PROGRAM commands. The statistics data was
| mapped by the DFHLDRDS DSECT. The field headings and contents are described
| in Table 161 on page 555.
Programs
Program Data Exec Times Total Average RPL Times Times Program Program
Name Loc Key Times Used Fetched Fetch Time Fetch Time Offset Newcopy Removed Size Location
__________________________________________________________________________________________________________________________________
DFHPGAHX Any CICS 0 0 0 None
DFHPGALX Any CICS 0 0 0 None
DFHPGAMP 0 0 0 None
DFHPGAOX Any CICS 0 0 0 None
DFHPGAPG Below USER 0 0 0 None
DFHPGAPT 0 0 0 None
DFHPRK Any CICS 0 0 0 None
DFHPSIP Any CICS 0 0 2 0 0 864 ECDSA
DFHPUP Any CICS 14 0 2 0 0 20,904 ERDSA
DFHP3270 Any CICS 0 0 0 None
DFHQRY Any CICS 0 0 2 0 0 3,936 ERDSA
DFHRCEX Any CICS 0 0 0 944 None
DFHREST Any CICS 0 0 0 None
DFHRKB Any CICS 0 0 0 None
DFHRMSY Any CICS 0 0 0 None
DFHRMXN3 Any CICS 0 0 0 None
DFHRPAL Any CICS 0 0 0 None
DFHRPAS Any CICS 0 0 0 None
DFHRPC00 Any CICS 0 0 0 None
DFHRPMS Any CICS 0 0 0 None
DFHRPRP Any CICS 0 0 0 None
DFHRPTRU Any USER 0 0 0 None
DFHRP0 0 0 0 None
DFHRTC Any CICS 0 0 0 None
DFHRTE Any CICS 0 0 0 None
DFHSFP Any CICS 0 0 0 None
DFHSHRRP Any CICS 0 0 0 None
DFHSHRSP Any CICS 0 0 0 None
DFHSHSY Any CICS 0 0 2 0 0 632 ERDSA
DFHSIPLT Any CICS 0 0 0 11,152 None
DFHSNLE 0 0 2 0 0 1,384 ECDSA
DFHSNP Any CICS 0 0 2 0 0 13,264 ERDSA
DFHSNSE 0 0 0 None
DFHSTP Below CICS 0 0 0 None
DFHSZRMP Any CICS 0 0 2 0 0 213,232 ERDSA
DFHTACP Below CICS 0 0 2 0 0 5,672 CDSA
DFHTAJP Below CICS 0 0 2 0 0 1,736 ECDSA
DFHTBS Any CICS 0 0 0 None
DFHTCRP Below CICS 0 0 2 0 0 25,776 ERDSA
DFHTDRP Below CICS 0 0 2 0 0 6,432 ERDSA
DFHTEP Any CICS 0 0 2 0 0 2,592 ECDSA
DFHTEPT Any CICS 0 0 2 0 0 3,480 ECDSA
DFHTFP Any CICS 0 0 2 0 0 7,744 ECDSA
DFHTOR Any CICS 0 0 2 0 0 57,920 ERDSA
DFHTORP Below CICS 0 0 2 0 0 560 ERDSA
DFHTPQ Any CICS 0 0 0 None
DFHTPR Any CICS 0 0 0 None
DFHTPS Any CICS 0 0 0 None
DFHTRAP Any CICS 0 0 0 None
DFHTSDQ Any CICS 0 0 0 None
DFHUCNV Any CICS 0 0 0 None
DFHWBA Any CICS 0 0 0 None
DFHWBADX Any CICS 0 0 0 None
DFHWBAHX Any CICS 0 0 0 None
Program Totals
Programs. . . . . . . : 1,208
Assembler . . . . . : 1,046
C . . . . . . . . . : 6
COBOL . . . . . . . : 49
Java (JVM). . . . . : 4
LE/370. . . . . . . : 10
PL1 . . . . . . . . : 86
Remote. . . . . . . : 0
Not Deduced . . . . : 7
Maps. . . . . . . . . : 69
Partitionsets . . . . : 1
___________________________________
Total . . . . . . . . : 1,278
CDSA Programs . . . . : 0
SDSA Programs . . . . : 0
RDSA Programs . . . . : 3
ECDSA Programs. . . . : 11
ESDSA Programs. . . . : 0
ERDSA Programs. . . . : 34
LPA Programs. . . . . : 0
ELPA Programs . . . . : 0
Unused Programs . . . : 2
Not Located Programs. : 1,228
___________________________________
Total . . . . . . . . : 1,278
|
DFHRPL Analysis Report
Figure 47 on page 559 shows the format of the DFHRPL Analysis Report. The field
headings and contents are described in Table 163 on page 559.
DFHRPL Analysis
RPL Average
Offset Programs Times Used Fetches Fetch Time Newcopies Removes
0 1 2 1 00:00:00.02214 0 0
1 2 6 1 00:00:00.00422 0 0
2 52 558 4 00:00:00.03283 0 0
3 3 4 0 0 0
4 0 0 0 0 0
5 0 0 0 0 0
6 9 13 9 00:00:00.02073 0 0
7 0 0 0 0 0
8 1 0 0 0 0
9 9 0 0 0 0
Totals 77 583 15 0 0
Program Concurrency Times Total Average RPL Times Times Program Program
Name Status Times Used Fetched Fetch Time Fetch Time Offset Newcopy Removed Size Location
__________________________________________________________________________________________________________________________________
CEEEV003 Quasi Rent 0 0 9 0 0 1,934,336 ERDSA
CEEEV005 Quasi Rent 0 0 9 0 0 13,720 ERDSA
CEEEV010 Quasi Rent 0 0 9 0 0 225,488 ERDSA
CEELCLE Quasi Rent 0 0 9 0 0 25,400 ERDSA
CEEPLPKA Quasi Rent 0 0 9 0 0 851,792 ERDSA
CEEQMATH Quasi Rent 0 0 9 0 0 544,160 ERDSA
DFHAMP Quasi Rent 16 1 00:00:00.07161 00:00:00.07161 2 0 0 161,464 ERDSA
DFHAPATT Quasi Rent 0 0 2 0 0 760 ERDSA
DFHCRNP Quasi Rent 0 0 2 0 0 11,528 ERDSA
DFHCRQ Quasi Rent 0 0 2 0 0 872 ERDSA
DFHCRR Quasi Rent 0 0 2 0 0 4,824 ERDSA
DFHCRSP Quasi Rent 0 0 2 0 0 3,528 ERDSA
DFHDMP Quasi Rent 42 0 2 0 0 42,552 ERDSA
DFHD2RP Quasi Rent 0 0 2 0 0 4,544 ERDSA
DFHEDAD Quasi Rent 2 1 00:00:00.03720 00:00:00.03720 2 0 0 140,744 ERDSA
DFHEDAP Quasi Rent 2 1 00:00:00.00422 00:00:00.00422 1 0 0 3,208 ERDSA
DFHEITMT Quasi Rent 4 0 2 0 0 45,416 ERDSA
DFHEITSP Quasi Rent 4 1 00:00:00.00627 00:00:00.00627 2 0 0 25,456 ERDSA
DFHEMTD Quasi Rent 4 0 2 0 0 99,896 ERDSA
DFHEMTP Quasi Rent 4 0 1 0 0 3,304 ERDSA
DFHPUP Quasi Rent 14 0 2 0 0 20,904 ERDSA
DFHQRY Quasi Rent 0 0 2 0 0 3,936 ERDSA
DFHSHSY Quasi Rent 0 0 2 0 0 632 ERDSA
DFHSNP Quasi Rent 0 0 2 0 0 13,264 ERDSA
DFHSZRMP Quasi Rent 0 0 2 0 0 213,232 ERDSA
DFHTCRP Quasi Rent 0 0 2 0 0 25,776 ERDSA
DFHTDRP Quasi Rent 0 0 2 0 0 6,432 ERDSA
DFHTOR Quasi Rent 0 0 2 0 0 57,920 ERDSA
DFHTORP Quasi Rent 0 0 2 0 0 560 ERDSA
DFHWBGB Quasi Rent 0 0 2 0 0 752 ERDSA
DFHWBIP Quasi Rent 0 0 2 0 0 2,896 ERDSA
DFHWBST Quasi Rent 0 0 2 0 0 11,144 ERDSA
DFHWBTC Quasi Rent 0 0 2 0 0 72,336 ERDSA
DFHZATA Quasi Rent 0 0 2 0 0 30,360 ERDSA
DFHZATDY Quasi Rent 0 0 2 0 0 592 ERDSA
DFHZATR Quasi Rent 0 0 2 0 0 2,656 ERDSA
DFHZATS Quasi Rent 0 0 2 0 0 13,072 ERDSA
DFHZCGRP Quasi Rent 0 0 2 0 0 1,064 ERDSA
DFHZCQ Quasi Rent 0 0 2 0 0 250,160 ERDSA
DFHZCSTP Quasi Rent 0 0 2 0 0 656 ERDSA
DFHZNAC Quasi Rent 0 0 2 0 0 42,240 ERDSA
DFHZXRE Quasi Rent 0 0 2 0 0 3,608 ERDSA
DFHZXST Quasi Rent 0 0 2 0 0 9,304 ERDSA
DFH0STAT Quasi Rent 1 0 3 0 0 277,208 ERDSA
DFH0VZXS Quasi Rent 0 0 2 0 0 8,352 ERDSA
IGZCPAC Quasi Rent 0 0 9 0 0 112,792 ERDSA
IGZCPCC Quasi Rent 0 0 8 0 0 11,784 ERDSA
Totals 93 4 0 0
Temporary Storage
Put/Putq main storage requests . . . . . : 0
Get/Getq main storage requests . . . . . : 0
Peak storage used for TS Main. . . . . . : 0K
Current storage used for TS Main . . . . : 0K
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 52
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0
Tsqueue Totals
SHAR1 3 11 16 43
SHAR3 7 16 16 112
___________________________________________________________
10
AAAA 6 8 13 53
SHARB 3 16 16 48
SHARE 2 16 16 32
___________________________________________________________
11
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 55
Transient Data
Transient data reads . . . . . . . . . . : 0
Transient data writes. . . . . . . . . . : 0
Transient data formatting writes . . . . : 0
Control interval size. . . . . . . . . . : 1,536
Control intervals in the DFHINTRA dataset: 3,900
Peak control intervals used. . . . . . . : 1
Times NOSPACE on DFHINTRA occurred . . . : 0
Transient data strings . . . . . . . . . : 3
Times Transient data string in use . . . : 0
Peak Transient data strings in use . . . : 0
Times string wait occurred . . . . . . . : 0
Peak users waiting on string . . . . . . : 0
Transient data buffers . . . . . . . . . : 5
Times Transient data buffer in use . . . : 0
Peak Transient data buffers in use . . . : 0
Peak buffers containing valid data . . . : 0
Times buffer wait occurred . . . . . . . : 0
Peak users waiting on buffer . . . . . . : 0
I/O errors on the DFHINTRA dataset . . . : 0
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 56
Tdqueue Totals
Intrapartition . . . : 4 0 0 0
Extrapartition . . . : 23 10 0
Indirect . . . . . . : 28 10 0 0
Remote . . . . . . . : 0 0 0 0
Total. . . . . . . . : 55
Journalnames Report
| Figure 56 on page 574 shows the format of the Journalnames Report. This report is
| produced using a combination of the EXEC CICS INQUIRE JOURNALNAME and
| EXEC CICS COLLECT STATISTICS JOURNALNAME commands. The statistics
| data is mapped by the DFHLGRDS DSECT. The field headings and contents are
| described in Table 172 on page 574.
Journalnames
Logstreams Report
| Figure 57 on page 575 shows the format of the Logstreams Report. This report is
| produced using a combination of the EXEC CICS INQUIRE STREAMNAME and
| EXEC CICS COLLECT STATISTICS STREAMNAME commands. The statistics data
| is mapped by the DFHLGSDS DSECT. The field headings and contents are
| described in Table 173 on page 575.
Logstreams - Resource
Use Sys Max Block DASD Retention Auto Stream Browse Browse
Logstream Name Count Status Log Structure Name Length Only Period Delete Deletes Starts Reads
CBAKER.IYK2Z1V2.DFHJ04 1 OK NO 32,000 YES 21 YES N/A N/A N/A
CBAKER.IYK2Z1V2.DFHJ05 2 OK NO 48,000 YES 14 YES N/A N/A N/A
CBAKER.IYK2Z1V2.DFHJ06 2 OK NO LOG_GENERAL_001 64,000 NO 0 NO N/A N/A N/A
CBAKER.IYK2Z1V2.DFHJ08 1 OK NO LOG_GENERAL_001 64,000 NO 0 NO N/A N/A N/A
CBAKER.IYK2Z1V2.DFHLOG 1 OK YES LOG_GENERAL_005 64,000 NO 0 NO 0 46 0
CBAKER.IYK2Z1V2.DFHSHUNT 1 OK YES LOG_GENERAL_006 64,000 NO 0 NO 0 0 0
| Figure 58 shows the format of the Logstreams Report. This report is produced
| using a combination of the EXEC CICS INQUIRE STREAMNAME and EXEC CICS
| COLLECT STATISTICS STREAMNAME commands. The statistics data is mapped
| by the DFHLGSDS DSECT. The field headings and contents are described in
| Table 174.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 11/23/1998 Time 10:47:07 CICS 5.3.0 PAGE 4
Logstreams - Requests
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 62
Program Autoinstall
Program Autoinstall Status. . . . : INACTIVE
Autoinstall Program . . . . . . . : DFHPGADX
Catalog Program Definitions . . . : MODIFY
Autoinstalls attempted. . . . . . : 0
Autoinstalls rejected . . . . . . : 0
Autoinstalls failed . . . . . . . : 0
Terminal Autoinstall
VTAM
VTAM Open Status. . . . . . . . . : OPEN
Dynamic open count. . . . . . . . : 0
VTAM Short-on-Storage . . . . . . : 0
MAX RPLs . . . . . . . : 1
Times at MAX RPLs . . . . . . . . : 17
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 63
Connections
Connection Name/Netname . . . . . : CJB2/IYK2Z1V2 Access Method/Protocol. . . . . . . . . : XM
Autoinstalled Connection Create Time. . : 00:00:00.00000
Peak Contention Losers. . . . . . : 1
ATIs satisfied by Losers. . . . . : 0 Receive Session Count. . . . . . . . : 5
Peak Contention Winners . . . . . : 1 Send Session Count . . . . . . . . . : 12
ATIs satisfied by Winners . . . . : 1
Current AIDs in chain . . . . . . : 0 Generic AIDs in chain. . . . . . . . : 0
Total number of Bids sent . . . . : 0
Current Bids in progress. . . . . : 0 Peak Bids in progress. . . . . . . . : 0
Modenames
Modename Connection Name. . . . . : CJB3
Modename. . . . . . . . . . . . . : SNASVCMG
Active Sessions . . . . . . . . . : 0
Available Sessions. . . . . . . . : 0
Maximum Sessions. . . . . . . . . : 2
Maximum Contention Winners. . . . : 1
Modename Connection Name. . . . . : CJB3
Modename. . . . . . . . . . . . . :
Active Sessions . . . . . . . . . : 0
Available Sessions. . . . . . . . : 0
Maximum Sessions. . . . . . . . . : 5
Maximum Contention Winners. . . . : 3
|
| TCP/IP Services Report
| Figure 62 and Figure 63 on page 586 show the formats of the TCP/IP Services
| reports. These reports are produced using a combination of EXEC CICS INQUIRE
| TCPIPSERVICE and EXEC CICS COLLECT STATISTICS TCPIPSERVICE
| commands, the statistics data is mapped by the DFHSORDS DSECT. The field
| headings and contents are described in Table 177 and Table 178 on page 586.
|
TCP/IP Services
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 02/18/1999 Time 08:32:40 CICS 5.3.0 PAGE 2
____________________________________________________________________________________________________________________________________
TCP/IP Services
_______________
TCP/IP Port <- Connections -> Send Avg Bytes Receive Avg Bytes
Service Number IP Address Current Peak Attached Requests / Send Requests / Receive
________________________________________________________________________________________________________________
CHRIS1 5060 9.20.2.52 0 2 15 46 1,839 25 324
CHRIS2 5061 9.20.2.52 0 1 1 4 51 1 251
CHRIS3 5063 9.20.2.52 0 1 1 6 842 3 251
FREDCLIA 5067 0 0 0 0 0 0 0
SAMPLE 5069 0 0 0 0 0 0 0
________________________________________________________________________________________________________________
Totals 17 56 29
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 70
LSR Pools
Pool Number : 2 Time Created : 08:35:07.02513
Buffer Totals
Data Buffers . . . . . . . . . . : 44 Index Buffers. . . . . . . . . . : 44
Hiperspace Data Buffers. . . . . : 0 Hiperspace Index Buffers . . . . : 0
Successful look asides . . . . : 652 Successful look asides . . . . : 1,360
Buffer reads . . . . . . . . . : 24 Buffer reads . . . . . . . . . : 4
User initiated writes. . . . . : 655 User initiated writes. . . . . : 31
Non-user initiated writes. . . : 0 Non-user initiated writes. . . : 0
Successful Hiperspace CREADS . : 0 Successful Hiperspace CREADS . : 0
Successful Hiperspace CWRITES. : 0 Successful Hiperspace CWRITES. : 0
Failing Hiperspace CREADS. . . : 0 Failing Hiperspace CREADS. . . : 0
Failing Hiperspace CWRITES . . : 0 Failing Hiperspace CWRITES . . : 0
These statistics are obtained from VSAM and represent the activity after the
pool was created.
Files Report
| Figure 65 shows the format of the Files Report. This report is produced using a
| combination of the EXEC CICS INQUIRE FILE and EXEC CICS COLLECT
| STATISTICS FILE commands. The statistics data is mapped by the DFHA17DS
| DSECT. The field headings and contents are described in Table 180.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 72
Files
Access File Remote Remote LSR Data CFDT Table Update
Filename Method Type Filename System Pool RLS Table Type Poolname Name Model
_________________________________________________________________________________________________________
ADMF VSAM KSDS 1 No
CSQKCDF VSAM KSDS 1 No
CSQ4FIL VSAM KSDS 1 No
DFHCMACD VSAM KSDS 1 No
DFHCSD VSAM KSDS No No
DFHDBFK VSAM KSDS No No
DFHLRQ VSAM KSDS No No
DFHRPCD VSAM KSDS 1 No
FILEA VSAM KSDS 1 No
RFSDIR1 VSAM KSDS 2 No
RFSDIR2 VSAM KSDS 2 No
RFSPOOL1 VSAM KSDS 2 No
RFSPOOL2 VSAM KSDS 2 No
|
File Requests Report
| Figure 66 shows the format of the File Requests report. This report is produced
| using a combination of the EXEC CICS INQUIRE FILE and EXEC CICS COLLECT
| STATISTICS FILE commands. The field headings and contents are described in
| Table 181 on page 594.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 12/02/1998 Time 08:52:33 CICS 5.3.0 PAGE 3
Files - Requests
Read Get Update Browse Browse Add Update Delete Remote RLS Req.
Filename Requests Requests Requests Updates Requests Requests Requests Deletes Timeouts
ADMF 0 0 0 0 0 0 0 0 0
BANKACCT 0 0 0 0 0 0 0 0 0
CSQKCDF 0 0 0 0 0 0 0 0 0
CSQ4FIL 0 0 0 0 0 0 0 0 0
DFHCMACD 0 0 0 0 0 0 0 0 0
DFHCSD 0 0 0 0 0 0 0 0 0
DFHDBFK 0 0 0 0 0 0 0 0 0
DFHLRQ 0 0 0 0 0 0 0 0 0
DFHRPCD 0 0 0 0 0 0 0 0 0
FILEA 0 0 0 0 0 0 0 0 0
RFSDIR1 0 0 0 0 0 0 0 0 0
RFSDIR2 0 0 0 0 0 0 0 0 0
RFSPOOL1 0 0 0 0 0 0 0 0 0
RFSPOOL2 0 0 0 0 0 0 0 0 0
Figure 67 shows the format of the Data Tables Requests Report. This report is produced using a combination of the
EXEC CICS INQUIRE FILE and EXEC CICS COLLECT STATISTICS FILE commands. The statistics data is mapped
by the DFHA17DS DSECT. The field headings and contents are described in Table 182.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 01/28/1999 Time 11:07:52 CICS 5.3.0 PAGE 8
____________________________________________________________________________________________________________________________________
Data Tables - Requests
______________________
Successful Records Adds via Adds via Adds Adds Rewrite Delete Read Chng Resp/
Filename Reads Not Found Read API Rejected Full Requests Requests Retries Lock Waits
__________________________________________________________________________________________________________________________________
F140BASE 0 0 1 0 0 0 0 0 0 0
F150BASE 0 0 0 0 0 0 0 0 0 0
F170BASE 0 0 0 0 0 0 0 0 0 0
F270BASE 0 0 0 0 0 0 0 0 0 0
Figure 68 shows the format of the Data Tables Storage Report. The field headings
and contents are described in Table 183.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 02/26/1999 Time 08:18:14 CICS 5.3.0 PAGE 2
Coupling Facility Data Table Pools
Coupling Facility Data Table Pool . CFPOOL1 Connection Status.. . . : UNAVAILABLE
Table 184. Fields in the Coupling Facility Data Table Pools Report
Field Heading Description
Coupling Facility Data Table Pools
Coupling Facility Data Table Pool The name of the coupling facility data table pool.
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 P
Exit Programs
<---- Global Area ----> No. Task <-- Task Related User
Program Entry Entry Use of Program Concurrency Area Task Shut
Name Name Name Length Count Exits Status API Status Qualifier Length start EDF down Ind
_________________________________________________________________________________________________________________________
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0
| Figure 72 on page 601 shows the format of the DB2 Connection Report. The field
| headings and contents are described in Table 187 on page 601.
DB2 Connection
DB2 Connection Name. . . . . . . . . . . : RCTJT
DB2 Sysid. . . . . . . . . . . . . . . . : DB3A
DB2 Release. . . . . . . . . . . . . . . : 3.1.0
DB2 Connection Status. . . . . . . . . . : CONNECTED DB2 Connect Date and Time . . . : 02/06/1997 09:48:47.92429
DB2 Connection Error . . . . . . . . . . : SQLCODE
DB2 Standby Mode . . . . . . . . . . . . : CONNECT
DB2 Pool Thread Plan Name. . . . . . . . :
DB2 Pool Thread Dynamic Plan Exit Name . : DSNCUEXT
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 PAGE 80
DB2 Entries
DB2Entry Name. . . . . . . . . . . . . . : XC06 DB2Entry Status . . . . . . . . . . . . : ENABLED
DB2Entry Static Plan Name. . . . . . . . : TESTC06 DB2Entry Disabled Action. . . . . . . . : POOL
DB2Entry Dynamic Plan Exit Name. . . . . : DB2Entry Deadlock Resolution. . . . . . : ROLLBACK
DB2Entry Authtype. . . . . . . . . . . . : USERID DB2Entry Accounting records by. . . . . : UOW
DB2Entry Authid. . . . . . . . . . . . . :
Number of Calls using DB2Entry. . . . . . . : 456
DB2Entry Thread Wait Setting . . . . . . : POOL Number of DB2Entry Signons. . . . . . . . . : 57
Number of DB2Entry Commits. . . . . . . . . : 0
DB2Entry Thread Priority . . . . . . . . : HIGH Number of DB2Entry Aborts . . . . . . . . . : 0
DB2Entry Thread Limit. . . . . . . . . . : 1 Number of DB2Entry Single Phase . . . . . . : 114
Current number of DB2Entry Threads . . . : 0 Number of DB2Entry Thread Reuses. . . . . . : 0
Peak number of DB2Entry Threads. . . . . : 1 Number of DB2Entry Thread Terminates. . . . : 15
Number of DB2Entry Thread Waits/Overflows . : 95
DB2Entry Protected Thread Limit. . . . . . . . . : 0
Current number of DB2Entry Protected Threads . . : 0
Peak number of DB2Entry Protected Threads. . . . : 0
------------------------------------------------------------------------------------------------------------------------------------
ENQs issued. . . . . . . . . . . . . . . : 0
ENQs waited. . . . . . . . . . . . . . . : 0
ENQueue waiting time . . . . . . . . . . : 00:00:00.00000
Average Enqueue wait time. . . . . . . . : 00:00:00.00000
Applid IYK2Z1V3 Sysid CJB3 Jobname CI07CJB3 Date 08/20/1998 Time 15:33:57 CICS 5.3.0 P
Recovery Manager
Page Index
Page
Connections . . . . . . . . . . . : 63
CF Data Table Pools . . . . . . . : 76
Data Tables . . . . . . . . . . . : 74
DB2 Connection. . . . . . . . . . : 79
DB2 Entries . . . . . . . . . . . : 80
DFHRPL Analysis . . . . . . . . . : 45
Dispatcher. . . . . . . . . . . . : 3
Dispatcher TCBs . . . . . . . . . : 4
Enqueue Manager . . . . . . . . . : 81
Files . . . . . . . . . . . . . . : 73
Global User Exits . . . . . . . . : 78
Journalnames. . . . . . . . . . . : 59
Loader. . . . . . . . . . . . . . : 7
Logstreams. . . . . . . . . . . . : 60
LSR Pools . . . . . . . . . . . . : 70
Program Autoinstall . . . . . . . : 62
Programs. . . . . . . . . . . . . : 16
Programs by DSA and LPA . . . . . : 46
Program Storage . . . . . . . . . : 7
Program Totals. . . . . . . . . . : 44
Recovery Manager. . . . . . . . . : 99
Storage Manager BELOW 16MB. . . . : 5
Storage Manager ABOVE 16MB. . . . : 6
Storage Subpools. . . . . . . . . : 8
Subspace Statistics . . . . . . . : 15
System Status . . . . . . . . . . : 1
TCP/IP Services . . . . . . . . . : 69
Temporary Storage . . . . . . . . : 51
Temporary Storage Queues. . . . . : 52
Temporary Storage Queue Totals. . : 53
Temporary Storage Queues by Pool. : 54
Terminal Autoinstall. . . . . . . : 62
Transactions. . . . . . . . . . . : 10
Transaction Totals. . . . . . . . : 15
Transaction Manager . . . . . . . : 2
Transaction Classes . . . . . . . : 9
Transient Data. . . . . . . . . . : 55
Transient Data Queues . . . . . . : 56
Transient Data Queue Totals . . . : 58
User Exit Programs. . . . . . . . : 77
VTAM. . . . . . . . . . . . . . . : 62
'N/S' indicates that the statistics report was not selected for printing.
Most of the CICS storage areas are now above the 16MB line, and it is necessary
to have some detailed knowledge of the components that make up the total address
space in order to determine what is really required.
MVS storage
There are four major elements of virtual storage within MVS. Each storage area is
duplicated above 16MB.
v The common area below 16MB
v The private area below 16MB
v The extended common area above 16MB
v The extended private area above 16MB.
All the elements of the common area described above are duplicated above the
16MB line, with the exception of the PSA.
It has been established that a 2MB LPA is sufficient for MVS when using CICS
with MRO or ISC, that is, the size of an unmodified LPA as shipped by IBM. If it is
larger, there are load modules in the LPA that may be of no benefit to CICS. There
may be SORT, COBOL, ISPF, and other modules that are benefiting batch and TSO
users. You have to evaluate if the benefits you are getting are worth the virtual
storage that they use. If modules are removed, be sure to determine if the regions
they run in need to be increased in size to accommodate them.
The pageable link pack area (PLPA) contains supervisor call routines (SVCs), access
methods, and other read-only system programs along with read-only re-enterable
user programs selected by an installation to be shared among users of the system.
Optional functions or devices selected by an installation during system generation
add additional modules to the PLPA.
The modified link pack area (MLPA) contains modules that are an extension to the
PLPA. The MLPA may be changed at IPL without requiring the create link pack
area (CLPA) option at IPL to change modules in the PLPA.
CICS uses the ECSA only if IMS/ESA or MRO is used. Even in this case, this use is
only for control blocks and not for data transfer. If cross-memory facilities are
being used, the ECSA usage is limited to 20 bytes per session and 1KB per address
space participating in MRO. The amount of storage used by CICS MRO is given in
the DFHIR3794 message issued to the CSMT destination at termination.
For static systems, the amount of unallocated CSA should be around 10% of the
total allocated CSA; for dynamic systems, a value of 20% is desirable. Unlike the
SQA, if the CSA is depleted there is nowhere for it to expand into, and a re-IPL
may well be required.
Except for the 16KB system region area, each storage area in the private area has a
counterpart in the extended private area.
The portion of the user’s private area within each virtual address space that is
available to the user’s application program is referred to as its region. The private
area user region may be any size up to the size of the entire private area (from the
top end of the PSA to the beginning, or bottom end, of the CSA) minus the size of
LSQA, SWA, subpools 229 and 230, and the system region: for example, 220KB. (It
is recommended that the region be 420KB less to allow for RTM processing.)
The segment size is one megabyte; therefore, the CSA is rounded up to the
nearest megabyte. The private area is allocated in increments of one megabyte.
For more information, see “The CICS private area”.
This section describes the major components of the CICS address space. In CICS
Transaction Server for OS/390 Release 3 there are eight dynamic storage areas.
They are:
The user DSA (UDSA)
The user-key storage area for all user-key task-lifetime storage below the
16MB boundary.
The read-only DSA (RDSA)
The key-0 storage area for all reentrant programs and tables below the
16MB boundary.
The shared DSA (SDSA)
The user-key storage area for any non-reentrant user-key RMODE(24)
programs, and also for any storage obtained by programs issuing CICS
GETMAIN commands for storage below the 16MB boundary with the
SHARED option.
The CICS DSA (CDSA)
The CICS-key storage area for all non-reentrant CICS-key RMODE(24)
programs, all CICS-key task-lifetime storage below the 16MB boundary,
and for CICS control blocks that reside below the 16MB boundary.
The remaining four dynamic storage areas, the ECDSA, EUDSA, ESDSA, and
ERDSA, are the equivalents of these areas for storage above the 16MB boundary.
Figure 78 shows an outline of the areas involved in the private area. The three
main areas are HPA, MVS storage, and the CICS region. The exact location of the
free and allocated storage may vary depending on the activity and the sequence of
the GETMAIN/FREEMAIN requests.
Additional MVS storage may be required by CICS for kernel stack segments for
CICS system tasks—this is the CICS kernel.
Note: The CICS extended private area is conceptually the same as the CICS
private area except that there is no system region. All the other areas have
equivalent areas above the 16MB line.
┌─────────────────────┐ ─┐
│ LSQA │ │
│ SP 253, 254, 255 │ │
├─────────────────────│ │ High
│ SWA │ ├─ Private
│ SP 236, 237 │ │ Area
├─────────────────────│ │
│ SP 229, 230 │ │
├─────────────────────│ ─┘
│ MVS storage above │
│ region │
├─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┤ IEALIMIT ─┐
│ │ │
├─────────────────────│ ─┐ │
│ MVS storage │ │ │
├─────────────────────│ │ │
│ IMS and │ │ │
│ DBRC modules │ │ │
├─────────────────────│ │ ├─ Expanded
│ CICS CDSA and UDSA │ ─┤ Requested │ region
├─────────────────────│ │ region │
│ CICS system tasks │ │ │
│ │ │ │
├─────────────────────│ ─┘ │
│ System region │ │
└─────────────────────┘ ─┘
The area at the high end of the address space is not specifically used by CICS, but
contains information and control blocks that are needed by the operating system to
support the region and its requirements.
The usual size of the high private area varies with the number of job control
statements, messages to the system log, and number of opened data sets.
The total space used in this area is reported in the IEF374I message, in the field
labeled “SYS=nnnnK”, at jobstep termination. A second “SYS=nnnnK” value is
also issued; it refers to the high private area above 16MB. This information is also
reported by the sample statistics program, DFH0STAT.
Very little can be done to reduce the size of this area, with the possible exception
of subpool 229. This is where VTAM stores inbound messages when CICS does not
have an open receive issued to VTAM. The best way to determine if this is
happening is to use CICS statistics (see “VTAM statistics” on page 500) obtained
following CICS shutdown. Compare the maximum number of RPLs posted, which
is found in the shutdown statistics, with the RAPOOL value in the SIT. If they are
equal, there is a very good chance that subpool 229 is being used to stage
messages, and the RAPOOL value should be increased.
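As an illustrative sketch of that check (the variable names are ours, not CICS
output fields, and the values are hypothetical):

# Sketch only: compare two figures taken by hand from CICS statistics.
max_rpls_posted = 8   # hypothetical: maximum RPLs posted, from shutdown statistics
rapool = 8            # hypothetical: RAPOOL value from the SIT

if max_rpls_posted == rapool:
    # Every receive-any RPL was in use at some point, so VTAM may be
    # staging inbound messages in subpool 229; consider raising RAPOOL.
    print("Consider increasing RAPOOL")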
The way in which the storage within the high private area is used can cause an
S80A abend in some situations. There are at least two considerations:
1. The use of MVS subpools 229 and 230 by access methods such as VTAM:
VTAM and VSAM may find insufficient storage for a request for subpools 229
and 230. Their requests are conditional and so should not cause an S80A abend
of the job step (for example, CICS).
2. The MVS operating system itself, relative to use of LSQA and SWA storage
during job-step initiation: The MVS initiator’s use of LSQA and SWA storage
may vary, depending on whether CICS was started using an MVS START
command or started as a job step in an already existing initiator and
address space. Starting CICS with an MVS START command is better for
minimizing fragmentation within the space above the region boundary. If CICS
is a job step initiated in a previously started initiator’s address space, the
manner in which LSQA and SWA storage is allocated may reduce the
apparently available virtual storage because of increased fragmentation.
Storage above the region boundary must be available for use by the MVS initiator
(LSQA and SWA) and the access method (subpools 229 and 230).
Your choice of sizes for the MVS nucleus, MVS common system area, and CICS
region influences the amount of storage available for LSQA, SWA, and subpools
229 and 230.
The total size of LSQA is difficult to calculate because it depends on the number of
loaded programs, tasks, and the number and size of the other subpools in the
address space. As a guideline, the LSQA area usually runs between 40KB and
170KB depending on the complexity of the rest of the CICS address space.
The storage control blocks define the storage subpools within the private area,
describing the free and allocated areas within those subpools. They may consist of
such things as subpool queue elements (SPQEs), descriptor queue elements
(DQEs), and free queue elements (FQEs).
The contents management control blocks define the tasks and programs within the
address space such as task control blocks (TCBs), the various forms of request
blocks (RBs), contents directory elements (CDEs), and many more.
CICS DBCTL requires LSQA storage for DBCTL threads. Allow 9KB for every
DBCTL thread, up to the MAXTHRED value.
Generally, this area can be considered to increase with an increase in the number of
DD statements. The distribution of storage in subpools 236 and 237 varies with the
operating system release and whether dynamic allocation is used. The total amount
of storage in these subpools is around 100KB to 150KB to start with, and it
increases by about 1KB to 1.5KB per allocated data set.
A subset of SWA control blocks can, optionally, reside above 16MB. JES2 and JES3
have parameters that control this. If this needs to be done on an individual job
basis, the SMF exit, IEFUJV, can be used.
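As a rough sketch of the growth guideline above (the base and per-data-set
figures are midpoints of the quoted ranges, not measured values):

# Rough estimate of SWA (subpools 236 and 237) usage, from the text above.
def swa_estimate_kb(allocated_data_sets, base_kb=125.0, per_data_set_kb=1.25):
    # base_kb: midpoint of the quoted 100KB-150KB starting size
    # per_data_set_kb: midpoint of the quoted 1KB-1.5KB growth per data set
    return base_kb + per_data_set_kb * allocated_data_sets

print(swa_estimate_kb(200))  # about 375KB for 200 allocated data sets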
Subpool 229
This subpool is used primarily for the staging of messages. JES uses this area for
messages to be printed on the system log and JCL messages as well as
SYSIN/SYSOUT buffers. Generally, a value of 40 to 100 KB is acceptable,
depending on the number of SYSIN and SYSOUT data sets and the number of
messages in the system log.
Subpool 230
This subpool is used by VTAM for inbound message assembly for segmented
messages. Data management keeps data extent blocks (DEBs) here for any opened
data set.
CICS DBCTL requires subpool 230 storage for DBCTL threads. Allow 3KB for
every DBCTL thread, up to the MAXTHRED value.
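Taken together with the LSQA guideline above, the high-private-area cost of
DBCTL threads can be sketched as follows (the thread count is hypothetical):

# 9KB of LSQA plus 3KB of subpool 230 for every DBCTL thread, up to MAXTHRED.
def dbctl_thread_storage_kb(threads, lsqa_kb=9, sp230_kb=3):
    return threads * (lsqa_kb + sp230_kb)

print(dbctl_thread_storage_kb(20))  # 20 threads -> 240KB of high private area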
If this free storage is not enough for recovery termination management (RTM)
processing, the address space may be terminated with an S40D abend that produces
no dump.
This area can be very dynamic. As the high private area grows, it extends down
into this area, and the CICS region may extend upward into this area up to the
value specified in IEALIMIT.
High address
┌────────────────┐ ─┐
│ MVS storage │ │
│ │ │
├────────────────┤ │
│ CICS UDSA │ │
│ CICS CDSA │ │
├────────────────┤ │
│ │ │
│ │ │
│ │ │
└────────────────┘ ─┘
Low address
This is the amount of storage that remains after the dynamic storage areas and
other CICS storage requirements have been met. The size of this area depends on
MVS GETMAIN requirements during the execution of CICS. Opening files is the
major contributor to usage of this area.
MVS storage is used to contain the control blocks and data areas needed to open
data sets or perform other operating system functions, the program modules for
access method routines not already resident in the LPA, and the shared routines
for COBOL and PL/I programs.
The VSAM buffers and most of the VSAM file control blocks reside above the
16MB line.
The VSAM buffers may be for CICS data sets defined as using local shared
resources (LSR) (that is, the default) or nonshared resources (NSR).
The VSAM LSR pool is built dynamically above the 16MB line when the first file
specified as using it is opened, and deleted when the last file using it is closed.
Every opened data set requires some amount of storage in this area for such items
as input/output blocks (IOBs) and channel programs.
Files that are defined as data tables use storage above 16MB for records that are
included in the table, and for the structures which allow them to be accessed.
QSAM files require some storage in this area. Transient data uses a separate buffer
pool above the 16MB line for each type of transient data queue. Storage is obtained
from the buffer pool for queue entries as they are added to the destination control
table. Transient data also uses a buffer pool below the 16MB line where sections of
extrapartition DCTEs are copied for use by QSAM, when an extrapartition queue is
being opened or closed.
CICS DBCTL uses DBCTL threads, which are specified in the CICS address space
but have their storage requirements in the high private area of that address
space.
If DB2 is used by the system, MVS storage is allocated for each DB2 thread.
| If you run JVM programs in CICS, that is, run Java programs under control of a
| Java virtual machine (JVM), CICS uses the MVS JVM which requires significant
| amounts of MVS storage above and below the 16MB line. For each JVM program
| running in CICS, there is an MVS JVM running in the CICS address space.
The physical placement of the MVS storage may be anywhere within the region,
and may sometimes be above the CICS region. The region may expand into this
MVS storage area, above the region, up to the IEALIMIT value set by the
installation or the MVS default.
When both the MVS storage areas are exhausted, the GETMAIN request fails,
causing abends or a bad return code if it is a conditional request.
The amount of MVS storage must be enough to satisfy the requests for storage
during the entire execution of the CICS region. Use caution here: you never want
to run out of MVS storage, but you do not want to overallocate it either.
The size of MVS storage is the storage which remains in the region after allowing
for the storage required for the dynamic storage areas, the kernel storage areas,
and the IMS/VS and DBRC module storage. The specification of OSCOR storage in
CICS/MVS® Version 2 and earlier has been replaced with the specification of the
DSA sizes in CICS/ESA Version 3. It is important to specify the correct DSA sizes
so that the required amount of MVS storage is available in the region.
Because of the dynamic nature of a CICS system, the demands on MVS storage
vary through the day as the number of tasks increases or data sets are
opened and closed. Also, because of this dynamic use of MVS storage,
fragmentation occurs, and additional storage must be allocated to compensate for
this.
Too small a dynamic storage area results in increased program compression or,
more seriously, SOS (short on storage) conditions, or even storage deadlock abends
when program compression is not sufficient.
DSAs consist of one or more extents. An extent below the line is 256KB and,
above the line, 1MB (except for the UDSA with transaction isolation (TRANISO)
active, when the extent is 1MB).
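The number of extents behind a DSA of a given size follows from these extent
sizes; a minimal sketch (it ignores any further rounding CICS itself may apply):

import math

# Extent sizes from the text: 256KB below the 16MB line, 1MB above it,
# and 1MB for the UDSA when transaction isolation (TRANISO) is active.
def extents_needed(dsa_size_kb, below_the_line=True, traniso=False):
    extent_kb = 1024 if (not below_the_line or traniso) else 256
    return math.ceil(dsa_size_kb / extent_kb)

print(extents_needed(512))                         # 512KB below the line -> 2 extents
print(extents_needed(4352, below_the_line=False))  # 4352KB above the line -> 5 extents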
CICS GETMAIN requests for dynamic storage are satisfied from one of the
following during normal execution: the CDSA, RDSA, SDSA, ESDSA, UDSA,
EUDSA, ECDSA, or ERDSA. The sizes of the dynamic storage areas are defined
at CICS initialization, but the use of storage within them is very dynamic.
The dynamic storage areas consist of a whole number of virtual storage pages
allocated from a number of MVS storage subpools. CICS uses about 180 storage
subpools, which are located in the dynamic storage areas. For a list of the subpools
see the tables on pages 627 through 636. Each dynamic storage area has its own
“storage cushion”. These subpools (including the cushion) are dynamically
acquired, as needed, a page at a time, from within the dynamic storage area.
The dynamic storage areas are essential for CICS operation. Their requirements
depend on many variables, because of the number of subpools. The major
contributors to the requirements are program working storage and the type and
number of resource definitions in use, such as programs and files.
If you have exhausted the tuning possibilities of MVS/ESA and other tuning
possibilities outside CICS, and the dynamic storage areas limits have been set as
large as possible within CICS, and you are still encountering virtual storage
constraint below 16MB, you may have to revise the use of options such as MXT,
and reconsider making programs resident, to keep down the overall storage
requirement. This
may limit task throughput. If you foresee this problem on an MVS system, you
should consider ways of dividing your CICS system, possibly by use of facilities
such as CICS multiregion operation (MRO), described in the CICS
Intercommunication Guide. You can also reduce storage constraint below 16MB by
using programs which run above 16MB.
If the dynamic storage areas limits are too small, CICS performance is degraded.
The system may periodically enter a short-on-storage condition, during which it
curtails system activity until it can recover enough storage to resume normal
operations.
However, resist the temptation to make the dynamic storage area limit as large as
possible (which you could do by specifying the maximum allowable region size). If
you do this, it can remove any warning of a shortage of virtual storage until the
problem becomes intractable.
The dynamic storage areas limits should be as large as possible after due
consideration of other areas, especially the MVS storage area above 16MB.
CICS subpools
This section describes briefly the main features of the subpools. They are found in
each of the dynamic storage areas. Most of the subpools are placed above the
16MB line. Those subpools that are found below the 16MB line, in the CDSA,
SDSA, RDSA, and UDSA, need to be more carefully monitored due to the limited
space available. Individual subpools may be static or dynamic. Some contain static
CICS storage which cannot be tuned. All the subpools are rounded up to a
multiple of 4KB in storage size and this rounding factor should be included in any
subpool sizing or evaluation of storage size changes due to tuning or other
changes. CICS statistics contain useful information about the size and use of the
dynamic storage area subpools. The CICS subpools in the dynamic storage areas
may be grouped and described according to the major factor affecting their use.
Application design
The use of CICS facilities such as program LINK, SHARED storage
GETMAINs, the type of file requests, use of temporary storage, application
program attributes (resident or dynamic), and the number of concurrent
DBCTL or DB2 requests all affect storage requirements.
Number of file definitions
These subpools can be tuned only by reducing the number of file
definitions, or by using MRO.
The following tables list the subpools according to their dynamic storage areas and
their use.
| When a DSA, such as the CDSA, requires additional storage in order to satisfy a
| GETMAIN request, the CICS storage manager allocates another extent to that DSA.
| If all extents are currently allocated, an attempt is made to locate a free extent in
| another DSA, which may then be relocated to the DSA in need. However, in order
| to remove an extent from one DSA so that it may be allocated to another, all pages
| in the extent must be free (that is, not allocated to any subpool).
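Conceptually, the relocation test amounts to checking the extent’s page allocation
map; a sketch (the PAM encoding here is simplified to zero/nonzero):

# An extent can move from one DSA to another only if no page in it is
# allocated to any subpool. Model the PAM as one entry per 4KB page,
# with 0 for a free page and a nonzero subpool ID for an allocated one.
def extent_is_relocatable(pam):
    return all(page == 0 for page in pam)

print(extent_is_relocatable([0] * 64))           # empty 256KB extent -> True
print(extent_is_relocatable([0] * 63 + [0x7A]))  # one page still owned -> False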
| Use the IPCS command VERBX CICS530 ’SM=3’ to format the SM control blocks.
| Examine the DSA summaries, noting which DSA(s) are short-on-storage and the
| amount of free space in the other DSAs (above or below the 16M line as
| appropriate). The amount of freespace is given for each extent for each DSA.
| Frequently either the UDSA or the CDSA is short-on-storage but there is a large
| amount of free storage in the SDSA. The following dump extracts are from a
| problem of this type where the UDSA is short-on-storage.
| Each extent has an associated page pool extent (PPX) and page allocation map
| (PAM). Examination of the SDSA extents shows several extents with large
| amounts of free space.
| The DSA extent summary shows that the PPX for the extent at 00700000 is found
| at 09F0A100, and the associated PAM is found at 09F0A150. Examination of the
| PAM shows only one page is allocated, and it belongs to the subpool with an ID of
| x’7A’.
| Start End Size PPX_addr Acc DSA
| 00700000 0073FFFF 256K 09F0A100 C SDSA
|
| PPX.SDSA 09F0A100 Pagepool Extent Control Area
|
| 0000 00506EC4 C6C8E2D4 D7D7E740 40404040 *.&>DFHSMPPX *
| 0010 E2C4E2C1 40404040 09A1BA68 071B3EA0 *SDSA ........*
| 0020 00040000 00700000 0073FFFF 071B5EE0 *................*
| 0030 00000000 09F0A150 00000040 0710A268 *.....0.&;.. ..s.*
| 0040 0003F000 00000000 00000000 00000000 *..0.............*
|
| PAM.SDSA 09F0A150 Page Allocation Map
|
| 0000 00000000 00000000 00000000 00000000 *................*
| 0010 - 002F LINES SAME AS ABOVE
| 0030 00000000 0000007A 00000000 00000000 *................*
| The domain subpool summary determines for the SDSA which subpool is
| associated with the ID of x’7A’. In this dump 7A is the ID for subpool ZCTCTUA.
| Do not rely on the IDs being the same for multiple runs of CICS because the IDs
| are assigned in the order in which the ADD_SUBPOOL requests are issued.
| ==SM: UDSA Summary (first part only)
|
| Size: 512K
| Cushion size: 64K
| Current free space: 56K (10%)
| * Lwm free space: 12K ( 2%)
| * Hwm free space: 276K (53%)
| Largest free area: 56K
| * Times nostg returned: 0
| * Times request suspended: 0
| Current suspended: 0
| * Hwm suspended: 0
| * Times cushion released: 1
| Currently SOS: YES
|
| ==SM: SDSA Summary (first part only)
|
| Size: 4352K
| Cushion size: 64K
| Current free space: 2396K (55%)
| * Lwm free space: 760K (17%)
| * Hwm free space: 2396K (55%)
| Largest free area: 252K
| * Times nostg returned: 0
| * Times request suspended: 0
| Current suspended: 0
| * Hwm suspended: 0
| * Times cushion released: 0
| Currently SOS: NO
| To ease the short-on-storage problems, you may have to define the initial size of a
| DSA using one or more of the SIT overrides CDSASZE, UDSASZE, SDSASZE, and
| RDSASZE (see the CICS System Definition Guide). These overrides should only be
| used if changes to storage management do not completely resolve the
| short-on-storage problems.
| Storage management requests the loader to reduce the RPS storage below 80%.
| This makes additional extents available to be allocated to the DSA in need.
| Review the CICS statistics for several days. This provides information which can
| be used to define the amount of storage used at a subpool and a DSA level. Extent
| usage is shown with the number of extents added and released.
| In addition to the DSA information provided in DFH0STAT, the results about each
| subpool are provided, including the DSA where it was allocated. If statistics are
| being gathered, end-of-day statistics will only provide data since the last statistics
| collection.
| Determine if DSALIM has been specified as large as possible, but allowing for
| OSCORE requirements of the various packages in use.
| Allocating into managed extents can lead to a block of storage in an extent which
| is insufficient to satisfy a getmain request. With the dynamic nature of the
| subpools and DSAs, this should be relieved as the subpool/extent storage is
| reused. Specifying the initial DSA size using the SIT override for the affected DSA
| has the effect of reserving contiguous extents up to the amount specified, and
| eliminating the blocks of storage.
| Additional DSAs (RDSA and SDSA) are available and many of the subpools from
| the UDSA are moved to the SDSA. The end-of-day statistics or information in a
| dump of the CICS region can be used to define relative sizes of the subpools and
| associated DSAs.
| Also, using the LPA reduces the amount of storage used in LDNUCRO by
| approximately 100K.
The kernel recognises two types of task: static tasks, and dynamic tasks. The kernel
storage for static tasks is pre-allocated and is used for tasks controlled by the MXT
mechanism. The storage for dynamic tasks is not pre-allocated and is used for
tasks such as system tasks which are not controlled by the MXT value. Because the
storage for dynamic tasks is not pre-allocated, the kernel may need to GETMAIN
the storage required to attach a dynamic task when the task is attached.
The number of static tasks is dependent upon the current MXT value (there are
MXT+1 static tasks). The storage for static tasks is always GETMAINed from the
CICS DSAs. If MXT is lowered the storage for an excess number of static tasks is
freed again.
During early CICS initialization the kernel allocates storage for 8 dynamic tasks.
This storage is GETMAINed from MVS and is always available for use by internal
CICS tasks. All other storage for dynamic tasks is then allocated, as needed, from
the CICS DSAs. Typically when a dynamic task ends, its associated storage is
freed.
The storage required by a single task is the same for both types of task and can be
divided into storage required above and below the 16MB line:
v Above the line the following storage is required per task:
– A 896-byte kernel task entry
– A 24K 31-bit stack.
v Below the line the following storage is required per task:
– A 2K 24-bit stack.
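As a rough sizing sketch using the figures above (the MXT value is purely
illustrative):

# Kernel storage for the MXT+1 static tasks: each task needs a 896-byte
# task entry and a 24KB 31-bit stack above the line, plus a 2KB 24-bit
# stack below the line.
def kernel_static_task_storage_kb(mxt):
    tasks = mxt + 1
    above_kb = tasks * (24 + 896 / 1024)
    below_kb = tasks * 2
    return above_kb, below_kb

above, below = kernel_static_task_storage_kb(60)
print(round(above), below)  # MXT=60 -> about 1517KB above, 122KB below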
When the kernel GETMAINs storage from the CICS DSAs, the following subpools
are used:
v In the CDSA:
KESTK24 2K stack segments
KESTK24E 4K extension stack segments
v In the ECDSA:
KESTK31 24K stack segments
KESTK31E 4K extension stack segments
KETASK 896 byte task entries
Before you work with these numbers, please note the following:
v The cost per call is documented in 1K instruction counts (or, where stated, in
milliseconds of CPU time), taken from a tracing tool used internally by IBM.
Each execution of an instruction has a count of 1. No weighting factor is added
for instructions that use more machine cycles than others.
v Because the measurement consists of tracing a single transaction within the CICS
region, any wait for I/O etc. results in a full MVS WAIT. This cost has been
included in the numbers reported in this document. On a busy system the
possibility of taking a full MVS WAIT is reduced because the dispatcher has a
higher chance of finding more work to do.
v When judging performance, the numbers in this book should not be compared
with those published previously, because a different methodology has been used.
Variable costs
The sections from “Transaction initialization and termination” on page 644 onwards
describe the relative costs of a subset of the CICS API calls. To those costs must be
added the variable costs described in this section.
Variable costs are encountered, for different machine configurations, when there is
synchronous access to a coupling facility. For example, RLS and shared temporary
storage use synchronous access to a coupling facility; so, for CF log streams, does
the MVS logger. The variance occurs because a synchronous access instruction
executes for as long as it takes to complete the access to the coupling facility and
return. The number of central processing unit (CPU) cycles consumed during the
request therefore depends on the speed of the coupling facility and the speed of
the link to it.
The following sections describe variable costs for logging and syncpointing.
Logging
Because logging costs contain some of the variable costs incurred by synchronous
accesses to the coupling facility, they are documented here in terms of milliseconds
of CPU time. The measurements have been taken on a 9672-R61 with a 9674-R61
coupling facility; they can be scaled to any target system, using the Internal
| Throughput Rate Ratios (ITRRs) published in the IBM Large System Performance Report. This can be
| accessed through the IBM System/390 web page (http://www.s390.ibm.com), more
| specifically, at http://www.s390.ibm.com/lspr/lspr.html.
When looking at the cost of accessing recoverable resources, the cost of writing the
log buffer to primary storage has been separated from the API cost. FORCE and
NOFORCE are the two types of write operations to the system log buffer.
v The FORCE operation requests that the log buffer is written out and is made
non-volatile. The transaction that made this request is suspended until the
process completes. The log is not written out immediately but is deferred using
an internal algorithm. The first forced write to the log sets the clock ticking for
the deferred log flush. Subsequent transactions requesting log forces will put
their data in the buffer and suspend until the original deferred time has expired.
This permits buffering of log requests, and it means that the cost of writing the
log buffer is shared between many transactions (see the sketch after this list).
v The NOFORCE operation puts the data into the log buffer, which is written to
primary storage when a FORCE operation is requested or the buffer becomes
full.
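A minimal sketch of that sharing effect (the cost and transaction count are
hypothetical):

# When several transactions request log forces within one deferred-flush
# interval, the single buffer write is shared between them.
def log_write_cost_per_txn(buffer_write_cost_ms, txns_sharing_flush):
    return buffer_write_cost_ms / txns_sharing_flush

# Hypothetical: a 1.0ms buffer write shared by 5 transactions -> 0.2ms each.
print(log_write_cost_per_txn(1.0, 5))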
The cost of writing a log buffer varies, depending on which of the following
applies:
v The write is synchronous to the coupling facility
v The write is asynchronous to the coupling facility
v A staging data set is being used
v DASD-only logging is being used.
Synchronous writes to the CF
Writes of less than 4K in size are generally synchronous. A synchronous write
uses a special instruction that accesses the coupling facility directly. The
instruction lasts for as long as it takes to access the coupling facility and
return. This access time, known as the “CF Service Time”, depends on both the
speed of the coupling facility and the speed of the link to it. CF Service Times
can be monitored using RMF III, as shown on page 273. For synchronous
writes, the CPU cost of the access changes as the CF Service Time changes; this
is not true of asynchronous writes.
Asynchronous writes to the CF
Asynchronous writes do not use the same instruction used by synchronous
writes. A CICS task that does an asynchronous log write gives up control to
another task, and the operation is completed by the logger address space.
Table 197 shows the costs of the various flavours of log writes. Note that CICS
Transaction Server for OS/390 log writes are more expensive than those in earlier
releases of CICS.
Syncpointing
The syncpoint cost needs to be factored into the overall transaction cost. The
amount of work at syncpoint varies according to the number of different types of
resource managers (RMs) involved during the unit of work (UOW). Therefore, the
cost can vary.
Typically, a syncpoint calls all the RMs that have been involved during the UOW.
These may or may not need to place data in the log buffer before it is written out.
For example, recoverable TD defers putting data into the log buffer until a
syncpoint. Recovery manager itself puts commit records into the log buffer and
requests a forced write. For these reasons it is difficult to give a precise cost for a
syncpoint, but the following figures should be used as a guide. They show
syncpoint costs, in 1K instruction units, for local resources only. If
distributed resources are updated, communication costs will need to be added.
If no recoverable resources have been updated, the cost is only the transaction
termination cost as described in “Transaction initialization and termination” on
page 644.
Receive
The receive cost is based on an LU2 type terminal sending a 4-byte transaction
identifier and includes all the VTAM processing using HPO=YES.
Attach/terminate
Assembler COBOL
Attach and initialization 7.5 11.0
Termination 6.2 10.0
Notes:
The transaction initialization cost is calculated from the start of transaction attach
to the start of the CICS application code.
The transaction termination cost assumes that no recoverable resources have been
updated. If recoverable resources have been updated, the syncpointing cost must
be added to the termination cost.
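From the table above, the fixed attach-plus-terminate overhead per transaction
(in 1K instruction units, with no recoverable-resource updates) works out as:

# Attach/initialization plus termination costs, in 1K instruction units.
costs = {"Assembler": (7.5, 6.2), "COBOL": (11.0, 10.0)}

for language, (attach, terminate) in costs.items():
    print(language, attach + terminate)  # Assembler 13.7, COBOL 21.0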
Send
The send cost consists of one request unit to an LU2 type terminal. It includes both
CICS and VTAM instructions for a system using HPO=YES.
File control
This section contains the relative costs of VSAM file control accesses. For read
operations the VSAM I/O cost is not included, because the necessity to access
DASD is workload dependent. For a read operation to complete, both the index
and the data must be accessed; if neither the index nor the data is in a buffer, an
I/O must be performed for each.
READ
KSDS ESDS RRDS Data Table (CMT)
3.0 2.4 2.2 First: 1.5
Subsequent: 1.1
READ UPDATE
Recoverable and non-recoverable files are included in the READ UPDATE cost:
Non-recoverable files
KSDS ESDS RRDS
3.1 2.3 2.2
Recoverable files
KSDS ESDS RRDS
5.5 4.3 4.2
Notes:
A recoverable READ UPDATE puts the ’before image’ into the log buffer which, if not
subsequently written to primary storage, is written out before the REWRITE is completed.
REWRITE
Recoverable and non-recoverable files are included in the REWRITE cost.
Non-recoverable files
KSDS ESDS RRDS
10.2 10.1 10.1
Recoverable files
KSDS ESDS RRDS
10.4 10.3 10.3
A REWRITE of a recoverable file requires that the log buffer containing the before image has
been written out. If the buffer has not already been written out since the READ UPDATE,
the cost of writing the log buffer is incurred. When the before image has been hardened
the VSAM I/O takes place.
At the end of the transaction, there are additional costs involved in syncpointing if
recoverable resources have been updated. See “Syncpointing” on page 643.
WRITE
The cost for WRITE includes nonrecoverable files and recoverable files.
Every WRITE has a data VSAM I/O associated with it. The index will need to be
written only when a control area split occurs.
Non-Recoverable files
KSDS ESDS RRDS
12.9 11.1 10.9
Recoverable files
KSDS ESDS RRDS
14.9 13.1 12.9
Notes:
Every WRITE has a hidden READ associated with it, to ensure that the record is not
already present in the file. This under-the-cover READ can incur the cost of I/Os if
the index and/or data are not in the buffer.
Each WRITE to a recoverable file requires that the log buffer containing the data image
has been written out before the VSAM I/O is done.
At the end of the transaction, there are additional costs involved in syncpointing if
recoverable resources have been updated. See “Syncpointing” on page 643.
DELETE
You cannot delete records from an ESDS file.
Non-Recoverable files
KSDS RRDS
12.5 11.5
Recoverable files
KSDS RRDS
14.5 13.5
At the end of the transaction, additional costs are involved in syncpointing if recoverable
resources have been updated. See “Syncpointing” on page 643.
Browsing
STARTBR READNEXT READPREV RESETBR ENDBR
3.1 1.5 1.6 2.6 1.4
UNLOCK
The pathlength for EXEC CICS UNLOCK is 0.7.
|
| Coupling facility data tables
| The CPU instruction data provided here was obtained using a 9672-R55 system.
| The CPU instructions per API call, for record lengths greater than 4K, are:
| API CALL CONTENTION LOCKING RECOVERABLE
| READ 15.3 15.3 15.3
| READ/UPDATE 15.0 25.7 25.9
| REWRITE 23.0 27.5 36.5
| WRITE 11.5 11.5 16.5
| DELETE 10.5 14.5 20.0
|
Record Level Sharing (RLS)
For information about performance measurements on record level sharing (RLS),
see the System/390 MVS Parallel Sysplex Performance manual, SG24-4356-02.
Temporary Storage
Main Storage
WRITEQ REWRITE READQ DELETEQ
1.0 0.8 0.8 0.71 + 0.23 * n
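The DELETEQ figure is a formula rather than a constant; taking n to be the
number of items in the queue (our reading, since the text does not define n
here), it evaluates as:

# DELETEQ cost for a main storage TS queue, in 1K instruction units.
def deleteq_cost(n):
    return 0.71 + 0.23 * n

print(deleteq_cost(10))  # deleting a 10-item queue -> about 3.01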
Auxiliary Storage
The approximations for auxiliary TS queues do not include any VSAM I/O cost. A
VSAM I/O costs approximately 11.5K instructions and will occur as follows:
v When attempting to write an item that does not fit in any buffer
v When reading an item that is not in the buffer
v If, when reading a control interval from DASD with no available buffer space,
the least recently used buffer must first be written out.
Therefore, under certain circumstances, a READQ could incur the cost of two
VSAM I/Os.
Non-Recoverable TS Queue
WRITEQ REWRITE READQ DELETEQ
1.3 1.8 1.0 0.75 + 0.18 * n
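Combining this table with the 11.5K-instruction VSAM I/O cost quoted above
gives a worst-case READQ sketch (actual costs depend on what is already
buffered):

# Worst-case READQ from a non-recoverable auxiliary TS queue: the 1.0K
# base cost plus two VSAM I/Os (write out the least recently used buffer,
# then read the wanted control interval in).
worst_case_readq = 1.0 + 2 * 11.5
print(worst_case_readq)  # about 24.0K instructions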
Recoverable TS Queue
WRITEQ REWRITE READQ DELETEQ
1.4 1.9 1.0 0.87 + 0.18 * n
Note: The main difference between the cost of accessing non-recoverable and
recoverable TS queues is incurred at syncpoint time, when, for recoverable
queues, the following happens:
v The VSAM I/O cost is incurred if only one control interval has been used
during the unit of work, and it has not already reached DASD.
v The new DASD control interval addresses are put in the log buffer. The
cost for recovery manager to do this is about 2.0K instructions.
v A forced log write is requested and the syncpoint will complete when the
log buffer has been written to primary storage. For more information, see
“Variable costs” on page 641.
Intrapartition Queues
The approximations for non-recoverable and logically recoverable intrapartition TD
queues do not include any VSAM I/O cost. A VSAM I/O costs approximately
11.5K instructions and occurs:
v When attempting to write an item that will not fit in any buffer.
v When reading an item that is not in the buffer.
v If, when reading a control interval from DASD and there is no available buffer
space, the least recently used buffer will first have to be written out. Therefore,
under certain circumstances, a READQ could incur the cost of two VSAM I/Os.
Non-Recoverable TD Queue
WRITEQ READQ DELETEQ
1.5 1.3 1.3
Notes:
Physically recoverable WRITEQ requests involve forcing a VSAM I/O and forcing a log
write to the CF for every request.
Extrapartition queues
The approximate calculations for extrapartition TD queues do not include any I/O
cost. An I/O for a physically sequential file costs approximately 7.0K instructions
and occurs
as follows:
v When attempting to write an item that will not fit in any buffer.
v When reading an item that is not in the buffer.
WRITEQ READQ
1.2 1.0
Program Control
Program control costs assume that all programs have previously been loaded, and
that there is no load operation from DASD.
Assembler COBOL
LINK 1.5 4.0
XCTL 2.1 5.1
RETURN 1.1 3.3
Storage control
GETMAIN FREEMAIN
0.9 0.9
Interregion Communication
This section describes the additional costs of communication between two CICS
regions using the following communication methods:
MRO XM
This is CICS to CICS communication where both regions are in the same MVS
image. CICS uses MVS cross memory (XM) services for this environment.
MRO XCF (via CTC)
This is CICS to CICS communication where both regions are on separate MVS
images. In this environment the transport class is defined to use an XCF path
that exploits a channel-to-channel (CTC) device for message traffic between the
two MVS images. This is only supported within a sysplex.
MRO XCF (via CF)
This is CICS to CICS communication where both regions are on separate MVS
images. In this environment the transport class is defined to use an XCF path
that exploits a CF structure for message traffic between the two MVS images.
This is supported only within a sysplex.
ISC LU6.2
This is CICS to CICS communication where both regions are on separate MVS
images. In this environment VTAM LU6.2 uses a CTC for communication
between the two MVS images.
Transaction routing
MRO XM MRO XCF (via CTC) MRO XCF (via CF) ISC LU6.2
37.0 43.0 66.0 110.0
Function shipping
MRO XM MRO XCF (via CTC) MRO XCF (via CF) ISC LU6.2
21.4 35.0 59.9 115.0
The above costs relate to CICS systems with long running mirrors. For example,
if you were migrating from local file access to MRO XM and requesting 6
function ships per transaction, the additional cost can be calculated from the
figures above, as shown below.
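Assuming the figures in this table are per function-shipped request (an
assumption on our part; the units are 1K instructions, as elsewhere in this
section), the example works out as:

# Additional cost of 6 function ships per transaction over MRO XM,
# taking 21.4 as the per-request cost in 1K instruction units.
ships_per_txn = 6
per_ship_mro_xm = 21.4

print(ships_per_txn * per_ship_mro_xm)  # about 128.4K instructions added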
Glossary

access method control block (ACB). A control block that links an application
program (for example, a CICS program) to an access method (for example VSAM
or ACF/VTAM). An ACB is used when communicating with DL/I only when the
underlying access method is VSAM.

ACF. See advanced communications function.

active session. In XRF, a session between a class 1 terminal and the active
system. A session that connects the active CICS to an end user.

active system. In an XRF environment, the CICS system that currently supports
the processing requests of the user.

active task. A CICS task that is eligible for dispatching by CICS. During
emergency restart, a task that

agent. In a two-phase commit or MRO syncpointing sequence, a task that receives
syncpoint requests from the initiator (the task that initiates the syncpoint
activity).

AID. Automatic Initiate Descriptor.

AIEXIT. System initialization parameter used to specify the name of the
autoinstall user program that you want CICS to use when autoinstalling VTAM
terminals. The default is the name of the CICS-supplied autoinstall user program,
DFHZATDX. See the CICS System Definition Guide for more information.

AILDELAY. System initialization parameter used to specify the delay period that
elapses between the end of a session between CICS and a terminal and the
deletion of the terminal entry. The default is zero,
| CICS business transaction services. CICS domains that support an application
| programming interface (API) and services that simplify the development of
| business transactions.

CICS dynamic storage area (CDSA). A storage area allocated below the 16MB
line, intended primarily for the small amount of CICS code and control blocks
that remain below the line in CICS/ESA 3.3. The size of the CDSA is controlled
by the CDSASZE system initialization parameter.

CICS monitoring facility. The CICS monitoring facility (part of the system
monitoring component) gives a comprehensive set of operational data for CICS,
using one data recording program and, optionally, one or more data sets. See
also performance class data, exception class data, and SYSEVENT data.

CICS monitoring facility data set. CICS monitoring facility data sets are used to
record information that is output by the CICS monitoring facility program. The
MCT defines which journal data sets are used by each class of monitoring. These
data sets appear in the JCT with the FORMAT=SMF parameter. The format of
the records is the system management facility (SMF), type 110 format.

CICS PD/MVS. CICS Problem Determination/MVS (program number 5695-035)
is a set of online tools to help system programmers analyze and manage system
dumps. It automates dump analysis and formats the results into interactive
online panels that can be used for further diagnosis and resolution of problems.

CICS private area. Element of CICS storage that has both static and dynamic
storage requirements. The static areas are set at initialization time and do not
vary over the execution of that address space. The dynamic areas increase or
decrease their allocations as the needs of the address space vary.

CICS program library. The CICS program library contains all user-written
programs and CICS programs to be loaded and executed as part of the online
system. This group includes the control system itself and certain user-defined
system control tables essential to CICS operation. The library contains program
text and, where applicable, a relocation dictionary for a program. The contents of
this library are loaded asynchronously into CICS dynamic storage for online
execution.

CICS region userid. The userid assigned to a CICS region at CICS initialization.
It is specified either in the RACF started procedures table when CICS is started
as a started task, or on the USER parameter of the JOB statement when CICS is
started as a job.

CICS system definition data set (CSD). A VSAM KSDS cluster with alternate
paths. The CSD data set contains a resource definition record for every record
defined to CICS using resource definition online.

CICS-attachment facility. Provides a multithread connection to DB2 to allow
applications running under CICS to execute DB2 commands.

CICS-key. Storage in the key in which CICS is given control (key 8) when CICS
storage protection is used. It is for CICS code and control blocks and can be
accessed and modified by CICS. Application programs in user-key cannot modify
CICS-key storage, but they can read it. The storage is obtained in MVS
exclusive-key storage. Compare with user-key.

CICSPARS/MVS. The Customer Information Control System Performance
Analysis Reporting System (CICSPARS/MVS) (program number 5665-355)
provides a method of reporting performance and accounting information
produced by the CICS monitoring facility.

class 1 terminal. In XRF, a remote SNA VTAM terminal connected through a
boundary network node IBM 3745/3725/3720 Communication Controller with
an NCP that supports XRF. Such a terminal has a backup session to the alternate
CICS system.

class 2 terminal. In XRF, a terminal belonging to a class mainly comprised of
VTAM terminals that are not eligible for class 1. For these terminals, the alternate
system tracks the session, and attempts reestablishment after takeover.

class 3 terminal. In XRF, a terminal belonging to a class mainly comprised of
TCAM(DCB) terminals. These terminals lose their sessions at takeover.

CLT. System initialization parameter used to specify the suffix for the command
list table, if this system initialization table is used by an alternate XRF system.
See the CICS System Definition Guide for more information.

cluster. A data set defined to VSAM. A cluster can be a key-sequenced data set,
an entry-sequenced data set, or a relative record data set.

CMAC. A CICS-supplied transaction used to display individual message
information as it is provided in the CICS Messages and Codes manual. See the
CICS Supplied Transactions manual for more information.

COBOL. Common business-oriented language. An English-like programming
language designed for business data processing applications.

command list table (CLT). In XRF, a CICS table that contains a list of MVS
commands and messages to be issued during takeover. The CLT is defined to the
alternate CICS system and used during takeover.

command security. A form of security checking that can be specified for a subset
of the CICS application programming interface (API) commands. Command
security operates in addition to any transaction security or resource security
specified for a transaction. For
common system area (CSA). A major CICS storage control block that contains
areas and data required for the operation of CICS. It can be extended to include
a user-defined common work area (CWA) that can be referred to by application
programs. This area is duplicated above the 16MB line as the extended common
system area (ECSA).

common work area (CWA). The common work area (CWA) is an area within the
CSA that can be used by application programs for user data that needs to be
accessed by any task in the system. This area is acquired during system
initialization and its size is determined by the system programmer at system
generation. It is initially set to binary zeros. Its contents can be accessed and
altered by any task during CICS operation. Contrast with transaction work area
(TWA).

communication area (COMMAREA). An area that is used to pass data between
tasks that communicate with a given terminal. The area can also be used to pass
data between programs within a task.

communication management configuration (CMC). A configuration in which the
VTAM subsystem that owns the terminals is in a different MVS image from the
active or the alternate CICS system.

control interval (CI). A fixed-length area of auxiliary-storage space in which
VSAM stores records and distributes free space. The unit of information
transmitted to or from auxiliary storage by VSAM, independent of physical
record size.

control subpool. A CICS area that holds the dispatch control area (DCA),
interval control elements (ICEs), automatic initiate descriptors (AIDs), queue
element areas (QEAs), and other control information. Generally, the control
subpool occupies only one page.

conversational. Pertaining to a program or a system that carries on a dialog with
a terminal user, alternately accepting input and responding to the input quickly
enough for the user to maintain a train of thought.

CSA. See Common system area.

CSAC. Transient data destination used by the abnormal condition program
(DFHACP).

CSD. See CICS system definition data set.

CSMT. Transient data destination used by the terminal abnormal condition
program (DFHTACP), the node abnormal condition program (DFHZNAC), and
the abnormal condition program (DFHACP) for writing terminal error and
abend messages.

CSTE. Transient data destination used by the terminal abnormal condition
program (DFHTACP).

CWA. See Common work area.

CWAKEY. System initialization parameter used to specify the storage key for the
CWA if CICS is running with the storage protection facility. The default storage
key is user-key.

data entry database (DEDB). An IMS hierarchic database designed to provide
efficient storage and fast online gathering, retrieval, and update of data using
VSAM ESDS. From CICS, a DEDB is accessible only through DBCTL, not
through local DL/I.

Data Language/I (DL/I). A high-level interface between applications and IMS. It
is invoked from PL/I, COBOL, or assembler language by means of ordinary
subroutine calls. DL/I enables you to define data structures, to relate structures
to the application, and to load and reorganize these structures. It enables
applications programs to retrieve, replace, delete, and add segments to
databases.

data management block (DMB). An IMS control block that resides in main
storage and describes and controls a physical database. It is constructed from
information obtained from the application control block (ACB) library or the
database description (DBD) library.

data set name sharing. An MVS option that allows one set of control blocks to
be used for the base and the path in a VSAM alternate index.

data sharing (IMS). The concurrent access of DL/I databases by two or more
IMS/VS subsystems. The subsystems can be in one processor or in separate
processors. In IMS data sharing, CICS can be an IMS subsystem. There are two
levels of data sharing: block-level data sharing and database-level data sharing.

data stream. All information (data and control information) transmitted through
a data channel in a single read or write operation.

data-owning region (DOR). A CICS address space whose primary purpose is to
manage files and databases. See application-owning region (AOR), and
terminal-owning region (TOR).

DATABASE 2 (DB2). A relational database management system in which data is
presented to the user in the form of tables.

database-level sharing. A kind of IMS data sharing that enables application
programs in one IMS system to read data while a program in another IMS
system reads it or updates it.

DCT. System initialization parameter used to specify the destination control
table suffix. See Destination control table. For more information, see the CICS
System Definition Guide.

deadlock. Unresolved contention for the use of a resource. An error condition in
which processing cannot continue because each of two elements of the process is
waiting for an action by, or a response from, the other.

deferred work element (DWE). A work element created and placed on a chain
(the DWE chain) to save information about an event that must be completed
before task termination but is not completed at the present time. DWEs are also
used to save information about work to be backed out in case of an abend.

device independence. The capability to write application programs so that they
do not depend on the physical characteristics of devices. BMS provides a
measure of device independence.

DFH. Three-character prefix of all CICS modules.

DFHCSDUP. CICS system definition data set (CSD) utility program. It provides
offline services for the CSD. It can be invoked as a batch program or from a
user-written program running either in batch mode or under TSO.

dispatching. The act of scheduling a task for execution, performed by CICS task
control.

dispatching priority. A number assigned to tasks, used to determine the order in
which they are to use the processor in the CICS multitasking environment.

distributed program link (DPL). Type of CICS intercommunication which, in
CICS/ESA 3.3, enables CICS to ship LINK requests between host CICS regions.
In CICS OS/2, DPL enables CICS OS/2™ to ship LINK requests up to a host
CICS region, or to another CICS OS/2 system.

distributed transaction processing (DTP). Type of intercommunication in CICS,
in which the processing is distributed between transactions that communicate
synchronously with one another over intersystem or interregion links.

DMB. See Data management block (DL/I).

DPL. Distributed program link.

| DSALIM. Dynamic storage area limit. System initialization parameter.

DTB. Dynamic transaction backout.

DTP. Distributed transaction processing.

DUMP. System initialization parameter used to specify whether CICS is to take
SDUMPs. The default is YES. See the CICS System Definition Guide for more
information.

dump control. The CICS element that provides storage dumps for help during
testing.

dynamic transaction backout (DTB). The process of canceling changes made by
a transaction to stored data following the failure of that transaction for whatever
reason. Dynamic transaction backout is required with resource definition online.
event monitoring point (EMP). Point in the CICS code at which CICS monitoring
data is collected. You cannot relocate these system-defined points.

exception class data. CICS monitoring information on exception conditions
raised by a transaction, such as queuing for VSAM strings or waiting for
temporary storage. This data highlights possible problems in system operations.

exception trace entry. An entry made to the internal trace table and any other
active trace destinations when CICS detects an exception condition. It gives
information about what was happening at the time the failure occurred and
what was being used.

exclusive-key storage. In MVS key-controlled storage protection, storage with
storage keys other than open-key.

EXEC. Key word used in CICS command language. All CICS commands begin
with the keywords EXEC CICS.

Execution Diagnostic Facility (EDF). A facility used for testing application
programs interactively online, without making any modifications to the source
program or to the program preparation procedure. The facility intercepts
execution of the program at various points and displays information about the
program at these points. Also displayed are any screens sent by the user
program, so that the programmer can converse with the application program
during testing just as a user would do on the production system.

extended CICS dynamic storage area (ECDSA). Storage area allocated above the
16MB line for CICS code and control blocks which are eligible to reside above
the 16MB line, but are not eligible for the ERDSA (that is, they are not reentrant).
See the CICS System Definition Guide for more information.

extended common system area (ECSA). A major element of MVS/ESA virtual
storage above the 16MB line. This area contains pageable system data areas that
are addressable by all active virtual storage address spaces.

extended read-only dynamic storage area (ERDSA). An area of storage allocated
above the 16MB line and used for eligible, reentrant CICS and user application
programs, which must be link-edited with the RENT and AMODE(31) attributes.
The storage is obtained in key 0, non-fetch-protected storage.

Extended Recovery Facility (XRF). A facility that increases the availability of
CICS transaction processing, as seen by the end users. Availability is improved
by having a second CICS system (the alternate system) ready to continue
processing the workload, if and when particular failures that disrupt user
services occur on the first system (the active system).

Extended Recovery Facility (XRF) complex. All the required hardware and
software components (MVS images and licensed programs) that provide the XRF
function for a CICS system.

extended system queue area (ESQA). A major element of MVS/ESA virtual
storage above the 16MB line. This storage area contains tables and queues
relating to the entire system. It duplicates above the 16MB line the system queue
area (SQA).

extended user dynamic storage area (EUDSA). Storage area allocated above the
16MB line and reserved exclusively for those user application programs that
execute in user-key, that are eligible to reside above the 16MB line, but are not
eligible for the ERDSA (that is, they are not reentrant).

external response time. Elapsed time from pressing the ENTER key or another
AID key until the action requested by the terminal user is completed, and the
next entry can be started.

external security manager (ESM). A program, such as RACF, that performs
security checking for CICS users and resources.

external throughput rate (ETR). The amount of useful work completed in a unit
of time (for example, the number of transactions completed per elapsed second).
extrapartition transient data. A CICS facility for temporarily saving data in the
form of queues, called destinations. Each extrapartition TD destination requires a
resource definition that links it to a QSAM data set outside the CICS region.
Each extrapartition TD destination uses a different QSAM data set.
Extrapartition destinations are used for data that is either coming from a source
outside the region, or being directed from a source within the region to a
destination outside the region. Extrapartition data written by CICS is usually
intended for subsequent input to non-CICS batch programs. Examples of data
that might be written to extrapartition destinations include logging records,
statistics, and transaction error messages. Contrast with intrapartition transient
data.

F

FCT. System initialization parameter used to specify the suffix of the file control
table to be used. This parameter is effective only on a CICS cold or initial start.
See the CICS System Definition Guide for more information.

file control table (FCT). Table containing the characteristics of the files accessed
by file control.

file request thread element (FRTE). An element used by CICS file control to link
related requests together as a file thread; to record the existence of READ SET
storage to be released at syncpoint and the existence of any other outstanding
work that must be completed at syncpoint; to register a task as a user of a file to
prevent it being closed while still in use.

file-owning region (FOR). A CICS address space whose primary purpose is to
manage files and databases. Deprecated term for data-owning region (DOR). See
also application-owning region (AOR), and terminal-owning region (TOR).

first failure data capture (FFDC). Data relevant to a CICS exception condition
that is recorded as soon as possible after the condition has been detected.

fixed-block-architecture (FBA) device. A disk storage device that stores data in
blocks of fixed size. These blocks are addressed by block number relative to the
beginning of the particular file.

format. The arrangement or layout of data on a data medium, usually a display
screen with CICS.

format independence. The ability to send data to a device without having to be
concerned with the format in which the data is displayed. The same data may
appear in different formats on different devices.

fragmentation. The breaking up of free storage into small areas (by intervening
used storage areas). This leads to the effective storage available for use being
reduced.

FREEMAIN. EXEC CICS command used to release main storage. For
programming information, see the CICS Application Programming Reference
manual.

function shipping. The process, transparent to the application program, by
which CICS accesses resources when those resources are actually held on
another CICS system.

G

GTF. Generalized trace facility—a data-collection routine in MVS. GTF traces the
following system events: seek addresses on start I/O records, SRM activity, page
faults, I/O activity, and supervisor services. Execution options specify the system
events to be traced.

GCD. Global catalog data set. Global catalog is a VSAM key-sequenced data set
(KSDS). Essential for recovery purposes.

H

high performance option (HPO). An option provided with MVS to improve
performance by reducing the transaction pathlength; that is, the number of
instructions needed to service each request.

high private area. Part of the CICS address space, consisting of the local system
queue area (LSQA), the scheduler work area (SWA), and subpools 229 and 230.
The area at the high end of the CICS address space is not specifically used by
CICS, but contains information and control blocks that are needed by the
operating system to support the region and its requirements.

Hiperspace. A high-performance storage area in the processor or multiprocessor.

host computer. The primary or controlling computer in a data communication
system.

host processor. The primary or controlling computer in a multiple computer
installation.

HPO. System initialization parameter used to indicate whether you want to use
the VTAM authorized path feature of the high performance option. The default
is NO. You can code this parameter only in the system initialization table. See
the CICS System Definition Guide for more information.

I

I/O. Input/output (primarily from and to terminals).

ICP. System initialization parameter used to specify that you want to cold start
the CICS interval control program. See the CICS System Definition Guide for more
information.
in-flight task. A task that is in progress when a CICS internal trace. CICS trace facility that is always
system failure or immediate shutdown occurs. During present in virtual storage. When CICS detects an
emergency restart, a task that caused records to be exception condition, an entry always goes to the
written to the system log, but for which no syncpoint internal trace table, even if you have turned tracing off.
record has been found for the current UOW. This task The internal trace table is a wraparound table whose
was interrupted before the UOW completed. size can be set by the TRTABSZ system initialization
parameter and can be changed by the CICS SET
in-flight transaction. Any transaction that was still in TRACE DEST command. See the CICS Problem
process when system termination occurred. Determination Guide for more information.
Information Management System (IMS). A database interregion communication (IRC). The method by
manager used by CICS to allow access to data in DL/I which CICS provides communication between a CICS
databases. IMS provides for the arrangement of data in region and another region in the same processor. Used
an hierarchical structure and a common access for multiregion operation (MRO) . Compare with
approach in application programs that manipulate IMS intersystem communication.
databases.
intersystem communication (ISC). Communication
initialization. Actions performed by the CICS system between separate systems by means of SNA
to construct the environment in the CICS region to networking facilities or by means of the
enable CICS applications to be run. The stage of the application-to-application facilities of VTAM. ISC links
XRF process when the active or the alternate CICS CICS systems and other systems, and may be used for
system is started, signs on to the control data set, and user application to user application communication, or
begins to issue its surveillance signal. for transparently executing CICS functions on a remote
CICS system. Compare with multiregion operation and
initialization phase. The process of bringing up the interregion communication.
active CICS system and the alternate CICS system in an
XRF complex. The two actions are performed interval control element (ICE). An element created for
independently. each time-dependent request received by the interval
control program. These ICEs are logically chained to
installation. A particular computing system, in terms the CSA in expiration time-of-day sequence.
of the work it does and the people who manage it, Expiration of a time-ordered request is detected by the
operate it, apply it to problems, service it, and use the expired request logic of the interval control program
work it produces. The task of making a program ready running as a CICS system task whenever the task
to do useful work. This task includes generating a dispatcher gains control. The type of service
program, initializing it, and applying any changes to it. represented by the expired ICE is initiated, providing
all resources required for the service are available, and
interactive. Pertaining to an application in which each
the ICE is removed from the chain. If the resources are
entry entails a response from a system or program, as
not available, the ICE remains on the chain and another
in an inquiry system or an airline reservation system.
Glossary 661
attempt to initiate the requested service is made the
next time the task dispatcher gains control.
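As an illustration of the GETMAIN and FREEMAIN commands described in the entries above, the following COBOL fragment acquires and then releases a work area. It is a minimal sketch only, assuming a hypothetical 100-byte area; the CICS Application Programming Reference manual gives the authoritative syntax.

   * Sketch only: acquire a 100-byte work area, use it,
   * then release it. Names and lengths are hypothetical.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. GETDEMO.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    77  WS-LEN       PIC S9(8) COMP VALUE 100.
    LINKAGE SECTION.
    01  WORK-AREA    PIC X(100).
    PROCEDURE DIVISION.
        EXEC CICS GETMAIN
             SET(ADDRESS OF WORK-AREA)
             FLENGTH(WS-LEN)
        END-EXEC.
   *    ... use WORK-AREA here ...
        EXEC CICS FREEMAIN
             DATA(WORK-AREA)
        END-EXEC.
        EXEC CICS RETURN END-EXEC.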
interval control program (ICP). The CICS program that provides time-dependent facilities. Together with task control, interval control (sometimes called time management) provides various optional task functions (system stall detection, runaway task control, task synchronization, etc.) based on specified intervals of time, or the time of day.

interval statistics. CICS statistics gathered during a specified interval. See also end-of-day statistics, requested statistics, requested reset statistics, and unsolicited statistics.

intrapartition transient data (TD). A CICS facility for temporarily saving data in the form of queues, called destinations. All intrapartition TD destinations are held as queues in the same VSAM data set, which is managed by CICS. Data is written to the queue by a user task. The queue can be used subsequently as input data by other tasks within the CICS region. All access is sequential, governed by read and write pointers. Once a record has been read, it cannot be read subsequently by another task. An intrapartition destination requires a resource definition containing information that locates the queue in the intrapartition data set. Applications that might use intrapartition queues include message switching, data collection, and queuing of orders.

ISC. System initialization parameter used to include the CICS programs required for interregion or intersystem communication. See the CICS System Definition Guide for more information.

J

job control language (JCL). Control language used to describe a job and its requirements to an operating system.

K

key 0. Storage used by CICS for the extended read-only dynamic storage area (ERDSA). The allocation of key 0 storage for the ERDSA is optional.

key-controlled storage protection. An MVS facility for protecting access to storage. Access to key-controlled storage is permitted only when the storage key matches the access key associated with the request.

keypoint. The periodic recording of system information and control blocks on the system log—also the data so recorded. See also activity keypoint and warm keypoint.

L

Last-in-first-out (LIFO). A queuing technique in which the next item to be retrieved is the last item placed in the queue.

LIFO storage. Storage used by reentrant CICS management modules to save registers.

link pack area (LPA). A major element of MVS/ESA virtual storage below the 16MB line. The storage areas that make up the LPA contain all the common reentrant modules shared by the system. The LPA provides economy of real storage by sharing one copy of the modules, protection because LPA code cannot be overwritten even by key 0 programs, and reduced pathlength because the modules can be branched to. The LPA is duplicated above the 16MB line as the extended link pack area (ELPA).

LISTCAT. A VSAM tool that provides information about the actual status of VSAM data sets.

local. In data communication, pertaining to devices that are accessed directly without use of a telecommunication line. Contrast with remote. Synonym for channel-attached.

local DL/I. DL/I residing in the CICS address space.

local resource. In CICS intercommunication, a resource that is owned by the local system.

local shared resources (LSR). Files that share a common pool of buffers and a common pool of strings; that is, control blocks supporting the I/O operations. Contrast with nonshared resources.

local system. The system in a multisystem environment on which the application program is executing. The local application may process data from databases located on both the same (local) system and another (remote) system. Contrast with remote system.

local system queue area (LSQA). An element of the CICS address space. It generally contains the control blocks for storage and contents supervision. See also high private area.

local work area. Area provided for the use of a single task-related user exit program. It is associated with a single task and lasts for the duration of the task only.

log. A recording of changes made to a file. This recording can be used for subsequent recovery of the file. See also dynamic log, journal, and system log.

logging. The recording (by CICS) of recovery information onto the system log, for use during emergency restart. A specific journaling function that records changes made to the system activity environment and database environment. These records are used during emergency restart.
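The transient data facilities described in the intrapartition and extrapartition entries above are driven by the WRITEQ TD and READQ TD commands. The following COBOL fragment is a sketch only, assuming a hypothetical destination named LOGQ that has been defined to CICS.

   * Sketch only: append a record to destination LOGQ,
   * then read a record back from the same queue.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. TDDEMO.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    77  WS-LEN     PIC S9(4) COMP VALUE 80.
    01  WS-REC     PIC X(80).
    PROCEDURE DIVISION.
        EXEC CICS WRITEQ TD
             QUEUE('LOGQ')
             FROM(WS-REC)
             LENGTH(WS-LEN)
        END-EXEC.
        EXEC CICS READQ TD
             QUEUE('LOGQ')
             INTO(WS-REC)
             LENGTH(WS-LEN)
        END-EXEC.
        EXEC CICS RETURN END-EXEC.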
look-aside query. Query performed in one partition by an operator working in another partition. Using partitions, a partially completed operation need not be transmitted to the host processor before releasing the screen for an inquiry.

LPA. System initialization parameter used to indicate whether any CICS management modules can be used from the link pack area. The default is NO. See the CICS System Definition Guide for more information.

LUTYPE6.1 (LU6.1). Type of logical unit used for processor-to-processor sessions. LUTYPE6.1 is a development of LUTYPE6. CICS—IMS intercommunication uses LUTYPE6.1 sessions.

LUTYPE6.2 (LU6.2). Type of logical unit used for CICS intersystem (ISC) sessions. LUTYPE6.2 is a development of LUTYPE6.1. The LUTYPE6.2 architecture supports both CICS host to system-level products and CICS host to device-level products. CICS ISC uses LUTYPE6.2 sessions. APPC is the protocol boundary of the LU6.2 architecture.

M

main storage. (ISO) Program-addressable storage from which instructions and data can be loaded directly into registers for subsequent execution or processing. See also real storage, storage, virtual storage.

map. A format established for a page or a portion of a page, or a set of screen format descriptions. A concept of CICS BMS that maps the relationship between the names of program variables and the position in which their values will appear on a display device. The map also contains other formatting information, such as field attributes. A map describes constant fields and their position on the display; the format of input and output fields; the attributes of constant and variable fields; and the symbolic names of variable fields.

message control program (MCP). In ACF/TCAM, a specific implementation of an access method, including I/O routines, buffering routines, activation and deactivation routines, service facilities, and SNA support.

mirror task. A task required to service any incoming request that specifies a CICS mirror transaction (CSMI, CSM1, CSM2, CSM3, CSM5, CPMI, CVMI).

mirror transaction. Recreates the request that is function shipped from one system to another, issues the request on the second system, and passes the acquired data back to the first system.

MN. System initialization parameter used to indicate whether monitoring is to be switched on or off at initialization. The default is OFF. See the CICS System Definition Guide for more information.

MNEVE. System initialization parameter used to indicate whether SYSEVENT monitoring is to be made active during CICS initialization. The default is OFF. See the CICS System Definition Guide for more information.

MNEXC. System initialization parameter used to indicate whether the monitoring exception class is to be made active during CICS initialization. The default is OFF. See the CICS System Definition Guide for more information.

MNPER. System initialization parameter used to indicate whether the monitoring performance class is to be made active during CICS initialization. The default is OFF. See the CICS System Definition Guide for more information.

modegroup. A VTAM LOGMODE entry, which can specify (among other things) the class of service required for a group of APPC sessions.

modename. The name of a modeset and of the corresponding modegroup.

modeset. In CICS, a group of APPC sessions. A modeset is linked by its modename to a modegroup (VTAM LOGMODE entry) that defines the class of service for the modeset.

modified link pack area (MLPA). An element of MVS/ESA virtual storage. This area provides a temporary extension to the PLPA, existing only for the life of the current IPL. You can use this area to add or replace altered LPA-eligible modules without having to recreate the LPA. See also link pack area (LPA) and pageable link pack area (PLPA).
monitoring. The regular assessment of an ongoing production system against defined thresholds to check that the system is operating correctly. Also, running a hardware or software tool to measure the performance characteristics of a system. Note that CICS distinguishes between monitoring and statistics, but IMS does not. See also statistics.

monitoring control table (MCT). A table describing the way the user data fields in the accounting and performance class monitoring records are to be manipulated at each user event monitoring point (EMP). It also identifies the CICS user journals in which the data for each monitoring class is to be recorded. The MCT contains the definition of user event monitoring points (EMPs), and specifies the journal data sets used to record the data.

monitoring record. Any of three types of task-related activity record (performance, event, and exception) built by the CICS monitoring domain. Monitoring records are available to the user for accounting, tuning, and capacity planning purposes.

MROBTCH. System initialization parameter used to specify the number of events that must occur before CICS is posted for dispatch due to the batching mechanism. The default is one. See the CICS System Definition Guide for more information.

MROLRM. System initialization parameter used to specify whether you want to establish an MRO long-running mirror task. The default is NO. See the CICS System Definition Guide for more information.

multiregion operation (MRO). Communication between CICS systems in the same processor without the use of SNA network facilities. This allows several CICS systems in different regions to communicate with each other, and to share resources such as files, terminals, temporary storage, and so on. Contrast with intersystem communication.

multitasking. Concurrent execution of application programs within a CICS region.

multithreading. Use, by several transactions, of a single copy of an application program.

MVS image. A single copy of the MVS operating system. This can be a physical processing system (such as an IBM 3090) that is partitioned into one or more processors, where each partition is capable of running under the control of a single MVS operating system. Alternatively, if you are running MVS with the processor resource/systems manager (PR/SM), an MVS image can consist of multiple logical partitions, with each logical partition (LP) operating a copy of MVS. Also referred to as a single- or multi-MVS environment, according to the number of MVS systems.

MVS/DFP. MVS/Data Facility Product, a major element of MVS, including data access methods and data administration utilities.

MVS/ESA extended nucleus. A major element of MVS/ESA virtual storage. This area duplicates above the 16MB line the MVS/ESA nucleus.

MVS/ESA nucleus. A major element of MVS/ESA virtual storage. This static storage area contains control programs and key control blocks. The area includes the nucleus load module and is of variable size, depending on the installation's configuration. The nucleus is duplicated above the 16MB line as the MVS/ESA extended nucleus.

MXT. System initialization parameter used to specify the maximum number of tasks that CICS allows to exist at any time. The default is 32. See the CICS System Definition Guide for more information.

N

NEB. See node error block.

NEP. See node error program.

NETNAME (netname). In CICS, the name by which a CICS terminal or a CICS system is known to ACF/VTAM.

NetView. A network management product that can provide rapid notification of events and automated operations.

NetView Performance Monitor (NPM). A program product that collects and reports on data in the host and NCP.

network. An interconnected group of nodes. The assembly of equipment through which connections are made between data stations.

network configuration. In SNA, the group of links, nodes, machine features, devices, and programs that make up a data processing system, a network, or a communication system.

network control program (ACF/NCP). A program that controls the operation of a communication controller (3745, 3725, 3720, 3705) in which it resides. NCP builds the backup sessions to the alternate CICS system for XRF-capable terminals. NCP is generated by the user from a library of IBM-supplied modules.

Network Logic Data Manager (NLDM). A program that collects and interprets records of errors detected in a network and suggests possible solutions. NLDM consists of commands and data services processors that comprise the NetView software monitor component.

node error program (NEP). A user-replaceable program used to allow user-dependent processing whenever a communication error is reported to CICS.

nonconversational. A mode of CICS operation in which resources are allocated, used, and released immediately on completion of the task.

pageable link pack area (PLPA). An element of MVS/ESA virtual storage. This area contains supervisor call routines, access methods, and other read-only system programs, along with read-only reenterable user programs selected by an installation to be shared among users of the system. Optional functions or devices selected by an installation during system generation are also included.

performance evaluation. The determination of how well a specific system is meeting or may be expected to meet specific processing requirements at specific interfaces. Performance evaluation, by determining such factors as throughput rate, turnaround time, and constrained resources, can provide important inputs and data for the performance improvement process.

planned takeover. In XRF, a planned shutdown of the active CICS system, and takeover by the alternate system, for maintenance or operational reasons.

polling. The process whereby stations are invited, one at a time, to transmit. The polling process usually involves the sequential interrogation of several data stations.
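Several of the entries above (MN, MNEXC, MNPER, MROBTCH, MROLRM, and MXT) describe system initialization parameters. As a hedged illustration, such parameters are commonly supplied as overrides in the SYSIN data set of the CICS startup job, as in the following sketch; the values shown are examples only, not tuning recommendations.

   * Hypothetical SYSIN override member for a CICS startup job.
   MN=ON,
   MNPER=ON,
   MNEXC=ON,
   MXT=60,
   MROBTCH=1,
   MROLRM=YES,
   .END

See the CICS System Definition Guide for the full list of parameters and the rules for coding overrides.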
post-takeover. The XRF phase, immediately following takeover, when the new active CICS system does not have an alternate system.

pregenerated system. A CICS system distributed in a form that has already undergone the system generation process.

priority. A rank assigned to a task that determines its precedence in receiving system resources.

private area. A major element of MVS/ESA virtual storage below the 16MB line. It contains the local system queue area (LSQA), scheduler work area, subpools 229 and 230, a 16KB system region area, and a private user region for running programs and storing data. This area is duplicated (except for the 16KB system region area) above the 16MB line as the extended private area.

profile. In CICS, a set of options specified in a resource definition that can be invoked by a transaction definition. Profiles control the interactions between the transaction and terminals or logical units. CICS supplies profile definitions suitable for most purposes. If a transaction definition does not specify a profile, a standard profile is used. In RACF, data that describes the significant characteristics of a user, a resource, a group of users, or a group of resources. See resource profile, discrete profile, generic profile, user profile, resource group profile, data set profile.

program check. A condition that occurs when programming errors are detected by a processor during execution.

program communication block (PCB). IMS control block that describes an application program's interface to an IMS database or, additionally, for message processing and batch message processing programs, to the source and destination of messages. See also program specification block (PSB).

program compression. An operation performed by program control to relieve space in the DSA during a short-on-storage condition. The PPT is searched to identify programs that have been dynamically loaded and are currently not in use. If a program is not in use, the space it occupied is reclaimed.

program isolation (PI). An IMS facility that protects all activity of an application program from any other active application program until that application program indicates, by reaching a syncpoint, that the data it has modified is consistent and complete.

PRTYAGE. System initialization parameter used to specify the number of milliseconds to be used in the priority aging algorithm for incrementing the priority of a task. The default is 32 768 milliseconds. See the CICS System Definition Guide for more information.

PRVMOD. System initialization parameter used to specify the names of those modules that are not to be used from the LPA. See the CICS System Definition Guide for more information.

PSB directory (PDIR). A list or directory of program specification blocks (PSBs) that define for DL/I the use of databases by application programs. It contains one entry for each PSB to be used during CICS execution, and is loaded during initialization. Each entry contains the size of the control block, the status, the storage location (if in storage), and the DASD address of the PSB in the ACBLIB. It is generated using DFHDLPSB macros. It contains entries defining each PSB to be accessed using local DL/I, and also entries for remote PSBs, to which requests are function-shipped using remote DL/I.

PSBCHK. System initialization parameter used to request DL/I security checking of a remote terminal initiating a transaction with transaction routing. This parameter is applicable only if the local CICS-DL/I interface is being used. The default is to have the remote link checked but no check made against the remote terminal.

PSBPL. System initialization parameter used to specify the size of the PSB pool in 1024-byte blocks for local CICS-DL/I interface support. This parameter is applicable only if the local CICS-DL/I interface is being used. The default is four blocks.

pseudoconversational. A type of CICS application design that appears to the user as a continuous conversation, but that consists internally of multiple tasks—also called "transaction-oriented programming."

purge. The abending of a task by task control to alleviate a short-on-storage condition.

PUT. Program update tape.

PVDELAY. System initialization parameter used to define how long entries can remain in the PV signed-on-from list on the remote system. The default is 30 minutes. See persistent verification. See the CICS System Definition Guide for more information.

Q

QSAM. Queued sequential access method.

quasi-reentrant. Applied to a CICS application program that is serially reusable between entry and exit points because it does not modify itself or store data within itself between calls on CICS facilities.

queue. A line or list formed by items in a system waiting for service; for example, tasks to be performed, or messages to be transmitted in a message-switching system. In CICS, the transient data and temporary storage facilities store data in queues.
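As a sketch of the pseudoconversational design described above, a task typically ends by naming the transaction that is to continue the conversation. The transaction name NEXT and the COMMAREA layout below are hypothetical.

   * Sketch only: end this task, preserving state in a
   * COMMAREA. CICS starts transaction NEXT with that
   * COMMAREA when the user next presses an AID key.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. PSCDEMO.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  WS-COMMAREA.
        05  WS-STATE   PIC X(8).
    PROCEDURE DIVISION.
        EXEC CICS RETURN
             TRANSID('NEXT')
             COMMAREA(WS-COMMAREA)
             LENGTH(8)
        END-EXEC.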
request parameter list (RPL). In VTAM, a control block that contains the parameters necessary for processing a request for data transfer, for connecting or disconnecting a terminal, or for some other operation.

requested reset statistics. CICS statistics that the user has asked for by using the appropriate EXEC CICS or CEMT commands, which cause the statistics to be written to the SMF data set immediately. Requested reset statistics differ from requested statistics in that the statistics counters are reset, using an EXEC CICS or CEMT command.

requested statistics. CICS statistics that the user has asked for by using the appropriate EXEC CICS or CEMT commands, which cause the statistics to be written to the SMF data set immediately, instead of waiting for the current interval to expire. Contrast with requested reset statistics.

residence mode (RMODE). Attribute of a program indicating where it can reside, that is, either above or below the 16MB line.

resource. Any facility of the computing system or operating system required by a job or task, including main storage, input/output devices, the processing unit, data sets, and control or processing programs.

Resource Access Control Facility (RACF). An IBM licensed product that provides for access control by identifying and verifying users to the system, authorizing access to protected resources, logging detected unauthorized attempts to enter the system, and logging detected accesses to protected resources.

resource control table (RCT). A control table that defines the relationship between CICS transactions and DB2 resources. For details, refer to the DB2 Version 2 Administration Guide.

resource measurement facility (RMF). An IBM program that collects system-wide data describing the processor activity (WAIT time), I/O activity (channel and device utilization), main storage activity (demand and swap paging statistics), and system resources manager (SRM) activity (workload). RMF produces two types of report: system-wide reports and address-space reports.

response time. The elapsed time from entry of the last input message segment to the first response segment.

restart. Resumption of operation after recovery. The ability to restart requires knowledge of where to start and the ability to start at that point.

restart data set (RDS). A VSAM KSDS used only during emergency restart. The RDS temporarily holds the backout information read from the CICS system log. This allows CICS to be restored to a stable state and to be restarted following an abrupt termination.

RMODE. In MVS, an attribute that specifies the residence mode of the load module produced by the linkage editor. Unless a program is link-edited with RMODE(24), CICS loads it above the 16MB line if possible.

RMTRAN. System initialization parameter used with XRF to specify the name of the transaction that you want an alternate CICS to initiate when logged-on class 1 terminals are switched following a takeover. This parameter is applicable only on an alternate CICS region.

rollback. A programmed return to a prior checkpoint. In CICS, the cancellation by an application program of the changes it has made to all recoverable resources during the current logical unit of work.

rotational position sensing (RPS). A feature that permits a disk storage device to disconnect from a block multiplexer channel (or its equivalent), allowing the channel to service other devices on the channel during positional delay.

RPL. See request parameter list.

RPS. See rotational position sensing.

RSD (restart data set). The direct-access data set used to contain the information necessary to restart CICS.

S

SAM. Sequential access method.

sample statistics program (DFH0STAT). IBM-supplied batch program that provides information that is useful in calculating the storage requirements of a CICS system, for example, the sizes of the dynamic storage areas.

SAS (single address space). Single CICS region; usually used when contrasting with MRO.

scheduler work area (SWA). An element of the CICS address space. The SWA is made up of subpools 236 and 237, which contain information about the job and the step itself. Almost anything that appears in the job stream for the step creates some kind of control block in this area.

SCS. SNA character string.

SCS. System initialization parameter used to specify how much of the dynamic storage area (DSA) you want CICS to regard as the DSA storage cushion. The default is 64KB.

SDB. Structured database.

SDF. Screen Definition Facility. An online application development program product used to define or edit BMS maps interactively.

service elements. The discrete hardware and software products that provide a terminal user with processing ability.

session recovery. The process in which CICS switches active sessions on class 1 terminals to backup sessions or reestablishes service on class 2 terminals.

short-on-storage (SOS). The condition in CICS that occurs when requests for storage from the dynamic storage areas exceed available storage. CICS cannot satisfy these requests, or can satisfy them only by using some of the storage cushions, even when all programs that are eligible for deletion, and are not in use, have been deleted. See also storage cushion and program compression.

single threading. The execution of a program to completion. Processing of one transaction is completed before another transaction is started. (Compare this with multithreading.)

SIP. CICS system initialization program.

SIT. System initialization parameter used to specify the suffix, if any, of the system initialization table (SIT) that you want CICS to load at the start of initialization. If you omit this parameter, CICS loads the pregenerated, default SIT, DFHSIT$$.

startup. The operation of starting up CICS by the system operator.

startup jobstream. A set of job control statements used to initialize CICS.

statistics. System statistics are accumulated continually by CICS management programs in CICS system tables during the execution of CICS. System statistics can be captured and recorded, either on request or automatically at intervals, by any operator whose security code allows access to such information. In addition, system statistics are recorded on normal termination of the system. See automatic statistics, unsolicited statistics, end-of-day statistics, requested statistics, and requested reset statistics.

statistics utility program (DFHSTUP). Provides a summary report facility that can be used to interpret CICS statistics.

STATRCD. System initialization parameter used to set the statistics recording status at CICS initialization. The default is OFF; CICS interval and unsolicited statistics are not collected. End-of-day statistics are collected at the logical end of day and on shutdown. See the CICS System Definition Guide for more information.
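As a hedged illustration of the requested and requested reset statistics entries above, statistics can be written to the SMF data set on demand from a master terminal. The following CEMT sketch is indicative only; keyword availability varies by release, so check the CICS Supplied Transactions manual.

   CEMT PERFORM STATISTICS RECORD ALL

Adding the RESETNOW option (where supported) resets the statistics counters as they are recorded, which is what distinguishes requested reset statistics from requested statistics.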
storage. A functional unit into which data can be placed and from which it can be retrieved. See main storage, real storage, virtual storage.

storage accounting area (SAA). A field at the start of a CICS storage area that describes the area and enables CICS to detect some storage violations. Each CICS storage area has either an SAA or a storage check zone.

storage control. The CICS element that acquires working storage areas.

storage cushion. A noncontiguous area of storage in the dynamic storage areas reserved for use by CICS when processing a short-on-storage condition.

storage key. An indicator associated with each 4KB block of storage that is available in the CICS region. Access to the storage is controlled by key-controlled storage protection.

storage protection. An optional facility in CICS/ESA 3.3 that enables users to protect CICS code and control blocks from being overwritten inadvertently by application programs.

storage protection key. An indicator that appears in the current program status word whenever an associated task has control of the system. This indicator must match the storage keys of all main storage blocks that the task is to use.

storage violation. An error in a storage accounting chain in the dynamic storage area. A storage violation can be detected by the storage manager domain.

storage violation dump. A formatted dump taken as a result of a storage error detected by the storage control program, including a dump of the dynamic storage area.

subpool 229. An element of the CICS address space used primarily for the staging of messages. JES uses this area for messages to be printed on the system log and JCL messages, as well as SYSIN/SYSOUT buffers.

subpool 230. An element of the CICS address space used by VTAM for inbound message assembly for segmented messages. Data management keeps data extent blocks (DEBs) here for any opened data set.

SUBTSKS. System initialization parameter used to define the number of task control blocks (TCBs) you want CICS to use for running tasks in concurrent mode. A concurrent mode TCB allows CICS to perform management functions such as system subtasks. The default is none. See the CICS System Definition Guide for more information.

summary report. A statistics report produced by the CICS statistics utility program (DFHSTUP). It summarizes the interval, unsolicited, requested reset, and end-of-day statistics on an applid by applid basis.

surveillance. In XRF, a series of processes by which the alternate CICS system monitors the active CICS system for a lapse of activity in order to detect potential failure conditions requiring a takeover. The active and alternate CICS systems use the CAVM surveillance mechanism to monitor each other's well-being.

surveillance signal. In XRF, the signal continuously written to the CAVM data sets by the active and alternate CICS systems to inform each other of their states.

switch data traffic (SWDT). In an XRF configuration, a VTAM session control request sent to the NCP that initiates the switch of LU sessions from backup XRF session status to active XRF session status. The former XRF session, if still 'active', is terminated with an UNBIND. The switch request is issued to VTAM from the application program (alternate CICS system). VTAM passes the request to the boundary network node, where the sessions are actually switched by NCP.

switched connection. A connection that is established by dialing.

synchronization. The stage of the XRF process when the active and the alternate are both initialized, are aware of each other's presence, and the alternate is ready to begin tracking.

synchronization level (sync level). The level of synchronization (0, 1, or 2) established for an APPC session between intercommunicating CICS transactions. Level 0 gives no synchronization support, level 1 allows the exchange of private synchronization requests, and level 2 gives full CICS synchronization support with backout of all updates to recoverable resources if failure occurs.

synchronization phase. The XRF phase, immediately after initialization, when the alternate system builds the CICS control blocks to mirror those in the active system.

syncpoint. A logical point in execution of an application program where the changes made to the databases by the program are consistent and complete and can be committed to the database. The output, which has been held up to that point, is sent to its destination(s), the input is removed from the message queues, and the database updates are made available to other applications. When a program terminates abnormally, CICS recovery and restart facilities do not back out updates prior to the last completed syncpoint. A syncpoint is created by any of the following:
v A DL/I CHECKPOINT command or CHKP call
v An EXEC CICS SYNCPOINT command
v The end of a task
See also unit of work (UOW).

SYSEVENT data. A class of monitoring data that provides a special kind of transaction timing information.

SYSGEN. System generation.

system. In CICS, an assembly of hardware and software capable of providing the facilities of CICS for a particular installation.

system activity keypoint. A keypoint written to the system log automatically while CICS is running normally. (See also activity keypoint.)

system dump (SDUMP). In CICS, an MVS SDUMP, which may be formatted with a CICS-supplied IPCS exit to show all control blocks and storage areas in the CICS region.

system dump code. A code that identifies a user-defined entry in the system dump table. Each entry defines system actions and dump attributes to be associated with a code. Codes can be up to 8 characters in length. A code consisting of the last six characters of a CICS message number describes actions to be taken and the dump to be produced when that message is issued. For example, a dump code table entry for ZZxxxx describes the system dump to be produced with message DFHZZxxxx, and overrides the documented system action for DFHZZxxxx. Every system dump code can be invoked by EXEC CICS PERFORM DUMP SYSTEM commands and the corresponding CEMT transactions. For the definition of system dump code entries, see system dump table.

system generation (SYSGEN). In CICS, the process of creating a particular system tailored to the requirements of a data processing installation.

system initialization table (SIT). A CICS table that contains information to initialize and control system functions, module suffixes for selection of user-specified versions of CICS modules and tables, and information used to control the initialization process. You can generate several SITs, using the resource definition macro DFHSIT, and then use the SIT system initialization parameter to select the one that best meets your current requirements at initialization time.

system log. The (only) journal (identification='01') that is used by CICS to log changes made to resources for the purpose of backout on emergency restart.

system program. A program providing services in general support of the running of a system.

system queue area (SQA). A major element of MVS/ESA virtual storage below the 16MB line. This storage area contains tables and queues relating to the entire system. Its contents are highly dependent on the configuration and job requirements at installation. The equivalent area above the 16MB line is the extended system queue area (ESQA).

system recovery table (SRT). A table listing the ABEND or abnormal condition codes that CICS will intercept.

system support program. A program product that defines and generates an NCP and provides it with utility programs.

Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information units through and controlling the configuration and operation of networks.

T

takeover. In XRF, the shift of the workload from the active to the alternate CICS system, and the switching of resources needed for this to happen.

takeover phase. In XRF, the replacement of the failing active CICS system by the alternate CICS system as the session partner of the CICS users.

takeover time. In XRF, the elapsed time between the occurrence of a failure, the completion of switching all terminals to the alternate CICS system, and the running of the first user transaction.

task. A unit of work for the processor; therefore the basic multiprogramming unit under the control program. Under CICS, the execution of a transaction for a particular user. Contrast with transaction.

task control. The CICS element that controls all CICS tasks.

task control block (TCB). An MVS control block. A TCB is created for each MVS task. Several TCBs are created for CICS management programs. All CICS application programs and non-reentrant CICS code run under a single quasi-reentrant TCB.

task switching. Overlapping of I/O operations and processing between several tasks.

TCA. Task control area.
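As an illustration of the syncpoint entry above, an application can commit or back out the current unit of work explicitly. The fragment below is a sketch only; WS-ERROR-FOUND is a hypothetical flag.

   * Sketch only: commit the current unit of work, or
   * back out its updates if an error has been detected.
        IF WS-ERROR-FOUND = 'Y'
            EXEC CICS SYNCPOINT ROLLBACK END-EXEC
        ELSE
            EXEC CICS SYNCPOINT END-EXEC
        END-IF.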
TCAM. Telecommunications access method.

TCP. System initialization parameter used to include the pregenerated non-VTAM terminal control program, DFHTCP.

TCT. System initialization parameter used to indicate which terminal control table, if any, is to be loaded.

TCTTE. Terminal control table terminal entry.

TCTUA. Option of the ADDRESS command, used also to pass information between application programs, but only if the same terminal is associated with the application programs involved (which can be in different tasks). The pointer reference is set to the address of the TCTUA. If a TCTUA does not exist, the pointer reference is set to X'FF000000'. The data area contains the address of the TCTUA of the principal facility, not that for any alternate facility that may have been allocated.

TCTUAKEY. System initialization parameter used to specify the storage key for the TCTUAs if CICS is operating with storage protection. The default is user-key: a user program executing in any key can modify the TCTUA. See the CICS System Definition Guide for more information.

TCTUALOC. System initialization parameter used to indicate where terminal user areas (TCTUAs) are to be stored. The default is below the 16MB line. See the CICS System Definition Guide for more information.

TD. CICS transient data. Also, a system initialization parameter used to specify the number of VSAM buffers and strings to be used for intrapartition transient data. See the CICS System Definition Guide for more information.

Telecommunications Access Method (TCAM). An access method used to transfer data between main storage and remote or local storage.

Teleprocessing Network Simulator (TPNS). A program used to test new functions before they encounter production volumes.

temporary storage (TS). A CICS facility for temporarily saving data in the form of sequential queues. A TS queue is held in main storage or on a VSAM data set on DASD. All queues not in main storage are in a single VSAM data set. A task can create a TS queue with a name selected by the task. The queue exists until deleted by a task (usually, but not necessarily, the task that created it). Compare transient data. Possible uses of temporary storage include storage of screen images for terminal paging and storage of incomplete data for suspended tasks. In general, TS queues do not require resource definition, but see temporary storage table (TST).

terminal. In CICS, a device, often equipped with a keyboard and some kind of display, capable of sending and receiving information over a communication channel. Also, a point in a system or communication network at which data can either enter or leave.

terminal control. The CICS modules that control all CICS terminal activity.

terminal control program (TCP). The program that controls all CICS terminal activity.

terminal control table (TCT). CICS control table retained to define non-VTAM terminal networks.

terminal input/output area (TIOA). Area that is set up by storage control and chained to the terminal control table terminal entry (TCTTE) as needed for terminal input/output operations.

terminal list table (TLT). CICS control table that allows terminal or operator identifications, or both, to be grouped logically. See supervisory terminal functions.

terminal paging. A set of commands for retrieving pages of an oversize output message in any order.

terminal-owning region (TOR). A CICS region that owns most or all of the terminals defined locally. See also application-owning region (AOR) and data-owning region (DOR).

termination phase. The XRF phase in which the XRF complex returns to two separate and independent environments and all XRF activity in the alternate system stops.

thread. In CICS, a link between a CICS application and DBCTL. To DBCTL, a thread represents the CICS transaction that has issued a DL/I request. The system initialization parameter DLTHRED specifies the number of threads provided through the CICS local DL/I interface.

threading. The process whereby various transactions undergo concurrent execution.

throughput. The total data processing work successfully completed during an evaluation period.

throughput rate. The data processing work successfully completed per unit of time.

TIOA. Terminal input/output area. TIOAs are acquired and chained to the TCTTE as needed for terminal input/output operations. The field TCTTESC addresses the first terminal-class storage area obtained for a task (the beginning of the chain) and the field TCTTEDA gives the address of the active TIOA. CICS terminal control passes data received from a terminal to the CICS application program in the TIOA, and writes data from the TIOA to the terminal.
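As a sketch of the temporary storage facility described above, the following COBOL fragment writes a screen image to a TS queue and reads the first item back, as might be done for terminal paging. The queue name PAGEQ001 is hypothetical.

   * Sketch only: save a 1920-byte screen image in a TS
   * queue, then retrieve item 1 from the same queue.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. TSDEMO.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    77  WS-LEN      PIC S9(4) COMP VALUE 1920.
    77  WS-ITEM     PIC S9(4) COMP VALUE 1.
    01  WS-SCREEN   PIC X(1920).
    PROCEDURE DIVISION.
        EXEC CICS WRITEQ TS
             QUEUE('PAGEQ001')
             FROM(WS-SCREEN)
             LENGTH(WS-LEN)
        END-EXEC.
        EXEC CICS READQ TS
             QUEUE('PAGEQ001')
             INTO(WS-SCREEN)
             LENGTH(WS-LEN)
             ITEM(WS-ITEM)
        END-EXEC.
        EXEC CICS RETURN END-EXEC.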
user-key storage. Storage allocated to a user application program and its associated data areas. It can be accessed and modified by user applications and by CICS. See CICS-key, storage protection.

V

validity of reference. Direct reference to the required pages, without intermediate storage references that retrieve unwanted data.

virtual machine (VM). A functional simulation of a computer and its associated devices. Contrast with real machine.

virtual storage (VS). (ISO) The notional storage space that may be regarded as addressable main storage by the user of a computer system in which virtual addresses are mapped into real addresses. The size of virtual storage is limited by the addressing scheme of the computing system and by the amount of auxiliary storage available, and not by the number of main storage locations.

Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and variable-length records on direct access devices.

VTAM. System initialization parameter used to indicate whether you want to use the VTAM access method. The default is YES. See the CICS System Definition Guide for more information.

VTAM Performance Analysis and Reporting System (VTAMPARS). A program offering that provides information on network traffic through the VTAM component of the network.

W

warm keypoint. A keypoint written to the restart data set during controlled shutdown (after all system activity has ceased). During a subsequent warm restart, information in the warm keypoint is used to reestablish system tables to the status they had at controlled shutdown. See also keypoint.

warm start. Initialization of a CICS system using selected system status information obtained during the previous termination. Automatic restart after a normal (controlled) shutdown.

working set. The set of a user's pages that must be active in order to avoid excessive paging. The amount of real storage required in order to avoid excessive paging.
Index 677
intrapartition buffer statistics 491, 493 LLA (library lookaside) 197, 300 messages
intrapartition transient data reports 293, loader and program storage switching (CMSG transaction) 321
327 DFH0STAT report 543 TCAM unsolicited input 214
IOAREALEN operand 201, 310 loader statistics 49 MLPA (modified link pack area) 618
IPCS (interactive problem control local shared resources (LSR) 240 MNEVE 69
system) 26, 30 local system queue area (LSQA) 622 mode TCBs 47
ISC (intersystem communication) 305 locking model 247 modem, IBM 586X 33
2MB LPA 618 log manager modified link pack area (MLPA) 618
and MRO 284, 305, 321 average blocksize 273 modules
implementation 284 log stream management 297
mirror transactions 306 statistics 413 shared 341
sessions 209 log stream statistics 55 monitoring
splitting 188 logging control table (MCT) 71
ISC/IRC (intersystem/interregion after recovery 329, 334 domain statistics 428
communication) exceptional incidents 8 event monitoring point (EMP) 69
attach time entries 410 logging and journaling generalized trace facility (GTF) 29
ISC/IRC attach time statistics 410 HIGHOFFLOAD threshold 276 monthly 15
ISC/IRC system and mode entry integrated coupling migration facility other CICS data 26
statistics 57, 396 (ICMF) 271 performance class data 65
log streams per structure 274 purpose 65
LOWOFFLOAD threshold 276 record types 65
J monitoring 271
staging data sets 278
Resource Measurement Facility
(RMF) 27
Java 255
logical recovery 328 techniques 11, 12
Java applications storage 257
logon/logoff requests 210 monitoring CICS-JVM interface 261
Java language support 255
logstreams MRO
Java Program Object with LE 259
DFH0STAT report 574 and XCF 286
Java program objects 255
LOWOFFLOAD threshold in MVS sysplex environment 286
Java virtual machine programs 259
HIGHOFFLOAD threshold 276 MRO (multiregion operation) 284, 305
job initiators 193
LPA (link pack area) 27, 618 2MB LPA 618
journaling
LSQA (local system queue area) 622 and ISC 286, 305, 321
HIGHOFFLOAD threshold 276
LSR (local shared resources) batching requests 311
integrated coupling migration facility
buffer allocation 230 cross-memory services 183, 185, 618
(ICMF) 271
buffer allocations for 236 end user information 8
log streams per structure 274
LSRPOOL parameter 231, 234 fastpath facilities 185
LOWOFFLOAD threshold 276
maximum keylength for 239 function shipping 310, 312
staging data sets 278
resource percentile (SHARELIMIT) IEAICS parameters 68
journalname
for 239 sessions 206
statistics 55, 411
to create VSAM files, data tables, LSR splitting 188
journalname statistics 55
pools 234 transaction routing 285, 286, 310
journalnames
VSAM considerations 225
DFH0STAT report 573 MROBTCH, system initialization
VSAM local 240
journals parameter 311
VSAM string settings for 238
buffers full 19 MROLRM, system initialization
LSR pools
user 330 parameter 312
DFH0STAT report 587
JVM CPU overhead 259 MSGINTEG operand 208
LSRpool file statistics 426
JVM Just-in-time compiler 259 multiregion operation (MRO) 8
LSRpool statistics 416
JVM storage usage 260 MVS
LSRPOOL statistics 56
JVM TCB mode 260 COBOL Version 4 316
LU6.1 397
LU6.2 397 common system area (CSA) 616
cross-memory services 305, 306
K data collection
kernel storage 639 M ACF/VTAM 32
KEYLENGTH parameter 239 main temporary storage 321, 322 GTF 26
keypoint frequency, AKPFREQ 279 map alignment 298 IPCS 26
map set suffixing 315 SMF 152
master terminal transactions (CEMT) 13 extended common system area
L MAXACTIVE, transaction class 288 (ECSA) 616
language environment (LE) 318 maximum tasks HPO 207, 212
LE runtime options for Java 256 MXT, system initialization IEAICS member 68
limit conditions 161 parameter 287 library lookaside 300
line-transmission faults 474 times limit reached 19 link pack area (LPA) 285
link pack area (LPA) 27, 30, 285, 341 MAXKEYLENGTH parameter 239 LLA (library lookaside) 197
CLPA (create link pack area) 618 MAXNUMRECS parameter 244 load macro 316
ELPA (extended link pack area) 297 MCT (monitoring control table) 71 NetView 33
MLPA (modified link pack area) 618 measurement nucleus and extended nucleus 616
PLPA (pageable link pack area) 618 full-load 171 program loading subtask 158, 159
LISTCAT (VSAM) 26, 34 single-transaction 174 QUASI task 163
Index 679
performance costs (continued) real storage 283 RUWAPOOL system initialization
transient data 649 checklist 183 parameter 318
variable 641 constraints 168
WRITE 646
Performance Reporter
isolation 190
working set 162
S
and exceptions 152 receive-any S40D abend 188, 192, 623
periodic reports 15 control element (RACE) 205 S80A abend 188, 192, 621
physical I/Os, extra 233 input area (RAIA) 203, 205 S822 abend 188, 192
PL/I pool (RAPOOL) 163, 203, 204 scheduler work area (SWA) 622
application programs 298 requests 205 SDFHENV dataset 259
Release 5.1 300 RECEIVESIZE attribute 209 SDSA subpool 626
shared library 317 record-level sharing (RLS) 251 Secure Sockets Layer for Web
planning review 13 recovery security 224
PLPA (pageable link pack area) 618 SENDSIZE attribute 209
logical 327, 328
polling, NPDELAY 214 sequential query language (SQL) 36
options 326
pool threads for DB2 267 serial functions 163
physical 326
post-development review 8 service classes 128
recoverable resources 334
PPGRT parameter 190 service definitions 127
recovery manager
PPGRTR parameter 190 service policies 128
DFH0STAT report 612
prefixed storage area (PSA) 618 set, working 162
statistics 445
PRIORITY CICS attachment facility shared resources
recovery manager statistics
parameter 269 modules 341
statistics 56 nucleus code 297
PRIORITY operand 291 RECOVSTATUS operand 330
private area 619 PL/I library 317
regions shared ts queue server
problem diagnosis 169
exit interval (ICV or TIME) 194 coupling facility statistics 503
procedures for monitoring 12
increasing size 192 shared TS queue server statistics 64
processor cycles 162
terminal-owning 286 SHARELIMIT parameter 239
processor cycles checklist 184
reports short-on-storage (SOS) 8
processor usage 169
DASD activity in RMF 172 shutdown
program
system activity in RMF 172 AIQMAX 342
statistics 53, 431
program autoinstall request/response unit (RU) 203 CATA 342
DFH0STAT report 577 requested reset statistics 24 CATD 342
statistics 430 requested statistics 24 signon 293
program totals report requirements definition 7 single-transaction measurement 174
DFH0STAT report 556 RESIDENT option in COBOL 316 CICS auxiliary trace 175
programming considerations 315 resident programs 299 SIT (system initialization table) 26
programs resource contention 164 SMF
COBOL 298 resource measurement facility SMSVSAM, Type 42 records 254
DFH0STAT report 554 (RMF) 172 SMSVSAM
isolation (PI) trace 35 Resource Measurement Facility SMF Type 42 records 254
nonresident 299 (RMF) 27 SNA (Systems Network Architecture)
PL/I 298 resource security level checking 334 message chaining 209
putting above 16MB line 300 TIOA for devices 201
resources
resident 299 transaction flows 208
local shared (LSR) 225, 240
storage layout 299 SNT (signon table) 293
manager (SRM) 29
transient 299 OPPRTY 291
nonshared (NSR) 225, 235, 237
programs by DSA and LPA software constraints 163
recoverable 334
DFH0STAT report 559 SOS (short-on-storage)
shared (LSR) 236, 238, 239
PRTYAGE, system initialization caused by subpool storage
response time 156
parameter 291 fragmentation 636
contributors 23
PRVMOD, system initialization CICS constraint 158
DASD 6
parameter 298 end user information 8
internal 157
PSA (prefixed storage area) 618 LE run time options for AMODE(24)
network 6
PURGETHRESH, transaction class 289 programs 159, 319
system 6
purging of tasks 159 limit conditions 161
review process 13 review of occurrences 18
PVDELAY, system initialization RLS using FILE definition 253
parameter 410 use of temporary data sets 158
RMF (Resource Measurement splitting resources
PWSS parameter 190
Facility) 27 independent address spaces 286
introduction 13 online systems 284
R operations 69 using ISC 188
RAIA (receive any, input area) 203 periodic use 15 using MRO 188, 286
RAMAX, system initialization SYSEVENT information 67 SQA (system queue area) 617
parameter 203 transaction reporting 67 SQL (sequential query language)
RAPOOL, system initialization RMF workload manager data activity 36
parameter 204 explanation of 135 SRM (system resources manager)
RDSA subpool 626 RU (request/response unit) 203 activities traced by GTF 29
Index 681
temporary storage 163, 321 (continued)
  data sharing 325
  DFH0STAT report 561
  main 321, 322
  performance improvements
    multiple VSAM buffers 323, 327
    multiple VSAM strings 323, 328
  requests on cold-started system 325
  secondary extents 322
  statistics 49, 468
  summary of performance variables 324
temporary storage queues
  DFH0STAT report 566
temporary storage requests
  75 percent rule 325
terminal autoinstall
  DFH0STAT report 577
terminal control
  full scans 195
  region exit interval (ICV or TIME) 194
  statistics 474
terminal input/output area (TIOA) 202
terminal statistics 57
terminals
  automatic installation 216
  compression of output data streams 215
  concurrent logon/logoff requests 210
  DFH0STAT report 549
  HPO with VTAM 207
  input/output area (SESSIONS IOAREALEN) 310
  input/output area (TIOA) 201, 208
  input/output area (TYPETERM IOAREALEN) 201
  message block sizes 169
  minimizing SNA transaction flows 208
  negative poll delay (NPDELAY) 214
  receive-any input areas (RAMAX) 203
  receive-any pool (RAPOOL) 204
  scan delay (ICVTSD) 211
  use of SNA chaining 209
terminal-owning region (TOR) 286
TERMPRIORITY operand 291
testing phase 8
The CICS monitoring facility 24
The sample statistics program (DFH0STAT) 25
THREADLIMIT parameter 268
THREADWAIT parameter 267
time stamp, definition for monitoring 74
timings
  transaction initialization 644
TIOA (terminal input/output area) 202
Tivoli Performance Reporter for MVS 113
Tivoli Performance Reporter for OS/390 31
tools for monitoring 23
TOR (terminal-owning region) 286
TPNS (teleprocessing network simulator) 37
TPNS (Teleprocessing Network Simulator) 20
trace
  auxiliary 24, 171, 175
  CICS facility 26
  GTF 26, 29, 30
  internal 24
  table (TRT) 333
  VTAM 32
trade-offs, acceptable 177
TRANISO, system initialization parameter 294
transaction
  CATA 218
  CATD 218
  CEMT 13
  CMSG 321
  CSAC 18
  definition 3
  faults 474
  looping 302
  profile 5
  routing 284, 305
  security 334
  volume 5
  workload 5
transaction class
  statistics 478
transaction classes
  MAXACTIVE 288
  PURGETHRESH 289
transaction classes DFHTCLSX and DFHTCLQ2
  effects of 309
transaction data
  initialization 644
transaction dump
  statistics 376
transaction isolation and applications
  storage, transaction isolation 336
transaction isolation and real storage
  transaction isolation 301
transaction manager
  DFH0STAT report 526
  statistics 482
transaction manager statistics 46
transaction totals
  DFH0STAT report 552
transactions
  DFH0STAT report 551
transient data 163, 326
  concurrent input/output operations 323, 328
  DFH0STAT report 569
  extrapartition 329
  indirect destinations 330
  intrapartition 327
  performance improvements
    multiple VSAM buffers 323, 327
    multiple VSAM strings 323, 328
transient data queue totals
  DFH0STAT report 572
transient data queues
  DFH0STAT report 571
transient data statistics 50
transient programs 299
TRIGGERLEVEL operand 330
TRT (trace table) 333
TS, system initialization parameter 50
tuning 177
  CICS under MVS 187
  DASD 199
  I/O operations 199
  reviewing results of 178
  trade-offs 177
  VSAM 225, 340
U
UDSA subpool 626
unaligned maps 298
unsolicited items
  input messages 214
  statistics 24
user domain
  statistics 499
user domain statistics 50
user options
  event monitoring points 69
  journals 330
USERMOD 297
USRDELAY, system initialization parameter 410
V
violation of storage 161
virtual storage 283
  checklist 182
  constraints 167
  insufficient 193
  internal limits 169
VLF (virtual lookaside facility) 197
VNCA (VTAM node control application) 33
volume of transactions 5
VPACING operand 302
VSAM 34
  16MB line 624
  AIX considerations 233
  buffer allocations for LSR 236
  buffer allocations for NSR 235
  buffers and strings 321
  calls 312
  catalog 35, 231, 331
  data sets 171, 284, 321
  definition parameters 234
  DSN sharing 232
  I/O 243
  LISTCAT 26, 34
  local shared resources (LSR) 240
  maximum keylength for LSR 239
  multiple buffers 323, 327
  multiple strings 323, 328
  number of buffers 230
  resource percentile (SHARELIMIT) for LSR 239
  resource usage (LSRPOOL) 234
  restart data set 218
  shared resources 153
  shared resources statistics 416
  string settings for LSR 238
  string settings for NSR 237
W
WEB= specifying Web domain 339
weekly monitoring 14
working set 162
workload 6
workload management in a sysplex 123
workload manager (MVS) 123
workload manager requirements 125
X
XRF (extended recovery facility)
  alternate system 191
  restart delay 219
  takeover 159, 217
XTCTOUT, global user exit (TCAM) 216
XZCOUT1, global user exit (VTAM) 216
Sending your comments to IBM
If you especially like or dislike anything about this book, please use one of the
methods listed below to send your comments to IBM.
Feel free to comment on what you regard as specific errors or omissions, and on
the accuracy, organization, subject matter, or completeness of this book.
Please limit your comments to the information in this book and the way in which
the information is presented.
When you send comments to IBM, you grant IBM a nonexclusive right to use or
distribute your comments in any way it believes appropriate, without incurring
any obligation to you.
You can send your comments to IBM in any of the following ways:
• By mail, to this address:
Information Development Department (MP095)
IBM United Kingdom Laboratories
Hursley Park
WINCHESTER,
Hampshire
United Kingdom
• By fax:
– From outside the U.K., after your international access code use
44–1962–870229
– From within the U.K., use 01962–870229
• Electronically, use the appropriate network ID:
– IBM Mail Exchange: GBIBM2Q9 at IBMMAIL
– IBMLink™: HURSLEY(IDRCF)
– Internet: idrcf@hursley.ibm.com