CICS Performance Guide
SC33-1699-03
CICS® Transaction Server for OS/390®
Note!
Before using this information and the product it supports, be sure to read the general information under “Notices” on
page xiii.
Contents v
Isolating (fencing) real storage for CICS (PWSS and PPGRTR) . . . 190
Recommendations . . . 191
How implemented . . . 191
How monitored . . . 191
Increasing the CICS region size . . . 192
How implemented . . . 192
How monitored . . . 192
Giving CICS a high dispatching priority or performance group . . . 192
How implemented . . . 193
How monitored . . . 193
Using job initiators . . . 193
Effects . . . 194
Limitations . . . 194
How implemented . . . 194
How monitored . . . 194
Region exit interval (ICV) . . . 194
Main effect . . . 195
Secondary effects . . . 195
Where useful . . . 196
Limitations . . . 196
Recommendations . . . 196
How implemented . . . 197
How monitored . . . 197
Use of LLA (MVS library lookaside) . . . 197
Effects of LLACOPY . . . 198
The SIT Parameter LLACOPY . . . 198
DASD tuning . . . 199
Reducing the number of I/O operations . . . 199
Tuning the I/O operations . . . 199
Balancing I/O operations . . . 200

Chapter 16. Networking and VTAM . . . 201
Terminal input/output area (TYPETERM IOAREALEN or TCT TIOAL) . . . 201
Effects . . . 201
Limitations . . . 202
Recommendations . . . 202
How implemented . . . 203
How monitored . . . 203
Receive-any input areas (RAMAX) . . . 203
Effects . . . 203
Where useful . . . 204
Limitations . . . 204
Recommendations . . . 204
How implemented . . . 204
How monitored . . . 204
Receive-any pool (RAPOOL) . . . 204
Effects . . . 205
Where useful . . . 205
Limitations . . . 205
Recommendations . . . 206
How implemented . . . 206
How monitored . . . 206
High performance option (HPO) with VTAM . . . 207
Effects . . . 207
Limitations . . . 207
Recommendations . . . 207
How implemented . . . 207
How monitored . . . 207
SNA transaction flows (MSGINTEG, and ONEWTE) . . . 208
Effects . . . 208
Where useful . . . 208
Limitations . . . 208
How implemented . . . 209
How monitored . . . 209
SNA chaining (TYPETERM RECEIVESIZE, BUILDCHAIN, and SENDSIZE) . . . 209
Effects . . . 209
Where useful . . . 210
Limitations . . . 210
Recommendations . . . 210
How implemented . . . 210
How monitored . . . 210
Number of concurrent logon/logoff requests (OPNDLIM) . . . 210
Effects . . . 211
Where useful . . . 211
Limitations . . . 211
Recommendations . . . 211
How implemented . . . 211
How monitored . . . 211
Terminal scan delay (ICVTSD) . . . 211
Effects . . . 212
Where useful . . . 213
Limitations . . . 213
Recommendations . . . 213
How implemented . . . 214
How monitored . . . 214
Negative poll delay (NPDELAY) . . . 214
NPDELAY and unsolicited-input messages in TCAM . . . 214
Effects . . . 214
Where useful . . . 215
Compression of output terminal data streams . . . 215
Limitations . . . 215
Recommendations . . . 215
How implemented . . . 216
How monitored . . . 216
Automatic installation of terminals . . . 216
Maximum concurrent autoinstalls (AIQMAX) . . . 216
The restart delay parameter (AIRDELAY) . . . 216
The delete delay parameter (AILDELAY) . . . 217
Effects . . . 218
Recommendations . . . 218
How monitored . . . 219

| Chapter 17. CICS Web support . . . 221
| CICS Web performance in a sysplex . . . 221
| CICS Web support performance in a single address space . . . 222
| CICS Web use of DOCTEMPLATE resources . . . 222
| CICS Web support use of temporary storage . . . 223
| CICS Web support of HTTP 1.0 persistent connections . . . 223
| CICS Web security . . . 223
| CICS Web 3270 support . . . 223
| Secure sockets layer support . . . 224

Chapter 18. VSAM and file control . . . 225
VSAM considerations: general objectives . . . 225
Local shared resources (LSR) or Nonshared resources (NSR) . . . 225
Number of strings . . . 227
Size of control intervals . . . 229
Number of buffers (NSR) . . . 230
Number of buffers (LSR) . . . 230
CICS calculation of LSR pool parameters . . . 231
Data set name sharing . . . 232
AIX considerations . . . 233
Situations that cause extra physical I/O . . . 233
Other VSAM definition parameters . . . 234
VSAM resource usage (LSRPOOL) . . . 234
Effects . . . 234
Where useful . . . 234
Limitations . . . 234
Recommendations . . . 234
How implemented . . . 234
VSAM buffer allocations for NSR (INDEXBUFFERS and DATABUFFERS) . . . 235
Effects . . . 235
Where useful . . . 235
Limitations . . . 235
Recommendations . . . 235
How implemented . . . 235
How monitored . . . 236
VSAM buffer allocations for LSR . . . 236
Effects . . . 236
Where useful . . . 236
Recommendations . . . 236
How implemented . . . 236
How monitored . . . 236
VSAM string settings for NSR (STRINGS) . . . 237
Effects . . . 237
Where useful . . . 237
Limitations . . . 237
Recommendations . . . 237
How implemented . . . 237
How monitored . . . 237
VSAM string settings for LSR (STRINGS) . . . 238
Effects . . . 238
Where useful . . . 238
Limitations . . . 238
Recommendations . . . 238
How implemented . . . 238
How monitored . . . 238
Maximum keylength for LSR (KEYLENGTH and MAXKEYLENGTH) . . . 239
Effects . . . 239
Where useful . . . 239
Recommendations . . . 239
How implemented . . . 239
Resource percentile for LSR (SHARELIMIT) . . . 239
Effects . . . 239
Where useful . . . 240
Recommendations . . . 240
How implemented . . . 240
VSAM local shared resources (LSR) . . . 240
Effects . . . 240
Where useful . . . 240
Recommendations . . . 240
How implemented . . . 240
How monitored . . . 240
Hiperspace buffers . . . 240
Effects . . . 241
Limitations . . . 241
Recommendations . . . 241
How implemented . . . 241
Subtasking: VSAM (SUBTSKS=1) . . . 241
Effects . . . 242
Where useful . . . 243
Limitations . . . 243
Recommendations . . . 243
How implemented . . . 244
| How monitored . . . 244
Data tables . . . 244
Effects . . . 244
Recommendations . . . 244
How implemented . . . 245
How monitored . . . 245
| Coupling facility data tables . . . 245
| Locking model . . . 247
| Contention model . . . 247
| Effects . . . 248
| Recommendations . . . 248
| How implemented . . . 249
| How monitored . . . 249
| CFDT statistics . . . 250
| RMF reports . . . 251
| VSAM record-level sharing (RLS) . . . 251
| Effects . . . 252
| How implemented . . . 253
| How monitored . . . 254

| Chapter 19. Java program objects . . . 255
| Overview . . . 255
| Performance considerations . . . 255
| DLL initialization . . . 255
| LE runtime options . . . 256
| API costs . . . 257
| CICS system storage . . . 257
| Workload balancing of IIOP method call requests . . . 258
| CICS dynamic program routing . . . 258
| TCP/IP port sharing . . . 258
| Dynamic domain name server registration for TCP/IP . . . 258

| Chapter 20. Java virtual machine (JVM) programs . . . 259
| Overview . . . 259
| Performance considerations . . . 259
| Storage usage . . . 260
| How monitored . . . 261

Chapter 21. Database management . . . 263
DBCTL minimum threads (MINTHRD) . . . 263
Effects . . . 263
Where useful . . . 263
Limitations . . . 263
Implementation . . . 263
How monitored . . . 264
DBCTL maximum threads (MAXTHRD) . . . 264
Effects . . . 264
Where useful . . . 264
Limitations . . . 264
Implementation . . . 264
How monitored . . . 264
DBCTL DEDB parameters (CNBA, FPBUF, FPBOF) . . . 264
Where useful . . . 265
Recommendations . . . 265
How implemented . . . 266
How monitored . . . 266
CICS DB2 attachment facility . . . 266
Effects . . . 267
Where useful . . . 267
How implemented . . . 267
How monitored . . . 267
CICS DB2 attachment facility (TCBLIMIT, and THREADLIMIT) . . . 268
Effect . . . 268
Limitations . . . 268
Recommendations . . . 268
How monitored . . . 269
CICS DB2 attachment facility (PRIORITY) . . . 269
Effects . . . 269
Where useful . . . 269
Limitations . . . 269
Recommendations . . . 269
How implemented . . . 269
How monitored . . . 269

Chapter 22. Logging and journaling . . . 271
Coupling facility or DASD-only logging? . . . 271
Integrated coupling migration facility . . . 271
Monitoring the logger environment . . . 271
Average blocksize . . . 273
Number of log streams in the CF structure . . . 274
AVGBUFSIZE and MAXBUFSIZE parameters . . . 274
Recommendations . . . 275
Limitations . . . 275
How implemented . . . 276
How monitored . . . 276
LOWOFFLOAD and HIGHOFFLOAD parameters on log stream definition . . . 276
Recommendations . . . 277
How implemented . . . 278
How monitored . . . 278
Staging data sets . . . 278
Recommendations . . . 279
Activity keypoint frequency (AKPFREQ) . . . 279
Limitations . . . 280
Recommendations . . . 281
How implemented . . . 281
How monitored . . . 281
DASD-only logging . . . 281

Chapter 23. Virtual and real storage . . . 283
Tuning CICS virtual storage . . . 283
Splitting online systems: virtual storage . . . 284
Where useful . . . 285
Limitations . . . 285
Recommendations . . . 286
How implemented . . . 286
Maximum task specification (MXT) . . . 287
Effects . . . 287
Limitations . . . 287
Recommendations . . . 287
How implemented . . . 288
How monitored . . . 288
Transaction class (MAXACTIVE) . . . 288
Effects . . . 288
Limitations . . . 288
Recommendations . . . 288
How implemented . . . 289
How monitored . . . 289
Transaction class purge threshold (PURGETHRESH) . . . 289
Effects . . . 290
Where useful . . . 290
Recommendations . . . 290
How implemented . . . 290
How monitored . . . 290
Task prioritization . . . 291
Effects . . . 291
Where useful . . . 292
Limitations . . . 292
Recommendations . . . 292
How implemented . . . 293
How monitored . . . 293
Simplifying the definition of CICS dynamic storage areas . . . 293
Extended dynamic storage areas . . . 294
Dynamic storage areas (below the line) . . . 295
Using modules in the link pack area (LPA/ELPA) . . . 297
Effects . . . 297
Limitations . . . 297
Recommendations . . . 297
How implemented . . . 298
Map alignment . . . 298
Effects . . . 298
Limitations . . . 298
How implemented . . . 299
How monitored . . . 299
Resident, nonresident, and transient programs . . . 299
Effects . . . 299
Recommendations . . . 300
How monitored . . . 300
Putting application programs above the 16MB line . . . 300
Effects . . . 300
Where useful . . . 301
Limitations . . . 301
How implemented . . . 301
Transaction isolation and real storage requirements . . . 301
Limiting the expansion of subpool 229 using VTAM pacing . . . 302
Recommendations . . . 302
How implemented . . . 303

Chapter 24. MRO and ISC . . . 305
CICS intercommunication facilities . . . 305
Limitations . . . 306
How implemented . . . 306
How monitored . . . 307
DBCTL session termination . . . 364
Dispatcher domain . . . 367
Dump domain . . . 373
System dumps . . . 373
Transaction dumps . . . 376
Enqueue domain . . . 378
Front end programming interface (FEPI) . . . 381
File control . . . 385
ISC/IRC system and mode entries . . . 396
System entry . . . 397
Mode entry . . . 405
ISC/IRC attach time entries . . . 410
Journalname . . . 411
Log stream . . . 413
LSRpool . . . 416
Monitoring domain . . . 428
Program autoinstall . . . 430
Loader . . . 431
Program . . . 442
Recovery manager . . . 445
Statistics domain . . . 451
Storage manager . . . 452
Table manager . . . 464
TCP/IP Services - resource statistics . . . 465
TCP/IP Services - request statistics . . . 467
Temporary storage . . . 468
Terminal control . . . 474
Transaction class (TCLASS) . . . 478
Transaction manager . . . 482
Transient data . . . 491
User domain statistics . . . 499
VTAM statistics . . . 500

Appendix B. Shared temporary storage queue server statistics . . . 503
Shared TS queue server: coupling facility statistics . . . 503
Shared TS queue server: buffer pool statistics . . . 505
Shared TS queue server: storage statistics . . . 506

| Appendix C. Coupling facility data tables server statistics . . . 509
| Coupling facility data tables: list structure statistics . . . 509
| Coupling facility data tables: table accesses statistics . . . 511
| Coupling facility data tables: request statistics . . . 512
| Coupling facility data tables: storage statistics . . . 513

| Appendix D. Named counter sequence number server . . . 515
| Named counter sequence number server statistics . . . 515
| Named counter server: storage statistics . . . 516

Appendix E. The sample statistics program, DFH0STAT . . . 519
Analyzing DFH0STAT Reports . . . 520
| System Status Report . . . 521
Transaction Manager Report . . . 526
Dispatcher Report . . . 528
Dispatcher TCBs Report . . . 530
Storage Reports . . . 533
Loader and Program Storage Report . . . 543
Storage Subpools Report . . . 547
Transaction Classes Report . . . 549
Transactions Report . . . 551
Transaction Totals Report . . . 552
Programs Report . . . 554
Program Totals Report . . . 556
DFHRPL Analysis Report . . . 558
Programs by DSA and LPA Report . . . 559
Temporary Storage Report . . . 561
Temporary Storage Queues Report . . . 566
Tsqueue Totals Report . . . 567
Temporary Storage Queues by Shared TS Pool . . . 567
Transient Data Report . . . 569
Transient Data Queues Report . . . 571
Transient Data Queue Totals Report . . . 572
Journalnames Report . . . 573
Logstreams Report . . . 574
Autoinstall and VTAM Report . . . 577
Connections and Modenames Report . . . 580
| TCP/IP Services Report . . . 584
LSR Pools Report . . . 587
Files Report . . . 592
File Requests Report . . . 593
Data Tables Reports . . . 595
Coupling Facility Data Table Pools Report . . . 597
Exit Programs Report . . . 598
Global User Exits Report . . . 599
DB2 Connection Report . . . 600
DB2 Entries Report . . . 606
Enqueue Manager Report . . . 609
Recovery Manager Report . . . 612
Page Index Report . . . 614

Appendix F. MVS and CICS virtual storage . . . 615
MVS storage . . . 616
The MVS common area . . . 616
Private area and extended private area . . . 619
The CICS private area . . . 619
High private area . . . 621
MVS storage above region . . . 623
The CICS region . . . 623
CICS virtual storage . . . 623
MVS storage . . . 624
The dynamic storage areas . . . 625
CICS subpools . . . 626
| Short-on-storage conditions caused by subpool storage fragmentation . . . 636
CICS kernel storage . . . 639

Appendix G. Performance data . . . 641
Variable costs . . . 641
Logging . . . 642
Syncpointing . . . 643
Additional costs . . . 644
Transaction initialization and termination . . . 644
Receive . . . 644
Attach/terminate . . . 644
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply in the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore this statement may not apply
to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM United Kingdom
Laboratories, MP151, Hursley Park, Winchester, Hampshire, England, SO21 2JN.
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
Other company, product, and service names may be trademarks or service marks
of others.
Preface
What this book is about
This book is intended to help you to:
v Establish performance objectives and monitor them
v Identify performance constraints, and make adjustments to the operational CICS
system and its application programs.
This book does not discuss the performance aspects of the CICS Transaction Server
for OS/390 Release 3 Front End Programming Interface. For more information
about the Front End Programming Interface, see the CICS Front End Programming
Interface User’s Guide. This book does not contain Front End Programming Interface
dump statistics.
If you have a performance problem and want to correct it, read Parts 3 and 4. You
may need to refer to various sections in Part 2.
Notes on terminology
The following abbreviations are used throughout this book:
v “CICS” refers to the CICS element in the CICS Transaction Server for OS/390®
v “MVS” refers to the operating system, which can be either an element of
OS/390, or MVS/Enterprise System Architecture System Product (MVS/ESA SP).
v “VTAM®” refers to ACF/VTAM.
v “DL/I” refers to the database component of IMS/ESA.
If you have any questions about the CICS Transaction Server for OS/390 library,
see CICS Transaction Server for OS/390: Planning for Installation which discusses both
hardcopy and softcopy books and the ways that the books can be ordered.
ACF/VTAM
ACF/VTAM Installation and Migration Guide, GC31-6547-01
ACF/VTAM Network Implementation Guide, SC31-6548
DATABASE 2
DB2 for OS/390 Administration Guide, SC26-8957
DFSMS/MVS
DFSMS/MVS NaviQuest User’s Guide, SC26-7194
DFSMS/MVS DFSMSdfp Storage Administration Reference, SC26-4920
IMS/ESA
IMS/ESA Version 5 Admin Guide: DB, SC26-8012
IMS/ESA Version 5 Admin Guide: System, SC26-8013
IMS/ESA Version 5 Performance Analyzer’s User’s Guide, SC26-9088
IMS/ESA Version 6 Admin Guide: DB, SC26-8725
IMS/ESA Version 6 Admin Guide: System, SC26-8720
IMS Performance Analyzer User’s Guide, SC26-9088
MVS
OS/390 MVS Initialization and Tuning Guide, SC28-1751
OS/390 MVS Initialization and Tuning Reference, SC28-1752
OS/390 MVS JCL Reference, GC28-1757
OS/390 MVS System Management Facilities (SMF), GC28-1783
OS/390 MVS Planning: Global Resource Serialization, GC28-1759
OS/390 MVS Planning: Workload Management, GC28-1761
OS/390 MVS Setting Up a Sysplex, GC28-1779
OS/390 RMF
OS/390 RMF User’s Guide, GC28-1949-01
OS/390 Performance Management Guide, SC28-1951-00
OS/390 RMF Report Analysis, SC28-1950-01
OS/390 RMF Programmers Guide, SC28-1952-01
Tuning tools
Generalized Trace Facility Performance Analysis (GTFPARS) Program
Description/Operations Manual, SB21-2143
Network Performance Analysis and Reporting System Program Description/Operations,
SB21-2488
Network Program Products Planning, SC30-3351
Others
CICS Workload Management Using CICSPlex SM and the MVS/ESA Workload
Manager, GG24-4286
System/390 MVS Parallel Sysplex Performance, GG24-4356
Bibliography xxi
System/390 MVS/ESA Version 5 Workload Manager Performance Studies, SG24-4352
IBM 3704 and 3705 Control Program Generation and Utilities Guide, GC30-3008
IMSASAP II Description/Operations, SB21-1793
Screen Definition Facility II Primer for CICS/BMS Programs, SH19-6118
Systems Network Architecture Management Services Reference, SC30-3346
Teleprocessing Network Simulator General Information, GH20-2487
Subsequent updates will probably be available in softcopy before they are available
in hardcopy. This means that at any time from the availability of a release, softcopy
versions should be regarded as the most up-to-date.
For CICS Transaction Server books, these softcopy updates appear regularly on the
Transaction Processing and Data Collection Kit CD-ROM, SK2T-0730-xx. Each reissue
of the collection kit is indicated by an updated order number suffix (the -xx part).
For example, collection kit SK2T-0730-06 is more up-to-date than SK2T-0730-05. The
collection kit is also clearly dated on the cover.
Updates to the softcopy are clearly marked by revision codes (usually a “#”
character) to the left of the changes.
| “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 replaces the
| chapter on Performance Reporter for MVS.
| A chapter has been added, “Chapter 19. Java program objects” on page 255, to
| introduce performance considerations when using Java language support.
| “Chapter 20. Java virtual machine (JVM) programs” on page 259 describes
| performance implications for programs run using the MVS Java Virtual Machine
| (JVM).
| “Chapter 8. Managing Workloads” on page 123 has been revised to discuss more
| fully the implications and benefits of using the MVS workload manager, and to
| introduce the CICSPlex SM dynamic routing program used by the WLM.
| Changes have also been made to several reports in the sample statistics program,
| DFH0STAT.
Good performance is the achievement of agreed service levels. This means that
system availability and response times meet users’ expectations, using the
resources available within the budget.
There are several basic steps in tuning a system, some of which are iterated
until performance is acceptable. These are:
1. Agree what good performance is.
2. Set up performance objectives (described in Chapter 1. Establishing
performance objectives).
3. Decide on measurement criteria (described in Chapter 3. Performance
monitoring and review).
4. Measure the performance of the production system.
5. Adjust the system as necessary.
6. Continue to monitor the performance of the system and anticipate future
constraints (see “Monitoring for the future” on page 15).
Parts 1 and 2 of this book describe how to monitor and assess performance.
Recommendations given in this book, based on current knowledge of CICS, are general in
nature, and cannot be guaranteed to improve the performance of any particular system.
Performance objectives often consist of a list of transactions and expected timings for
each. Ideally, through them, good performance can be easily recognized and you
know when to stop further tuning. They must, therefore, be:
v Practically measurable
v Based on a realistic workload
v Within the budget.
After you have defined the workload and estimated the resources required, you
must reconcile the desired response with what you consider attainable. These
objectives must then be agreed and regularly reviewed with users.
The word user here means the terminal operator. A user, so defined, sees CICS
performance as the response time, that is, the time between the last input action (for
example, a keystroke) and the expected response (for example, a message on the
screen). Several such responses might be required to complete a user function, and
the amount of work that a user perceives as a function can vary enormously. So,
the number of functions per period of time is not a good measure of performance,
unless, of course, there exists an agreed set of benchmark functions.
A more specific unit of measure is therefore needed. The words transaction and task
are used to describe units of work within CICS. Even these can lead to ambiguities,
because it would be possible to define transactions and tasks of varying size.
However, within a particular system, a series of transactions can be well defined
and understood so that it becomes possible to talk about relative performance in
terms of transactions per second (or minute, or hour).
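Once a series of transactions is well defined, throughput can be derived directly from timestamped completion records. A minimal sketch (the timestamps are invented; real figures would come from CICS monitoring or statistics data):

```python
# Compute average throughput from a list of transaction-completion
# timestamps (seconds since an arbitrary epoch). Illustrative only.

def transactions_per_second(completion_times):
    """Average transactions per second over the measured interval."""
    if len(completion_times) < 2:
        raise ValueError("need at least two completions to define an interval")
    interval = max(completion_times) - min(completion_times)
    if interval <= 0:
        raise ValueError("all completions share one timestamp")
    return len(completion_times) / interval

# Nine completions spread over four seconds of elapsed time.
times = [0.0, 0.4, 1.1, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
print(f"{transactions_per_second(times):.2f} transactions per second")
```

The same figure can, of course, be expressed per minute or per hour by scaling the result.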
are allocated, used, and released immediately on completion of the task. In this
mode the words transaction and task are more or less synonymous.
Conversational mode is potentially wasteful in a system that does not have
abundant resources. There are further questions and answers during which
resources are not released. Resources are, therefore, tied up unnecessarily waiting
for users to respond, and performance may suffer accordingly. Transaction and task
are, once again, more or less synonymous.

Conversational

├────────────────── Transaction ──────────────────┤
│                                                 │
├───────────────────── Task ──────────────────────┤
│         ┌────┐           ┌────┐                 │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
          └────┘           └────┘
Pseudoconversational mode allows for slow response from the user. Transactions
are broken up into more than one task, yet the user need not know this. The
resources in demand are released at the end of each task, giving a potential for
improved performance.

Pseudoconversational

├────────────────── Transaction ──────────────────┤
│                                                 │
├───────── Task ─────────┼──────── Task ──────────┤
│         ┌────┐           ┌────┐                 │
├──Input──┤Work├──Output─┼─Input──┤Work├──Output──┤
          └────┘           └────┘
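The difference between the two modes can be expressed as resource hold time. In this hypothetical sketch, a conversational task keeps its resources allocated across user think time, while pseudoconversational tasks release them between each input/output pair; all timings are invented for illustration:

```python
# Compare resource hold time (resource-seconds) for one user function
# made up of several input/work/output exchanges. Timings are assumed.

WORK_TIME = 0.2    # seconds of processing per exchange (assumed)
THINK_TIME = 10.0  # seconds the user spends thinking between exchanges (assumed)
EXCHANGES = 3      # number of input/work/output pairs in the function

def conversational_hold():
    # One long task: resources stay allocated through every think time.
    return EXCHANGES * WORK_TIME + (EXCHANGES - 1) * THINK_TIME

def pseudoconversational_hold():
    # One task per exchange: resources are released during think time.
    return EXCHANGES * WORK_TIME

print(f"conversational:       {conversational_hold():.1f} resource-seconds")
print(f"pseudoconversational: {pseudoconversational_hold():.1f} resource-seconds")
```

With these assumed timings the conversational function holds its resources for more than thirty times as long, which is why pseudoconversational design pays off when resources are scarce.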
You should consider whether to define your criteria in terms of the average, the
90th percentile, or even the worst-case response time. Your choice may depend on
the audit controls of your installation and the nature of the transactions in
question.
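Whichever criterion you choose, compute it the same way in every review. A sketch of the three candidate measures over a set of observed response times (the sample data and the nearest-rank percentile method are illustrative choices, not prescribed by CICS):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value >= pct% of the sample."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

responses = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 1.1, 1.6, 3.2]  # seconds (invented)

average = sum(responses) / len(responses)
p90 = percentile(responses, 90)
worst = max(responses)

print(f"average: {average:.2f}s  90th percentile: {p90:.2f}s  worst: {worst:.2f}s")
```

Note how one slow outlier barely moves the 90th percentile but dominates the worst case; this is why many installations set objectives at a percentile rather than at the extremes.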
Later, transactions with common profiles can be merged, for convenience, into
transaction categories.
Establish the priority of each transaction category, and note the periods during
which the priorities change.
See “Chapter 2. Gathering data for performance objectives” on page 7 for more
detailed recommendations on this step.
Any assumptions that you make about your installation must be used consistently
in future monitoring. These assumptions include computing-system factors and
business factors.
Business factors are concerned with work fluctuations. Allow for daily peaks (for
example, after receipt of mail), weekly peaks (for example, Monday peak after
weekend mail), and seasonal peaks as appropriate to the business. Also allow for
the peaks of work after planned interruptions, such as preventive maintenance and
public holidays.
Remember that, after the system has been brought into service, no amount of
tuning can compensate for poor initial design.
Post-development review
Review the performance of the complete system in detail. The main purposes are
to:
v Validate performance against objectives
v Identify resources whose use requires regular monitoring
v Feed the observed figures back into future estimates.
To achieve this, you should:
1. Identify discrepancies from the estimated resource use
2. Identify the categories of transactions that have caused these discrepancies
3. Assign priorities to remedial actions
4. Identify resources that are consistently heavily used
5. Provide utilities for graphic representation of these resources
6. Project the loadings against the planned future system growth to ensure that
adequate capacity is available
7. Update the design document with the observed performance figures
8. Modify the estimating procedures for future systems.
The data logged should include the date and time, location, duration, cause (if
known), and the action taken to resolve the problem.
Tasks (not to be confused with the task component of a CICS transaction) include:
v Running one or more of the tools described in “Chapter 4. An overview of
performance-measurement tools” on page 23
v Collating the output
v Examining it for trends.
You should allocate responsibility for these tasks between operations personnel,
programming personnel, and analysts. You must identify the resources that are to
be regarded as critical, and set up a procedure to highlight any trends in the use of
these resources.
Because the tools require resources, they may disturb the performance of a
production system.
Give emphasis to peak periods of activity, for both the new application and the
system as a whole. It may be necessary to run the tools more frequently at first to
confirm that the expected peaks correspond with the actual ones.
It is not normally practical to keep all the detailed output. Arrange for summarized
reports to be filed with the corresponding CICS statistics, and for the output from
the tools to be held for an agreed period, with customary safeguards for its
protection.
When to review?
You should plan for the following broad levels of monitoring activity:
v Dynamic (online) monitoring.
v Daily monitoring.
v Periodic (weekly and monthly) monitoring.
v Keeping sample reports as historical data. You can also keep historical data in a
database such as the Performance Reporter database.
Dynamic monitoring
Dynamic monitoring is “on-the-spot” monitoring that you can, and should, carry
out at all times. This type of monitoring generally includes the following:
v Observing the system’s operation continuously to discover any serious
short-term deviation from performance objectives.
Use the CEMT transaction (CEMT INQ|SET MONITOR), together with end-user
feedback. You can also use the Resource Measurement Facility (RMF) to collect
information about processor, channel, coupling facility, and I/O device usage.
v Obtaining status information. Together with status information obtained by
using the CEMT transaction, you can get status information on system
processing during online execution. This information could include the queue
levels, active regions, active terminals, and the number and type of
conversational transactions. You could get this information with the aid of an
automated program invoked by the master terminal operator. At prearranged
times in the production cycle (such as before scheduling a message, at shutdown
of part of the network, or at peak loading), the program could capture the
transaction processing status and measurements of system resource levels.
Daily monitoring
The overall objective here is to measure and record key system parameters daily.
The daily monitoring data usually consists of counts of events and gross level
timings. In some cases, the timings are averaged for the entire CICS system.
v Record both the daily average and the peak period (usually one hour) average
of, for example, messages, tasks, processor usage, I/O events, and storage used.
Compare these against your major performance objectives and look for adverse
trends.
v List the CICS-provided statistics at the end of every CICS run. You should date
and time-stamp the data that is provided, and file it for later review. For
example, in an installation that has settled down, you might review daily data at
the end of the week; generally, you can carry out reviews less frequently than
collection, for any one type of monitoring data. If you know there is a problem,
you might increase the frequency; for example, reviewing daily data as soon as
it becomes available.
You should be familiar with all the facilities in CICS for providing statistics at
times other than at shutdown. The main facilities, using the CEMT transaction,
are invocation from a terminal (with or without reset of the counters) and
automatic time-initiated requests.
v File an informal note of any incidents reported during the run. These may
include a shutdown of CICS that causes a gap in the statistics, a complaint from
your end users about poor response times, a terminal going out of service, or any
other item of significance. Such notes are useful when reconciling disparities in
detailed performance figures that may be discovered later.
v Print the system console log for the period when CICS was active, and file a
copy of the console log in case it becomes necessary to review the CICS system
performance in the light of the concurrent batch activity.
v Run one of the performance analysis tools described in “Chapter 4. An overview
of performance-measurement tools” on page 23 for at least part of the day if
there is any variation in load from day to day. File the summaries of the reports
produced by the tools you use.
v Transcribe onto a graph any items identified as being consistently heavily used
in the post-development review phase (described in “Chapter 2. Gathering data
for performance objectives” on page 7).
v Collect CICS statistics, monitoring data, and RMF™ data into the Performance
Reporter database.
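The first item in the list above — recording the daily average and the peak-period average, and comparing them — can be sketched as follows (Python for illustration only; the hourly task counts are hypothetical):

```python
# Hypothetical tasks-per-hour counts for one CICS region over a day.
tasks_per_hour = [120, 150, 400, 900, 1400, 1600, 1500, 1450,
                  800, 600, 1300, 1700, 1100, 500, 300, 200,
                  150, 120, 100, 90, 80, 80, 90, 100]

daily_average = sum(tasks_per_hour) / len(tasks_per_hour)
peak_hour     = max(tasks_per_hour)   # the "peak period" figure to track

# An adverse trend is a peak that grows faster than the daily average.
peak_to_average = peak_hour / daily_average
```

A rising peak-to-average ratio is one of the adverse trends worth highlighting when you compare these figures against your major performance objectives.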
Weekly monitoring
Here, the objective is to periodically collect detailed statistics on the operation of
your system for comparison with your system-oriented objectives and workload
profiles.
v Run the CICS monitoring facility with performance class active, and process it. It
may not be necessary to do this every day, but it is important to do it regularly
and to keep the sorted summary output as well as the detailed reports.
Monthly monitoring
v Run RMF.
v Review the RMF and performance analysis listings. If there is any indication of
excessive resource usage, follow any previously agreed procedures (for example,
notify your management), and do further monitoring.
v Date- and time-stamp the RMF output and keep it for use in case performance
problems start to arise. You can also use the output in making estimates, when
detailed knowledge of component usage may be important. These aids provide
detailed data on the usage of resources within the system, including processor
usage, use of DASD, and paging rates.
v Produce monthly Performance Reporter reports showing long-term trends.
In a complex production system there is usually too much performance data for it
to be comprehensively reviewed every day. Key components of performance
degradation can be identified with experience, and those components are the ones
to monitor most closely. You should identify trends of usage and other factors
(such as batch schedules) to aid in this process.
Generally, there should be a progressive review of data. You should review daily
data weekly, and weekly data monthly, unless any incident report or review raises
questions that require an immediate check of the next level of detail. This should
be enough to detect out-of-line situations with a minimum of effort.
The review procedure also ensures that additional data is available for problem
determination, should it be needed. The weekly review should require
approximately one hour, particularly after experience has been gained in the
process and after you are able to highlight the items that require special
consideration. The monthly review will probably take half a day at first. After the
procedure has been in force for a period, it will probably be completed more
quickly. However, when new applications are installed or when the transaction
volumes or numbers of terminals are increased, the process is likely to take longer.
Review the data from the RMF listings only if there is evidence of a problem from
the gross-level data, or if there is an end-user problem that cannot be solved by the
review process. Thus, the only time that needs to be allocated regularly to the
detailed data is the time required to ensure that the measurements were correctly
made and reported.
Do not discard all the data you collect after a certain period. Discard most of it,
but leave a representative sample. For example, do not throw away all weekly reports
after three months; it is better to save those dealing with the last week of each
month. At the end of the year, you can discard all except the last week of each
quarter. At the end of the following year, you can discard all the previous year’s
data except for the midsummer week. Similarly, you should keep a representative
selection of daily figures and monthly figures.
The intention is that you can compare any report for a current day, week, or month
with an equivalent sample, however far back you want to go. The samples become
more widely spaced but do not cease.
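The thinning policy described above can be expressed as a simple filter. The sketch below is illustrative only (Python; the 90-day and one-year boundaries follow the text, and the function name is hypothetical):

```python
import datetime

def keep_weekly_report(week_ending, today):
    """Decide whether a weekly report is still worth keeping.

    Policy from the text: keep everything for about three months,
    then only the last week of each month, and after a year only
    the last week of each quarter.
    """
    age_days = (today - week_ending).days
    if age_days <= 90:
        return True                 # keep all recent weekly reports
    # A report covers "the last week of the month" if adding seven
    # days crosses into the next month.
    last_week_of_month = (week_ending
                          + datetime.timedelta(days=7)).month != week_ending.month
    if age_days <= 365:
        return last_week_of_month
    # Older than a year: keep only the quarter-end months.
    return last_week_of_month and week_ending.month in (3, 6, 9, 12)
```

The same shape of filter, with different boundaries, applies to the daily and monthly samples the text recommends keeping.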
When you measure performance against objectives and report the results to users,
you have to identify any systematic differences between the measured data and the
estimates.
If the measurements differ greatly from the estimates, you must revise application
response-time objectives or plan a reduced application workload, or upgrade your
system. If the difference is not too large, however, you can embark on tuning the
total system. Parts 3 and 4 of this book tell you how to do this tuning activity.
Some of the questions are not strictly to do with performance. For instance, if the
transaction statistics show a high frequency of transaction abends with usage of the
abnormal condition program, this could perhaps indicate signon errors and,
therefore, a lack of terminal operator training. This, in itself, is not a performance
problem, but is an example of the additional information that can be provided by
monitoring.
1. How frequently is each available function used?
a. Has the usage of transaction identifiers altered?
b. Does the mix vary from one time of the day to another?
c. Should statistics be requested more frequently during the day to verify this?
In these cases, you have to identify the function by program or data set usage,
with appropriate reference to the CICS program statistics, file statistics, or other
statistics. In addition, you may be able to put user tags into the monitoring
data (for example, a user character field in the case of the CICS monitoring
facility), which can be used as a basis for analysis by products such as the
TIVOLI Performance Reporter.
In addition to the above, you should regularly review certain items in the CICS
statistics, such as:
v Times the MAXTASK limit reached (transaction manager statistics)
v Peak tasks (transaction class statistics)
v Times cushion released (storage manager statistics)
v Storage violations (storage manager statistics)
v Maximum RPLs posted (VTAM statistics)
v Short-on-storage count (storage manager statistics)
v Wait on string total (file control statistics)
v Use of DFHSHUNT log streams.
v Times aux. storage exhausted (temporary storage statistics)
v Buffer waits (temporary storage statistics)
v Times string wait occurred (temporary storage statistics)
v Times NOSPACE occurred (transient data global statistics)
v Intrapartition buffer waits (transient data global statistics)
v Intrapartition string waits (transient data global statistics)
You should also satisfy yourself that large numbers of dumps are not being
produced.
Furthermore, you should review the effects of and reasons for system outages and
their duration. If there is a series of outages, you may be able to detect a common
cause of them.
When a major change to the system is planned, increase the monitoring frequency
before and after the change. A major change includes the addition of:
v A new application or new transactions
If the system performance has altered as a result of a major change to the system,
data for before-and-after comparison of the appropriate statistics provides the best
way of identifying the reasons for the alteration.
Consider having extra tools installed to make it easier to project and test future
usage of the system. Tools such as the Teleprocessing Network Simulator (TPNS)
program can be used to test new functions under volume conditions before they
actually encounter production volumes. Procedures such as these can provide you
with insight as to the likely performance of the production system when the
changes are implemented, and enable you to plan option changes, equipment
changes, scheduling changes, and other methods for stopping a performance
problem from arising.
You have to monitor all of these factors to determine when constraints in the
system may develop. A variety of programs could be written to monitor all these
resources. Many of these programs are currently supplied as part of IBM products
such as CICS or IMS/ESA, or are supplied as separate products. This chapter
describes some of the products that can give performance information on different
components of a production system.
The list of products in this chapter is far from being an exhaustive summary of
performance monitoring tools, yet the data provided from these sources comprises
a large amount of information. To monitor all this data is an extensive task.
Furthermore, only a small subset of the information provided is important for
identifying constraints and determining necessary tuning actions, and you have to
identify this specific subset for your particular CICS system.
You also have to bear in mind that there are two different types of tools:
1. Tools that directly measure whether you are meeting your objectives
2. Additional tools to look into internal reasons why you might not be meeting
objectives.
None of the tools can directly measure whether you are meeting end-user response
time objectives. The lifetime of a task within CICS is related to response time,
and poor response time usually correlates with long task lifetime within CICS, but
the correlation is not exact because of other contributors to response time.
Obviously, you want tools that help you to measure your objectives. In some cases,
you may choose a tool that looks at some internal function that contributes
towards your performance objectives, such as task lifetime, rather than directly
measuring the actual objective, because of the difficulty of measuring it.
When you have gained experience of the system, you should have a good idea of
the particular things that are most significant in that particular system and,
therefore, what things might be used as the basis for exception reporting. Then,
one way of simply monitoring the important data might be to set up
exception-reporting procedures that filter out the data that is not essential to the
tuning process. This involves setting standards for performance criteria that
identify constraints, so that the exceptions can be distinguished and reported while
data within the normal range is filtered out.
You often have to gather a considerable amount of data before you can fully
understand the behavior of your own system and determine where a tuning effort
can provide the best overall performance improvement. Familiarity with the
analysis tools and the data they provide is basic to any successful tuning effort.
Remember, however, that all monitoring tools cost processing effort to use. Typical
costs are 5% additional processor cycles for the CICS monitoring facility
(performance class), and up to 1% for the exception class. The CICS trace facility
overhead is highly dependent on the workload used. The overhead can be in
excess of 25%.
In general, then, we recommend that you use the following tools in the sequence
of priorities shown below:
1. CICS statistics
2. CICS monitoring data
3. CICS internal and auxiliary trace.
In this chapter, the overview of the various tools for gathering or analyzing data is
arranged as follows:
v CICS performance data
v Operating system performance data
v Performance data for other products.
CICS statistics
CICS statistics are the simplest and the most important tool for permanently
monitoring a CICS system. They collect information on the CICS system as a
whole, without regard to tasks.
The CICS statistics domain writes five types of statistics to SMF data sets: interval,
end-of-day, requested, requested reset, and unsolicited statistics.
Each of these sets of data is described, and a more general description of CICS
statistics is given, in “Chapter 5. Using CICS statistics” on page 39 and “Appendix A.
CICS statistics tables” on page 345.
See “Appendix E. The sample statistics program, DFH0STAT” on page 519 for the
details and interpretation of the report.
The CICS trace facilities can also be useful for analyzing performance problems
such as excessive waiting on events in the system, or constraints resulting from
inefficient system setup or application program design.
Several types of tracing are provided by CICS, and are described in the CICS
Problem Determination Guide. Trace is controlled by:
v The system initialization parameters (see the CICS System Definition Guide).
v CETR (see the CICS Supplied Transactions manual). CETR also provides for trace
selectivity by, for instance, transaction type or terminal name.
v CEMT SET INTTRACE, CEMT SET AUXTRACE, or CEMT SET GTFTRACE (see
the CICS Supplied Transactions manual).
v EXEC CICS SET TRACEDEST, EXEC CICS SET TRACEFLAG, or EXEC CICS
SET TRACETYPE (see the CICS System Programming Reference for programming
information).
This data, used with the data produced by the measurement tools, provides the
basic information that you should have for evaluating your system’s performance.
RMF measures and reports system activity and, in most cases, uses a sampling
technique to collect data. Reporting can be done with one of three monitors:
1. Monitor I measures and reports the use of system resources (that is, the
processor, I/O devices, storage, and data sets on which a job can enqueue
during its execution). It runs in the background and measures data over a
period of time. Reports can be printed immediately after the end of the
measurement interval, or the data can be stored in SMF records and printed later.
RMF should be active in the system 24 hours a day, and you should run it at a
dispatching priority above other address spaces in the system so that:
v The reports are written at the interval requested
v Other work is not delayed because of locks held by RMF.
A report is generated at the time interval specified by the installation. The largest
system overhead of RMF occurs during the report generation: the shorter the
interval between reports, the larger the burden on the system. An interval of 60
minutes is recommended for normal operation. When you are addressing a specific
problem, reduce the time interval to 10 or 15 minutes. The RMF records can be
directed to the SMF data sets with the NOREPORT and RECORD options; the
report overhead is not incurred and the SMF records can be formatted later.
Note: There may be some discrepancy between the CICS initialization and
termination times when comparing RMF reports against output from the
CICS monitoring facility.
For further details of RMF, see the OS/390 Resource Measurement Facility (RMF)
User's Guide, SC28-1949.
Guidance on how to use RMF with the CICS monitoring facility is given in “Using
CICS monitoring SYSEVENT information with RMF” on page 67. In terms of CPU
costs this is an inexpensive way to collect performance information. Shorter reports
throughout the day are needed for RMF because a report of a full day’s length
includes startup and shutdown and does not identify the peak period.
GTF should run at a dispatching priority (DPRTY) of 255 so that records are not
lost. If GTF records are lost and the DPRTY is specified at 255, specify the BUF
operand on the execute statement as greater than 10 buffers.
You can use these options to get the data normally needed for CICS performance
studies:
TRACE=SYS,RNIO,USR (VTAM)
TRACE=SYS (Non-VTAM)
If you need data on the units of work dispatched by the system and on the length
of time it takes to execute events such as SVCs, LOADs, and so on, the options are:
TRACE=SYS,SRM,DSP,TRC,PCI,USR,RNIO
The TRC option produces the GTF trace records that indicate GTF interrupts of
other tasks that it is tracing. This set of options uses a higher percentage of
processor resources, and you should use it only when you need a detailed analysis
or timing of events.
No data-reduction programs are provided with GTF. To extract and summarize the
data into a meaningful and manageable form, you can either write a
data-reduction program or use one of the program offerings that are available.
For further details, see the OS/390 MVS Diagnosis: Tools and Service Aids.
GTF reports
You can produce reports from GTF data using the interactive problem control
system (IPCS). The reports generated by IPCS are useful in evaluating both system
and individual job performance. It produces job and system summary reports as
well as an abbreviated detail trace report. The summary reports include
information on MVS dispatches, SVC usage, contents supervision, I/O counts and
timing, seek analysis, page faults, and other events traced by GTF. The detail trace
reports can be used to follow a transaction chronologically through the system.
Before GTF is run, you should plan the events to be traced. If specific events such
as start I/Os (SIOs) are not traced, and the SIO-I/O timings are required, the trace
must be re-created to get the data needed for the reports.
If there are any alternative paths to a control unit in the system being monitored,
you should include the PATHIO input statement in the report execution statement.
Without the PATHIO operand, there are multiple I/O lines on the report for the
device with an alternative path: one line for the primary device address and one
for the secondary device address. If this operand is not included, the I/Os for the
primary and alternate device addresses have to be combined manually to get the
totals for that device.
A large number of ready-made reports are available, and in addition you can
generate your own reports to meet specific needs.
In the reports the Tivoli Performance Reporter uses data from CICS monitoring
and statistics. Tivoli Performance Reporter also collects data from the MVS system
and from products such as RMF, TSO, IMS™ and NetView. This means that data
from CICS and other systems can be shown together, or can be presented in
separate reports.
Reports can be presented as plots, bar charts, pie charts, tower charts, histograms,
surface charts, and other graphic formats. The Tivoli Performance Reporter for
OS/390 simply passes the data and formatting details to the Graphical Data Display
Manager (GDDM).
See “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 for more
information about the Tivoli Performance Reporter for OS/390 as a CICS
performance measurement tool.
This section gives an overview of the tools that can be used to monitor information
on various access methods and other programs used with CICS and the operating
system.
ACF/VTAM
ACF/VTAM® (program number 5735-RC2) provides information about buffer
usage either to GTF in SMF trace data or to the system console through DISPLAY
and BFRUSE commands. Other tuning statistics can also be recorded on the system
console through the MODIFY procname, TNSTAT command. (This command is
described in the ACF/VTAM Diagnostic Techniques manual.)
LISTCAT (VSAM)
VSAM LISTCAT provides information about the current state of VSAM
data sets. This information includes counts of the following:
v Whether and how often control interval (CI) or control area (CA) splits occur
(splits should occur only rarely, especially CA splits).
DB monitor (IMS)
The IMS DB monitor report print program (DFSUTR30) provides information on
batch activity (a single-thread environment) to IMS databases, and is activated
through the DLMON system initialization parameter. As in the case of CICS
auxiliary trace, this is for more in-depth investigation of performance problems by
single-thread studies of individual transactions.
The DB monitor cannot be started or stopped dynamically from a terminal. After
the DB monitor is started in a CICS environment, the only way to stop it is to
shut down CICS.
When the DB monitor runs out of space on the IMSMON data set, it stops
recording. The IMSMON data set is a sequential data set, for which you can
allocate space with IEFBR14. The DCB attributes are:
DCB=(RECFM=VB,LRECL=2044,BLKSIZE=2048)
If you are running the DB monitor in a multithread (more than one) environment,
the only statistics that are valid are the VSAM buffer pool statistics.
DBT can help you maintain data integrity by assisting the detection and repair of
errors before a problem disrupts operations. It speeds database reorganization by
providing a clear picture of how data is stored in the database, by allowing the
user to simulate various database designs before creating a new database, and by
providing various sort, unload, and reload facilities. DBT also improves
For further information, see the IMS System Utilities/Database Tools (DBT) General
Information manual.
IMSASAP:
v Produces a comprehensive set of reports, organized by level of detail and area of
analysis, to satisfy a wide range of IMS/ESA system analysis requirements
v Provides report selection and reporting options to satisfy individual
requirements and to assist in efficient analysis
v Produces alphanumerically collated report items in terms of ratios, rates, and
percentages to facilitate a comparison of results without additional computations
v Reports on schedules in progress including wait-for-input and batch message
processing programs
v Provides reports on IMS/ESA batch programs.
Statistics are collected during CICS online processing for later offline analysis. The
statistics domain writes statistics records to a System Management Facilities (SMF)
data set. The records are of SMF type 110, sub-type 002. Monitoring records and
some journaling records are also written to the SMF data set as type 110 records.
You might find it useful to process statistics and monitoring records together. For
programming information about SMF, and about other SMF data set
considerations, see the CICS Customization Guide.
End-of-day statistics are always written to the SMF data set, regardless of
the settings of any of the following:
v The system initialization parameter STATRCD
v CEMT SET STATISTICS
v The RECORDING option of EXEC CICS SET STATISTICS.
Requested statistics
are statistics that the user has asked for by using one of the following
commands:
v CEMT PERFORM STATISTICS RECORD
v EXEC CICS PERFORM STATISTICS RECORD
v EXEC CICS SET STATISTICS ON|OFF RECORDNOW.
These commands cause the statistics to be written to the SMF data set
immediately, instead of waiting for the current interval to expire. The
PERFORM STATISTICS command can be issued with any combination of
resource types or you can ask for all resource types with the ALL option.
For more details about CEMT commands see the CICS Supplied
Transactions; for programming information about the equivalent EXEC
CICS commands, see the CICS System Programming Reference.
Requested reset statistics
differ from requested statistics in that all statistics are collected and
statistics counters are reset. You can reset the statistics counters using the
following commands:
v CEMT PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS PERFORM STATISTICS RECORD ALL RESETNOW
v EXEC CICS SET STATISTICS ON|OFF RESETNOW RECORDNOW
The PERFORM STATISTICS command must be issued with the ALL option
if RESETNOW is present.
You can also invoke requested reset statistics when changing the recording
status from ON to OFF, or vice versa, using CEMT SET STATISTICS. The action
taken at each event depends on the recording status:

With RECORDING ON:
v Expiry of INTERVAL: writes to the SMF data set and resets counters.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set and resets counters.

With RECORDING OFF:
v Expiry of INTERVAL: no action.
v EXEC CICS PERFORM STATISTICS: writes to the SMF data set; resets counters
only if ALL(RESETNOW) is specified.
v CEMT PERFORM STATISTICS: writes to the SMF data set; resets counters only
if ALL and RESETNOW are specified.
v Expiry of ENDOFDAY: writes to the SMF data set and resets counters.
Resetting statistics counters
Unsolicited statistics
are automatically gathered by CICS for dynamically
allocated and deallocated resources. CICS writes these
statistics to the SMF data set just before the resource is deleted, regardless of
the status of statistics recording.
Note: To ensure that accurate statistics are recorded, unsolicited statistics (USS)
must be collected. An unsolicited record resets the statistics fields it contains.
In particular, during a normal CICS shutdown, files are closed before the
end-of-day statistics are gathered. This means that file and LSRPOOL end-of-day
statistics will be zero, while the correct values will be recorded as
unsolicited statistics.
For detailed information about the reset characteristics, see “Appendix A. CICS
statistics tables” on page 345.
The arrival of the end-of-day time, as set by the ENDOFDAY parameters, always
causes the current interval to be ended (possibly prematurely) and a new interval
to be started. Only end-of-day statistics are collected at the end-of-day time, even if
it coincides exactly with the expiry of an interval.
Changing the end-of-day value immediately changes the times at which INTERVAL
statistics are recorded. In Figure 2, when the end-of-day is changed from midnight
to 1700 just after 1400, the interval times are recalculated from the new
end-of-day time. Hence an interval expires at 1500, as well as at the times after
the new end-of-day time.
When you change any of the INTERVAL values (and also when CICS is
initialized), the length of the current (or first) interval is adjusted so that it expires
after an integral number of intervals from the end-of-day time.
(Figure 2. A timeline from 0800 to 2100 showing interval (I) and end-of-day (E)
statistics recordings around a change to INTERVAL(020000) and a change to
ENDOFDAY(170000).)
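The way the first interval is adjusted can be shown with simple arithmetic. The sketch below (Python, illustrative only, with all times expressed as seconds since midnight) finds the first expiry that lies an integral number of intervals from the end-of-day time:

```python
def first_interval_expiry(now, end_of_day, interval):
    """All times are seconds since midnight; 'interval' is a length in
    seconds.  Returns the first expiry strictly after 'now' that lies
    an integral number of intervals from the end-of-day time."""
    since_eod = (now - end_of_day) % 86400       # seconds since the last end-of-day
    intervals_elapsed = since_eod // interval + 1
    return (end_of_day + intervals_elapsed * interval) % 86400

# With ENDOFDAY at 1700 and a three-hour INTERVAL, a change made at
# 1430 gives a first expiry at 1700 (boundaries ... 1100, 1400, 1700).
first = first_interval_expiry(now=14 * 3600 + 1800,
                              end_of_day=17 * 3600,
                              interval=3 * 3600)
```

The same arithmetic explains why the first interval after CICS initialization is usually shorter than the INTERVAL value itself.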
Note: Interval statistics are taken precisely on a minute boundary. Thus users with
many CICS regions on a single MVS image could have every region writing
statistics at the same time, if the regions have both the same interval and the
same end-of-day time specified. This could cost several seconds of
processor time across the entire image. If the cost becomes too noticeable, in
terms of user response time around the interval expiry, you should consider
staggering the intervals. One way of doing this, while still maintaining very
close correlation of intervals for all regions, is to use a PLT program like the
supplied sample DFH$STED, which changes the end-of-day, and thus each
interval expiry boundary, by a few seconds. See the CICS Operations and
Utilities Guide for further information about DFH$STED.
For more information about the statistics domain statistics, see page 451.
For more information about transaction manager statistics, see page 482.
For more information, see the transaction class statistics on page 478.
The CICS DB2 global and resource statistics are described in the CICS statistics
tables on page 352. For more information about CICS DB2 performance, see the
CICS DB2 Guide.
Dispatcher statistics
TCB statistics
The “Accum CPU time/TCB” is the amount of CPU time consumed by each CICS
TCB since the last time statistics were reset. Totaling the values of “Accum time in
MVS wait” and “Accum time dispatched” gives you the approximate time since
the last time CICS statistics were reset. The ratio of the “Accum CPU time /TCB”
to this time shows the percentage usage of each CICS TCB. The “Accum CPU
time/TCB” does not include uncaptured time, thus even a totally busy CICS TCB
would be noticeably less than 100% busy from this calculation. If a CICS region is
more than 70% busy by this method, you are approaching that region’s capacity.
The 70% calculation can only be very approximate, however, depending on such
factors as the workload in operation, the mix of activity within the workload, and
which release of CICS you are currently using. Alternatively, you can determine
whether your system is approaching capacity by using RMF to obtain a definitive
measurement, or you can use RMF with your monitoring system. For more
information, see OS/390 RMF V2R6 Performance Management Guide, SC28-1951.
Note: “Accum time dispatched” is NOT a measurement of CPU time because MVS
can run higher priority work, for example, all I/O activity and higher
priority regions, without CICS being aware.
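The busy-percentage calculation described above can be sketched as follows. The parameter names are illustrative; the values come from the dispatcher TCB statistics:

```python
def tcb_busy_percent(accum_cpu_time, accum_mvs_wait, accum_dispatched):
    """Approximate percentage usage of a CICS TCB since the last
    statistics reset: "Accum CPU time/TCB" divided by the approximate
    elapsed time ("Accum time in MVS wait" + "Accum time dispatched").
    Uncaptured time is not included, so even a saturated TCB reads
    below 100%; above roughly 70%, the region approaches capacity."""
    elapsed = accum_mvs_wait + accum_dispatched
    return 100.0 * accum_cpu_time / elapsed
```

For example, 600 seconds of CPU against 150 seconds waiting plus 850 seconds dispatched gives a 60% busy TCB.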
For more information, see the CICS statistics tables on page 452.
Loader statistics
“Average loading time” = “Total loading time” / “Number of library load
requests”. This indicates the response time overhead suffered by tasks when
accessing a program which has to be brought into storage. If “Average loading
time” has increased over a period, consider MVS library lookaside usage.
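A minimal sketch of the calculation (names illustrative):

```python
def average_loading_time(total_loading_time, library_load_requests):
    """'Average loading time' = 'Total loading time' / 'Number of
    library load requests', as defined above."""
    return total_loading_time / library_load_requests

# Comparing the result across two statistics periods shows whether
# program-load overhead is growing, which would prompt a look at
# MVS library lookaside usage.
```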
“Not-in-use” program storage is freed progressively so that the “Amount of the
dynamic storage area occupied by not in use programs”, and the free storage in
the dynamic storage area are optimized for performance. Loader attempts to keep
not-in-use programs in storage long enough to reduce the performance overhead of
reloading the program. As the amount of free storage in the dynamic storage
area decreases, the not-in-use programs are freemained, least frequently used
first, to avoid a potential short-on-storage condition.
Note: The values reported are snapshots taken at the instant the statistics are
gathered, and will have varied since the last report.
Note: This factor is meaningful only if there has been a substantial degree of
loader domain activity during the interval and may be distorted by startup
usage patterns.
This is an indication of the response time impact which may be suffered by a task
due to contention for loader domain resources.
Note: This calculation is not performed on requests that are currently waiting.
For more information, see the CICS statistics tables on page 431.
The “Writes more than control interval” is the number of writes of records whose
length was greater than the control interval (CI) size of the TS data set.
The number of “times aux. storage exhausted” is the number of situations where
one or more transactions may have been suspended because of a NOSPACE
condition, or (if using a HANDLE CONDITION NOSPACE command, RESP on
the WRITEQ TS command, or the WRITEQ TS NOSUSPEND command) may
have been forced to abend. If this item appears in the statistics, increase the size of
the temporary storage data set. “Buffer writes” is the number of WRITEs to the
temporary storage data set. This includes both WRITEs necessitated by recovery
requirements and WRITEs forced by the buffer being needed to accommodate
another CI. I/O activity caused by the latter reason can be minimized by
increasing buffer allocation using the system initialization parameter, TS=(b,s),
where b is the number of buffers and s is the number of strings.
The “Peak number of strings in use” item is the peak number of concurrent I/O
operations to the data set. If this is significantly less than the number of strings
specified in the TS system initialization parameter, consider reducing the system
initialization parameter to approach this number.
If the “Times string wait occurred” is not zero, consider increasing the number of
strings. For details about adjusting the size of the TS data set and the number of
strings and buffers, see the CICS System Definition Guide.
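The temporary storage tuning advice above can be summarized in a small heuristic. This is a sketch under assumed dictionary keys, not the DFHSTUP field labels:

```python
def suggest_ts_tuning(stats, specified_strings):
    """Heuristic sketch of the auxiliary temporary storage advice
    described above. Keys are illustrative, not DFHSTUP labels."""
    advice = []
    if stats["times_aux_storage_exhausted"] > 0:
        advice.append("increase the size of the TS data set")
    if stats["times_string_wait_occurred"] > 0:
        advice.append("increase the number of strings (s in TS=(b,s))")
    elif stats["peak_strings_in_use"] < specified_strings:
        advice.append("reduce strings toward the observed peak")
    return advice
```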
For more information, see the CICS statistics tables on page 468.
You should aim to minimize the “Intrapartition buffer waits” and “string waits” by
increasing the number of buffers and the number of strings if you can afford any
associated increase in your use of real storage.
| For more information, see the CICS statistics tables on pages 503 and 468.
|
| User domain statistics
| The user domain attempts to minimize the number of times it calls the security
| domain to create user security blocks (such as the ACEE), because this operation is
| very expensive in both processor time and input/output operations. If possible,
| each unique representation of a user is shared between multiple transactions. A
| user-domain representation of a user can be shared if the following attributes are
| identical:
| v The userid.
| v The groupid.
| The user domain keeps a count of the number of concurrent usages of a shared
| instance of a user. The count includes the number of times the instance has been
| associated with a CICS resource (such as a transient data queue) and the number
| of active transactions that are using the instance.
| Whenever CICS adds a new user instance to the user domain, the domain attempts
| to locate that instance in its user directory. If the user instance already exists with
| the parameters described above, that instance is reused. USGDRRC records how
| many times this is done. However, if the user instance does not already exist, it
| needs to be added. This requires an invocation of the security domain and the
| external security manager. USGDRNFC records how many times this is necessary.
| When the count associated with the instance is reduced to zero, the user instance is
| not immediately deleted: instead it is placed in a timeout queue controlled by the
| USRDELAY system initialization parameter. While it is in the timeout queue, the
| user instance is still eligible to be reused. If it is reused, it is removed from the
| timeout queue. USGTORC records how many times a user instance is reused while
| it was being timed out, and USGTOMRT records the average time that user
| instances remain on the timeout queue until they are removed.
| However, if a user instance remains on the timeout queue for a full USRDELAY
| interval without being reused, it is deleted. USGTOEC records how many times
| this happens.
| You should be aware that high values of USRDELAY may affect your security
| administrator’s ability to change the authorities and attributes of CICS users,
| because those changes are not reflected in CICS until the user instance is refreshed
| in CICS by being flushed from the timeout queue after the USRDELAY interval.
| Some security administrators may require you to specify USRDELAY=0. This still
| allows some sharing of user instances if the usage count is never reduced to zero.
| Generally, however, remote users are flushed out immediately after the transaction
| they are executing has terminated, so that their user control blocks have to be
| reconstructed frequently. This results in poor performance. For more information,
| see “User domain statistics” on page 499.
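The sharing and timeout behavior described above can be sketched as follows. This is an illustrative model, not CICS code; the counters are named after the statistics fields:

```python
class UserDirectory:
    """Sketch of user-instance sharing in the user domain, as described
    above. Instances are keyed by (userid, groupid); counter names
    mirror the statistics fields, but the logic is illustrative."""
    def __init__(self):
        self.directory = {}       # (userid, groupid) -> usage count
        self.timeout_queue = set()
        self.USGDRRC = 0          # instance found in directory and reused
        self.USGDRNFC = 0         # not found: security domain / ESM invoked
        self.USGTORC = 0          # reused while on the timeout queue

    def add_user(self, userid, groupid):
        key = (userid, groupid)
        if key in self.directory:
            if key in self.timeout_queue:
                self.timeout_queue.discard(key)   # rescued from timeout
                self.USGTORC += 1
            self.USGDRRC += 1
        else:
            self.USGDRNFC += 1    # expensive: build security block (ACEE)
            self.directory[key] = 0
        self.directory[key] += 1

    def release_user(self, userid, groupid):
        key = (userid, groupid)
        self.directory[key] -= 1
        if self.directory[key] == 0:
            # Not deleted immediately: eligible for reuse for USRDELAY.
            self.timeout_queue.add(key)
```

A reuse from the timeout queue avoids the costly call to the external security manager, which is why nonzero USRDELAY generally performs better than USRDELAY=0 for remote users.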
VTAM statistics
The “peak RPLs posted” includes only the receive-any RPLs defined by the
RAPOOL system initialization parameter. In non-HPO systems, the value shown
can be larger than the value specified for RAPOOL, because CICS reissues each
receive-any request as soon as the input message associated with the posted RPL
has been disposed of. VTAM may well cause this reissued receive-any RPL to be
posted during the current dispatch of terminal control. While this does not
necessarily indicate a performance problem, a number much higher than the
RAPOOL value may merit investigation.
In addition to indicating whether the value for the RAPOOL system initialization
parameter is large enough, you can also use the “maximum number of RPLs
posted” statistic (A03RPLX) to determine other information. This depends upon
whether your MVS system has HPO or not.
For HPO, RAPOOL(A,B) allows the user to tune the active count (B). The size of
the pool (A) should depend on the rate at which the posted RPLs are processed.
The active count (B) has to be able to satisfy VTAM at any given time, and is
dependent on the inbound message rate for receive-any requests.
Here is an example to illustrate the differences for an HPO and a non-HPO system.
Suppose two similar CICS executions use a RAPOOL value of 2 for both runs. The
number of RPLs posted in the MVS/HPO run is 2, while the MVS/non-HPO run
is 31. This difference is better understood when we look at the next item in the
statistics.
This item is not printed if the maximum number of RPLs posted is zero. In our
example, let us say that the MVS/HPO system reached the maximum 495 times.
The non-HPO MVS system reached the maximum of 31 only once. You might
deduce from this that the pool is probably too small (RAPOOL=2) for the HPO
system and it needs to be increased. An appreciable increase in the RAPOOL value,
from 2 to, say, 6 or more, should be tried. As you can see from the example given
below, the RAPOOL value was increased to 8 and the maximum was reached only
16 times:
MAXIMUM NUMBER OF RPLS POSTED 8
NUMBER OF TIMES REACHED MAXIMUM 16
In a non-HPO system, these two statistics are less useful, except that, if the
maximum number of RPLs posted is less than RAPOOL, RAPOOL can be reduced,
thereby saving virtual storage.
VTAM SOS simply means that a CICS request for service from VTAM was rejected
with a VTAM sense code indicating that VTAM was unable to acquire the storage
required to service the request. VTAM does not give any further information to
CICS, such as what storage it was unable to acquire.
This situation most commonly arises at network startup or shutdown when CICS
is trying to schedule requests concurrently, to a larger number of terminals than
during normal execution. If the count is not very high, it is probably not worth
tracking down. In any case, CICS automatically retries the failing requests later on.
If your network is growing, however, you should monitor this statistic and, if the
count is starting to increase, you should take action. Use D NET,BFRUSE to check
if VTAM is short on storage in its own region and increase VTAM allocations
accordingly if this is required.
The maximum value for this statistic is 99, at which time a message is sent to the
console and the counter is reset to zero. However, VTAM controls its own buffers
and gives you a facility to monitor buffer usage.
For more information, see the CICS statistics tables on page 500.
Dump statistics
Both transaction and system dumps are very expensive; their causes should be
thoroughly investigated and eliminated.
For more information, see the CICS statistics tables on page 373.
Enqueue statistics
The enqueue domain supports the CICS recovery manager. Enqueue statistics
contain the global data collected by the enqueue domain for enqueue requests.
Waiting for an enqueue on a resource can add significant delays in the execution of
a transaction. The enqueue statistics allow you to assess the impact of waiting for
enqueues in the system and the impact of retained enqueues on waiters. Both the
current activity and the activity since the last reset are available.
For more information, see the CICS statistics tables on page 378.
Transaction statistics
Use these statistics to find out which transactions (if any) had storage violations.
It is also possible to use these statistics for capacity planning purposes. But
remember that many systems experience increasing cost per transaction as well as
an increasing transaction rate.
For more information, see the CICS statistics tables on page 484.
Program statistics
“Average fetch time” is an indication of how long it actually takes MVS to perform
a load from the partitioned data set in the RPL concatenation into CICS managed
storage.
For each RPL offset, the ratio “Program size” / “Average fetch time” is an
indication of the byte transfer rate during loads from a particular partitioned data
set. A comparison of these values may help you to detect bad channel loading or
file layout problems.
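As a sketch (names illustrative), the per-RPL-offset transfer rate is simply:

```python
def fetch_rate(program_size_bytes, average_fetch_time_s):
    """Byte transfer rate ('Program size' / 'Average fetch time') for
    loads from one partitioned data set in the RPL concatenation."""
    return program_size_bytes / average_fetch_time_s
```

A markedly lower rate for one RPL offset than for its peers is the signal to look at channel loading or file layout for that data set.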
For more information, see the CICS statistics tables on page 442.
For more information, see the CICS statistics tables on page 382.
File statistics
File statistics collect data about the number of application requests against your
data sets. They indicate the number of requests for each type of service that are
processed against each file. If the number of requests is totalled daily or for every
CICS execution, the activity for each file can be monitored for any changes that
occur. Note that these file statistics may have been reset during the day; to obtain a
figure of total activity against a particular file during the day, refer to the
DFHSTUP summary report. Other data pertaining to file statistics and special
processing conditions are also collected.
The wait-on-string number is only significant for files related to VSAM data sets.
For VSAM, STRNO=5 in the file definition means, for example, that CICS permits
five concurrent requests to this file. If a transaction issues a sixth request for the
same file, this request must wait until one of the other five requests has completed
(“wait-on-string”).
The number of strings associated with a file is specified through resource
definition online.
String number setting is important for performance. Too low a value causes
excessive waiting for strings by tasks and long response times. Too high a value
increases VSAM virtual storage requirements and therefore real storage usage.
However, as both virtual storage and real storage are above the 16MB line, this
may not be a problem. In general, the number of strings should be chosen to give
near zero “wait on string” count.
Note: Increasing the number of strings can increase the risk of deadlocks because
of greater transaction concurrency. To minimize the risk you should ensure
that applications follow the standards set in the CICS Application
Programming Guide.
A file can also “wait-on-string” for an LSRpool string. This type of wait is reflected
in the local shared resource pool statistics section (see “LSRPOOL statistics” on
page 56) and not in the file wait-on-string statistics.
If you are using data tables, an extra line appears in the DFHSTUP report for those
files defined as data tables. “Read requests”, “Source reads”, and “Storage
alloc(K)” are usually the numbers of most significance. For a CICS-maintained
table a comparison of the difference between “read requests” and “source reads”
with the total request activity reported in the preceding line shows how the
request traffic divides between using the table and using VSAM and thus indicates
the effectiveness of converting the file to a CMT. “Storage alloc(K)” is the total
storage allocated for the table and provides guidance to the cost of the table in
storage resource, bearing in mind the possibility of reducing LSRpool sizes in the
light of reduced VSAM accesses.
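The table-versus-VSAM comparison described above can be sketched as follows (names illustrative):

```python
def cmt_effectiveness(read_requests, source_reads):
    """Fraction of read requests satisfied from a CICS-maintained data
    table rather than by VSAM source reads, per the comparison above.
    A value near 1.0 indicates the conversion to a CMT is effective."""
    if read_requests == 0:
        return 0.0
    return (read_requests - source_reads) / read_requests
```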
Journalname statistics contain data about the use of each journal, as follows:
v The journal type (MVS logger, SMF or dummy)
v The log stream name for MVS logger journal types only
v The number of API journal writes
v The number of bytes written
v The number of flushes of journal data to log streams or SMF.
Note that the CICS system journalname and log stream statistics for the last three
items on this list are always zero. These entries appear in journalname statistics to
inform you of the journal type and log stream name for the special CICS system
journals.
For more information on journalname statistics, see the CICS statistics tables on
page 411.
Log stream statistics contain data about the use of each log stream including the
following:
v The number of write requests to the log stream
v The number of bytes written to the log stream
v The number of log stream buffer waits
v The number of log stream browse and delete requests.
For more information on log stream statistics, see the CICS statistics tables on page
413.
For more information on logging and journaling, see “Chapter 22. Logging and
journaling” on page 271.
For information about the SMF Type 88 records produced by the MVS system
logger, see the OS/390 MVS System Management Facilities (SMF) manual.
You should usually aim to have no requests that waited for a string. If you do,
then the use of MXT may be more effective.
When the last open file in an LSRPOOL is closed, the pool is deleted. The
subsequent unsolicited statistics (USS) LSRPOOL record written to SMF can be
mapped by the DFHA08DS DSECT.
The fields relating to the size and characteristics of the pool (maximum key length,
number of strings, number and size of buffers) may be those which you have
specified for the pool, through resource definition online command DEFINE
LSRPOOL. Alternatively, if some, or all, of the fields were not specified, the values
of the unspecified fields are those calculated by CICS when the pool is built.
You should consider specifying separate data and index buffers if you have not
already done so. This is especially true if index CI sizes are the same as data CI
sizes.
You should also consider using Hiperspace™ buffers while retaining a reasonable
number of address space buffers. Hiperspace buffers give the CPU savings of
keeping data in memory, exploiting the relatively cheap expanded storage, while
allowing central storage to be used more effectively.
For more information, see the CICS statistics tables on page 416.
For more information, see the CICS statistics tables on page 445.
For more information, see the CICS statistics tables on page 474.
The following section attempts to identify the kind of questions you may have in
connection with system performance, and describes how answers to those
questions can be derived from the statistics report. It also describes what actions, if
any, you can take to resolve ISC/IRC performance problems.
Some of the questions you may be seeking an answer to when looking at these
statistics are these:
v Are there enough sessions defined?
v Is the balance of contention winners to contention losers correct?
v Is there conflicting usage of APPC modegroups?
v What can be done if there are unusually high numbers, compared with normal
or expected numbers, in the statistics report?
All the fields below are specific to the mode group of the mode name given.
Table 3. ISC/IRC mode entries
Mode entry                                 Field      IRC   LU6.1   APPC
Mode name                                  A20MODE                   X
ATIs satisfied by contention losers        A20ES1                    X
ATIs satisfied by contention winners       A20ES2                    X
Peak contention losers                     A20E1HWM                  X
Peak contention winners                    A20E2HWM                  X
Peak outstanding allocates                 A20ESTAM                  X
Total specific allocate requests           A20ESTAS                  X
Total specific allocates satisfied         A20ESTAP                  X
Total generic allocates satisfied          A20ESTAG                  X
Queued allocates                           A20ESTAQ                  X
Failed link allocates                      A20ESTAF                  X
Failed allocates due to sessions in use    A20ESTAO                  X
Total bids sent                            A20ESBID                  X
Current bids in progress                   A20EBID                   X
Peak bids in progress                      A20EBHWM                  X
For more information about the usage of individual fields, see the CICS statistics
described under “ISC/IRC system and mode entries” on page 396.
Action: Consider making more sessions available with which to satisfy the allocate
requests. Enabling CICS to satisfy allocate requests without the need for queueing
may lead to improved performance.
However, be aware that increasing the number of sessions available on the front
end potentially increases the workload to the back end, and you should investigate
whether this is likely to cause a problem.
The following fields should give some guidance as to whether you need to
increase the number of contention winner sessions defined:
1. “Current bids in progress” (fields A14EBID and A20EBID) and “Peak bids in
progress” (fields A14EBHWM and A20EBHWM).
The value “Peak bids in progress” records the maximum number of bids in
progress at any one time during the statistics reporting period. “Current bids in
progress” is always less than or equal to the “Peak bids in progress”.
Ideally, these fields should be kept to zero. If either of these fields is high, it
indicates that CICS is having to perform a large number of bids for contention
loser sessions.
2. “Peak contention losers” (fields A14E1HWM and A20E1HWM).
If the number of “Peak contention losers” is equal to the number of contention
loser sessions available, the number of contention loser sessions defined may be
too low. Alternatively, for APPC/LU6.1, CICS could be using the contention
loser sessions to satisfy allocates due to a lack of contention winner sessions.
This should be tuned at the front-end in conjunction with winners at the
back-end. For details of how to specify the maximum number of sessions, and
the number of contention winners, see the information on defining SESSIONS
in the CICS Resource Definition Guide.
For APPC, consider making more contention winner sessions available, which
should reduce the need to use contention loser sessions to satisfy allocate requests
and, as a result, should also make more contention loser sessions available.
For LU6.1, consider making more SEND sessions available, which decreases the
need for LU6.1 to use primaries (RECEIVE sessions) to satisfy allocate requests.
For IRC, there is no bidding involved, as MRO can never use RECEIVE sessions to
satisfy allocate requests. If “Peak contention losers (RECEIVE)” is equal to the
number of contention loser (RECEIVE) sessions on an IRC link, the number of
allocates from the remote system is possibly higher than the receiving system can
cope with. In this situation, consider increasing the number of RECEIVE sessions
available.
Note: The usage of sessions depends on the direction of flow of work. Any tuning
which increases the number of winners available at the front-end should
also take into account whether this is appropriate for the direction of flow of
work over a whole period, such as a day, week, or month.
This could cause a problem for any specific allocate, because CICS initially tries to
satisfy a generic allocate from the first modegroup before trying other modegroups
in sequence.
(Figure: the group ISCGROUP in the CSD is installed in the CICS region; a
TCTME is created for MODEGRPX when a second user requires it.)
ISC persistent verification (PV) activity: if the number of “entries reused” in the
PV activity is low, and the “entries timed out” value is high, the PVDELAY system
initialization parameter should be increased. The “average reuse time between
entries” gives some indication of a suitable value for the PVDELAY system
initialization parameter.
For more information, see the CICS statistics tables on page 410.
|
| Coupling facility data tables server statistics
| Coupling facility data tables server statistics are provided by the AXM page pool
| management routines for the pools AXMPGANY and AXMPGLOW. For more
| information, see “Appendix C. Coupling facility data tables server statistics” on
| page 509.
|
| Named counter sequence number server statistics
| Named counter sequence number server statistics are provided by the AXM page
| pool management routines for the pools AXMPGANY and AXMPGLOW. For more
| information, see “Appendix D. Named counter sequence number server” on
| page 515.
Note: Statistics records and some journaling records are also written to the SMF
data set as type 110 records. You might find it particularly useful to process
the statistics records and the monitoring records together, because statistics
provide resource and system information that is complementary to the
transaction data produced by CICS monitoring. The contents of the statistics
fields, and the procedure for processing them, are described in
“Appendix A. CICS statistics tables” on page 345.
Monitoring data is useful both for performance tuning and for charging your users
for the resources they use.
Performance class data provides detailed, resource-level data that can be used for
accounting, performance analysis, and capacity planning. This data contains
information relating to individual task resource usage, and is completed for each
task when the task terminates.
| If the monitoring performance class is also being recorded, the performance class
| record for the transaction includes the total elapsed time the transaction was
| delayed by a CICS system resource shortage. This is measured by the exception
| class and the number of exceptions encountered by the transaction. The exception
| class records can be linked to the performance class records either by the
| transaction sequence number or by the network unit-of-work id. For more
| information on the exception class records, see “Exception class data” on page 107.
CICS invokes the MVS System Resource Manager (SRM) macro SYSEVENT at the
end of every transaction to record the elapsed time of the transaction.
You can enable SYSEVENT class monitoring by coding the MNEVE=ON (together
with MN=ON) system initialization parameters. Alternatively, you can use either
the CEMT command (CEMT SET MONITOR ON EVENT) or EXEC CICS SET
MONITOR STATUS(ON) EVENTCLASS(EVENT).
If the SYSEVENT option is used, at the end of each transaction CICS issues a Type
55 (X'37') SYSEVENT macro. This records each transaction ID, the associated
terminal ID, and the elapsed time of each transaction. This information is
collected by the SRM and, depending on the Resource Measurement Facility
(RMF) options set, the output can be written to SMF data sets.
If you are running CICS with the MVS workload manager environment in goal
mode, the MVS workload manager provides transaction activity reporting,
which replaces the SYSEVENT class of monitoring.
The objective of using the CICS monitoring facility with RMF is to enable
transaction rates and internal response times to be monitored without incurring the
overhead of running the full CICS monitoring facility and associated reporting.
This approach may be useful when only transaction statistics are required, rather
than the very detailed information that the CICS monitoring facility produces. An
example of this is the monitoring of a production system where minimum
overhead is required.
For more information about how to use RMF, refer to the MVS Resource
Measurement Facility (RMF), Version 4.1.1 - Monitor I & II Reference and Users Guide.
If records are directed to SMF, refer to the OS/390 MVS System Management
Facilities (SMF) manual. The following example shows the additional parameters
that you need to add to your IEAICS member for two MRO CICS systems:
SUBSYS=ACIC,RPGN=100 /* CICS SYSTEM ACIC HAS REPORTING */
TRXNAME=CEMT,RPGN=101 /* GROUP OF 100 AND THERE ARE */
TRXNAME=USER,RPGN=102 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=103 /* SEPARATE TRANSACTIONS */
SUBSYS=BCIC,RPGN=200 /* CICS SYSTEM BCIC HAS REPORTING */
TRXNAME=CEMT,RPGN=201 /* GROUP OF 200 AND THERE ARE */
TRXNAME=USER,RPGN=202 /* THREE INDIVIDUAL GROUPS FOR */
TRXNAME=CSMI,RPGN=203 /* SEPARATE TRANSACTIONS */
Notes:
1. The reporting group (number 100) assigned to the ACIC subsystem reports on
all transactions in that system.
2. RMF reports on an individual transaction by name only if it is assigned a
unique reporting group. If multiple transactions are defined with one reporting
group, the name field is left blank in the RMF reports.
RMF operations
An RMF job must be started, and this includes the Monitor I session. The RMF job
should be started before initializing CICS. The RMF Monitor II session is started by
the command F RMF,S aa,MEMBER(xx), where ‘aa’ indicates alphabetic characters
| and ‘xx’ indicates alphanumeric characters.
|
| Using the CICS monitoring facility with Tivoli Performance Reporter for
| OS/390
| Tivoli Performance Reporter for OS/390 assists you in performance management
| and service-level management of a number of IBM products. The CICS
| Performance feature used by the Tivoli Performance Reporter provides reports for
| your use in analyzing the performance of CICS. See “Chapter 7. Tivoli Performance
| Reporter for OS/390” on page 113 for more information.
If you want to gather more performance class data than is provided at the
system-defined event monitoring points, you can code additional EMPs in your
application programs. At these points, you can add or change up to 16384 bytes of
user data in each performance record. Up to this maximum of 16384 bytes you can
have, for each ENTRYNAME qualifier, any combination of the following:
v Between 0 and 256 counters
v Between 0 and 256 clocks
v A single 8192-byte character string.
You could use these additional EMPs to count the number of times a certain event
occurs, or to time the interval between two events. If the performance class was
active when a transaction was started, but was not active when a user EMP was
issued, the operations defined in that user EMP would still execute on that
transaction’s performance class record.
User EMPs can use the EXEC CICS MONITOR command. For programming
information about this command, refer to the CICS Application Programming
Reference.
Additional EMPs are provided in some IBM program products, such as DL/I.
From CICS’s point of view, these are like any other user-defined EMP. EMPs in
user applications and in IBM program products are identified by a decimal
number. The numbers 1 through 199 are available for EMPs in user applications,
and the numbers from 200 through 255 are for use in IBM program products. The
numbers can be qualified with an ‘entryname’, so that you can use each number
more than once. For example, PROGA.1, PROGB.1, and PROGC.1, identify three
different EMPs because they have different entrynames.
For each user-defined EMP there must be a corresponding monitoring control table
(MCT) entry, which has the same identification number and entryname as the EMP
that it describes.
You do not have to assign entrynames and numbers to system-defined EMPs, and
you do not have to code MCT entries for them.
Here are some ideas about how you might make use of the CICS and user fields
provided with the CICS monitoring facility:
v If you want to time how long it takes to do a table lookup routine within an
application, code an EMP with, say, ID=50 just before the table lookup routine
and an EMP with ID=51 just after the routine. The system programmer codes a
TYPE=EMP operand in the MCT for ID=50 to start user clock 1. You also code a
TYPE=EMP operand for ID=51 to stop user clock 1. The application executes.
When EMP 50 is processed, user clock 1 is started. When EMP 51 is processed,
the clock is stopped.
v One user field could be used to accumulate an installation accounting unit. For
example, you might count different amounts for different types of transaction.
Or, in a browsing application, you might count 1 unit for each record scanned
and not selected, and 3 for each record selected.
You can also treat the fullword count fields as 32-bit flag fields to indicate
special situations, for example, out-of-line situations in the applications, operator
errors, and so on. CICS includes facilities to turn individual bits or groups of
bits on or off in these counts.
v The performance clocks can be used for accumulating the time taken for I/O,
DL/I scheduling, and so on. It usually includes any waiting for the transaction
to regain control after the requested operation has completed. Because the
periods are counted as well as added, you can get the average time waiting for
I/O as well as the total. If you want to highlight an unusually long individual
case, set a flag on in a user count as explained above.
v One use of the performance character string is for systems in which one
transaction ID is used for widely differing functions. The application can enter a
subsidiary ID into the string to indicate which particular variant of the
transaction applies in each case.
Some users have a single transaction ID so that all user input is routed through
a common prologue program for security checking, for example. In this case, it
DFHMCT TYPE=EMP
There must be a DFHMCT TYPE=EMP macro definition for every user-coded EMP.
This macro has an ID operand, whose value must be made up of the
ENTRYNAME and POINT values specified on the EXEC CICS MONITOR
command. The PERFORM operand of the DFHMCT TYPE=EMP macro tells CICS
which user count fields, user clocks, and character values to expect at the
identified user EMP, and what operations to perform on them.
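As an illustration only, the table-lookup timing example described earlier might be coded as follows. The entryname PROGA and user clock 1 are arbitrary choices, and the operand coding is a sketch rather than a complete, verified definition; see the CICS Resource Definition Guide for the exact DFHMCT syntax.

```
* MCT (sketch): EMP 50 starts user clock 1, EMP 51 stops it
         DFHMCT TYPE=EMP,ID=(PROGA.50),PERFORM=SCLOCK(1)
         DFHMCT TYPE=EMP,ID=(PROGA.51),PERFORM=PCLOCK(1)
*
* Application (sketch): invoke the EMPs around the lookup routine
         EXEC CICS MONITOR POINT(50) ENTRYNAME('PROGA')
*        ... table lookup routine ...
         EXEC CICS MONITOR POINT(51) ENTRYNAME('PROGA')
```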
DFHMCT TYPE=RECORD
The DFHMCT TYPE=RECORD macro allows you to exclude specific system-defined
performance data from a CICS run. (Each performance monitoring record is
| approximately 1288 bytes long, without taking into account any user data that may
be added, or any excluded fields.)
Each field of the performance data that is gathered at the system-defined EMPs
belongs to a group of fields that has a group identifier. Each performance data
field also has its own numeric identifier that is unique within the group identifier.
For example, the transaction sequence number field in a performance record
belongs to the group DFHTASK, and has the numeric identifier ‘031’. Using these
identifiers, you can exclude specific fields or groups of fields, and reduce the size
of the performance records.
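For example, excluding a whole group of fields and then re-including a single field within it might be coded along the following lines. The operand coding here is illustrative only; check the CICS Resource Definition Guide for the exact EXCLUDE and INCLUDE syntax.

```
* Exclude all file-control performance fields, then
* re-include only the file I/O wait time field
* (group DFHFILE, numeric identifier 063)
         DFHMCT TYPE=RECORD,EXCLUDE=(DFHFILE)
         DFHMCT TYPE=RECORD,INCLUDE=(DFHFILE,63)
```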
Full details of the MCT are provided in the CICS Resource Definition Guide, and
examples of MCT coding are included with the programming information in the
CICS Customization Guide.
These samples show how to use the EXCLUDE and INCLUDE operands to reduce
the size of the performance class record in order to reduce the volume of data
When CICS is initialized, you switch the monitoring facility on by specifying the
system initialization parameter MN=ON. MN=OFF is the default setting. You can
select the classes of monitoring data you want to be collected using the MNPER,
MNEXC, and MNEVE system initialization parameters. You can request the
collection of any combination of performance class data, exception class data, and
SYSEVENT data. The class settings can be changed whether monitoring itself is
ON or OFF. For guidance about system initialization parameters, refer to the CICS
System Definition Guide.
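For example, a system initialization override that switches monitoring on and requests performance class and exception class data, but no SYSEVENT data, might read:

```
MN=ON,
MNPER=ON,
MNEXC=ON,
MNEVE=OFF,
```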
When CICS is running, you can control the monitoring facility dynamically. Just as
at CICS initialization, you can switch monitoring on or off, and you can change the
classes of monitoring data that are being collected. There are two ways of doing
this:
1. You can use the master terminal CEMT INQ|SET MONITOR command, which
is described in the CICS Supplied Transactions manual.
2. You can use the EXEC CICS INQUIRE and SET MONITOR commands;
programming information about these is in the CICS System Programming
Reference.
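For example, either of the following switches performance class monitoring on in a running region. The option coding is sketched here, not a complete reference; see the CICS Supplied Transactions manual and the CICS System Programming Reference for the full option lists and CVDA values.

```
CEMT SET MONITOR ON PERF

EXEC CICS SET MONITOR STATUS(ON) PERFCLASS(PERF)
```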
If you activate a class of monitoring data in the middle of a run, the data for that
class becomes available only for transactions that are started thereafter. You cannot
change the classes of monitoring data collected for a transaction after it has started.
It is often preferable, particularly for long-running transactions, to start all classes
of monitoring data at CICS initialization.
End of Product-sensitive programming interface
See “Chapter 7. Tivoli Performance Reporter for OS/390” on page 113 for more
information.
Or, instead, you may want to write your own application program to process
output from the CICS monitoring facility. The CICS Customization Guide gives
programming information about the format of this output.
CICS provides a sample program, DFH$MOLS, which reads, formats, and prints
monitoring data. It is intended as a sample program that you can use as a skeleton
if you need to write your own program to analyze the data set. Comments within
the program may help you if you want to do your own processing of CICS
monitoring facility output. See the CICS Operations and Utilities Guide for further
information on the DFH$MOLS program.
End of Product-sensitive programming interface
All of the exception class data and all of the system-defined performance class data
that can be produced by CICS monitoring is listed below. Each of the data fields is
presented as a field description, followed by an explanation of the contents. The
field description has the format shown in Figure 4, which is taken from the
performance data group DFHTASK.
Note: References in Figure 4 to the associated dictionary entries apply only to the
performance class data descriptions. Exception class data is not defined in
the dictionary record.
Neither the 32-bit timer component of a clock nor its 24-bit period count are
protected against wraparound. The timer capacity is about 18 hours, and the
period count runs modulo 16 777 216.
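Because each clock carries both an accumulated time (the 32-bit timer) and a count of the periods that contributed to it (the 24-bit period count), an average period can be derived; as a sketch:

```
average time per period = 32-bit timer value / 24-bit period count
```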
Note: All times produced in the offline reports are in GMT (Greenwich Mean
Time) not local time. Times produced by online reporting can be expressed
in either GMT or local time.
The CMF performance class record also provides a more detailed breakdown of the
transaction suspend (wait) time into separate data fields. These include:
v Terminal I/O wait time
v File I/O wait time
v RLS File I/O wait time
v Journal I/O wait time
v Temporary Storage I/O wait time
| v Shared Temporary Storage I/O wait time
v Inter-Region I/O wait time
v Transient Data I/O wait time
v LU 6.1 I/O wait time
v LU 6.2 I/O wait time
v FEPI suspend time
| v Local ENQ delay time
| v Global ENQ delay time
| v RRMS/MVS Indoubt wait time
| v Socket I/O wait time
v RMI suspend time
v Lock Manager delay time
v EXEC CICS WAIT EXTERNAL wait time
v EXEC CICS WAITCICS and WAIT EVENT wait time
v Interval Control delay time
v “Dispatchable Wait” wait time
| v IMS(DBCTL) wait time
| v DB2 ready queue wait time
| v DB2 connection wait time
| v DB2 wait time
| v CFDT server syncpoint wait time
| v Syncpoint delay time
| v CICS BTS run process/activity synchronous wait time
| v CICS MAXOPENTCBS delay time
| v JVM suspend time
Figure 5 on page 76 shows the relationship of dispatch time, suspend time, and
CPU time with the response time.
Figure 5. Response time relationships: the interval from task start (START) to
task stop (STOP) comprises suspend time (which includes the first dispatch
delay and the dispatch waits) and dispatch time, of which CPU time is a part.
Improvements to the CMF suspend time and wait time measurements allow you to
perform various calculations on the suspend time accurately. For example, the
"Total I/O Wait Time" can be calculated as follows:
The "other wait time" (that is, uncaptured wait (suspend) time) can be calculated as
follows:
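The formulas themselves are not reproduced in this excerpt. As a hedged sketch consistent with the wait-time field list above:

```
Total I/O wait time  =  terminal I/O wait + file I/O wait
                      + RLS file I/O wait + journal I/O wait
                      + temporary storage I/O wait
                      + shared temporary storage I/O wait
                      + inter-region I/O wait
                      + transient data I/O wait
                      + LU 6.1 I/O wait + LU 6.2 I/O wait
                      + FEPI suspend time + socket I/O wait

Other (uncaptured) wait time  =  suspend time
                               - (total I/O wait time
                                  + the other captured wait
                                    and delay fields)
```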
Note: The First Dispatch Delay performance class data field includes the MXT and
TRANCLASS First Dispatch Delay fields.
(Figure: response time from task start (START) to task stop (STOP), made up of
suspend time and dispatch time; the dispatch time includes CPU time and program
load (PCload) time, with dispatch waits separating the dispatched periods.)
Figure 8 shows the relationship between the RMI elapsed time and the suspend
| time (fields 170 and 171).
| Note: In CICS Transaction Server for OS/390 Release 3, or later, the DB2 wait, the
| DB2 connection wait, and the DB2 readyq wait time fields as well as the
| IMS wait time field are included in the RMI suspend time.
| Care must be taken when using the JVM elapsed time (group name DFHTASK,
| field id: 253) and JVM suspend time (group name DFHTASK, field id: 254) fields in
| any calculation with other CMF timing fields, because time recorded in other
| CMF timing fields in the performance class record is likely to be
| double-accounted within the JVM time fields. For example, if a Java application
| program invoked by a transaction issues a read file (non-RLS) request using the
| Java API for CICS (JCICS) classes, the file I/O wait time is included in the
| file I/O wait time field (group name DFHFILE, field id: 063), in the transaction
| suspend time field (group name DFHTASK, field id: 014), and in the JVM suspend
| time field.
| The JVM elapsed and suspend time fields are best evaluated from the overall
| transaction performance view and their relationship with the transaction response
| time, transaction dispatch time, and transaction suspend time. The performance
| class data also includes the amount of processor (CPU) time that a transaction used
| whilst in a JVM on a CICS J8 mode TCB in the J8CPUT field (group name:
| DFHTASK, field id: 260).
| Note: The number of Java API for CICS (JCICS) requests issued by the user task is
| included in the CICS OO foundation class request count field (group name:
| DFHCICS, field id: 025).
Figure 8. The RMI elapsed time spans alternating dispatch-and-CPU periods and
suspend periods, separated by dispatch waits.
Figure 9 shows the relationship between the syncpoint elapsed time (field 173) and
the suspend time (field 14).
Note: All references to “Start time” and “Stop time” in the calculations below refer
to the middle 4 bytes of each 8-byte start/stop time field. Bit 51 of Start time
or Stop time represents a unit of 16 microseconds.
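Taking the note above literally, a response-time calculation is a sketch like the following (it assumes START is the companion TYPE-T field to the STOP field, 006, described later):

```
start = middle 4 bytes of the 8-byte START field
stop  = middle 4 bytes of the 8-byte STOP field (006)

response time = (stop - start) units, where 1 unit = 16 microseconds
```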
During the life of a user task, CICS measures, calculates, and accumulates the
storage occupancy at the following points:
v Before GETMAIN increases current user-storage values
v Before FREEMAIN reduces current user-storage values
v Just before the performance record is moved to the buffer.
Figure 10. Storage occupancy during the life of a task, from task start (START)
to task stop (STOP): the storage level rises at each GETMAIN (G) and falls at
each FREEMAIN (F), and the dotted line in the figure represents the average
storage occupancy.
Note: On an XCTL event, the program storage currently in use is also decremented
by the size of the program issuing the XCTL, because the program is no
longer required.
Figure 11 on page 83 shows the relationships between the “high-water mark” data
fields that contain the maximum amounts of program storage in use by the user
task. Field PCSTGHWM (field ID 087) contains the maximum amount of program
storage in use by the task both above and below the 16MB line. Fields PC31AHWM
(139) and PC24BHWM (108) are subsets of PCSTGHWM, containing the maximum
amounts in use above and below the 16MB line, respectively. Further subset-fields
contain the maximum amounts of storage in use by the task in each of the CICS
dynamic storage areas (DSAs).
Note: The totaled values of all the subsets in a superset may not necessarily equate
to the value of the superset; for example, the value of PC31AHWM plus the
value of PC24BHWM may not equal the value of PCSTGHWM. This is
because the peaks in the different types of program storage acquired by the
user task do not necessarily occur simultaneously.
The “high-water mark” fields are described in detail in “User storage fields in
group DFHSTOR:” on page 92. For information about the program storage fields,
see “Program storage fields in group DFHSTOR:” on page 94.
Figure 11. Relationships between the “high-water mark” program storage data
fields, above and below the 16MB line
Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
006 (TYPE-T, ‘STOP’, 8 BYTES)
Finish time of measurement interval. This is either the time at which the user
task was detached, or the time at which data recording was completed in
support of the MCT user event monitoring point DELIVER option or the
monitoring options MNCONV, MNSYNC or FREQUENCY. For more
information, see “Clocks and time stamps” on page 73.
Note: Response Time = STOP − START. For more information, see “A note
about response time” on page 75.
| 025 (TYPE-A, ‘CFCAPICT’, 4 BYTES)
| Number of CICS OO foundation class requests, including the Java API for
| CICS (JCICS) classes, issued by the user task.
089 (TYPE-C, ‘USERID’, 8 BYTES)
User identification at task creation. This can also be the remote user identifier
for a task created as the result of receiving an ATTACH request across an MRO
or APPC link with attach-time security enabled.
103 (TYPE-S, ‘EXWTTIME’, 8 BYTES)
Accumulated data for exception conditions. The 32-bit clock contains the total
elapsed time for which the user waited on exception conditions. The 24-bit
period count equals the number of exception conditions that have occurred for
this task. For more information, see “Exception class data” on page 107.
Note: The performance class data field ‘exception wait time’ will be updated
when exception conditions are encountered even when the exception
class is inactive.
112 (TYPE-C, ‘RTYPE’, 4 BYTES)
Performance record type (low-order byte-3):
C Record output for a terminal converse
D Record output for a user EMP DELIVER request
F Record output for a long-running transaction
S Record output for a syncpoint
T Record output for a task termination.
130 (TYPE-C, ‘RSYSID’, 4 bytes)
The name (sysid) of the remote system to which this transaction was routed
either statically or dynamically.
This field also includes the connection name (sysid) of the remote system to
which this transaction was routed when using the CRTE routing transaction.
The field will be null for those CRTE transactions which establish or cancel the
transaction routing session.
Note: If the transaction was not routed or was routed locally, this field is set to
null. Also see the program name (field 71).
| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 187 (TYPE-S, ‘DB2RDYQW’, 8 bytes)
| The elapsed time in which the user task waited for a DB2 thread to become
| available.
| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 188 (TYPE-S, ‘DB2CONWT’, 8 bytes)
| The elapsed time in which the user task waited for a CICS DB2 subtask to
| become available.
| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 189 (TYPE-S, ‘DB2WAIT’, 8 bytes)
| The elapsed time in which the user task waited for DB2 to service the DB2
| EXEC SQL and IFI requests issued by the user task.
| For more information, see “Clocks and time stamps” on page 73, and “A note
| about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 157 (TYPE-A, ‘SZALLCTO’, 4 bytes)
Number of times the user task timed out while waiting to allocate a
conversation.
158 (TYPE-A, ‘SZRCVTO’, 4 bytes)
Number of times the user task timed out while waiting to receive data.
159 (TYPE-A, ‘SZTOTCT’, 4 bytes)
Total number of all FEPI API and SPI requests made by the user task.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 070 (TYPE-A, ‘FCAMCT’, 4 BYTES)
Number of times the user task invoked file access-method interfaces. This
number excludes requests for OPEN and CLOSE.
How EXEC CICS file commands correspond to file control monitoring fields is
shown in Table 6.
Table 6. EXEC CICS file commands related to file control monitoring fields
EXEC CICS command Monitoring fields
READ FCGETCT and FCTOTCT
READ UPDATE FCGETCT and FCTOTCT
DELETE (after READ UPDATE) FCDELCT and FCTOTCT
DELETE (with RIDFLD) FCDELCT and FCTOTCT
REWRITE FCPUTCT and FCTOTCT
WRITE FCADDCT and FCTOTCT
STARTBR FCTOTCT
READNEXT FCBRWCT and FCTOTCT
READNEXT UPDATE FCBRWCT and FCTOTCT
READPREV FCBRWCT and FCTOTCT
READPREV UPDATE FCBRWCT and FCTOTCT
ENDBR FCTOTCT
RESETBR FCTOTCT
UNLOCK FCTOTCT
Note: The number of STARTBR, ENDBR, RESETBR, and UNLOCK file control
requests can be calculated by subtracting the file request counts,
FCGETCT, FCPUTCT, FCBRWCT, FCADDCT, and FCDELCT from the
total file request count, FCTOTCT.
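The note above can be written as a single expression:

```
STARTBR + ENDBR + RESETBR + UNLOCK count
    = FCTOTCT - (FCGETCT + FCPUTCT + FCBRWCT + FCADDCT + FCDELCT)
```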
174 (TYPE-S, ‘RLSWAIT’, 8 BYTES)
| Elapsed time in which the user task waited for RLS file I/O. For more
| information, see “Clocks and time stamps” on page 73, and “A note about wait
| (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 175 (TYPE-S, ‘RLSCPUT’, 8 BYTES)
The RLS File Request CPU (SRB) time field (RLSCPUT) is the SRB CPU time
this transaction spent processing RLS file requests. This field should be added
to the transaction CPU time field (USRCPUT) when considering the
measurement of the total CPU time consumed by a transaction. Also, this field
cannot be considered a subset of any other single CMF field (including
RLSWAIT). This is because the RLS file requests execute asynchronously
under an MVS SRB which can be running in parallel with the requesting
transaction. It is also possible for the SRB to complete its processing before the
requesting transaction waits for the RLS file request to complete.
Note: This clock field could contain a CPU time of zero with a count of greater
than zero. This is because the CMF timing granularity is measured in 16
microsecond units and the RLS file request(s) may complete in less than
that time unit.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
| 058 (TYPE-A, ‘JNLWRTCT’, 4 BYTES)
Number of journal write requests issued by the user task.
172 (TYPE-A, ‘LOGWRTCT’, 4 BYTES)
Number of CICS log stream write requests issued by the user task.
For a dynamic program link (DPL) mirror transaction, this field contains the
initial program name specified in the dynamic program LINK request. DPL
mirror transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.
For an ONC RPC or WEB alias transaction, this field contains the initial
application program name invoked by the alias transaction. ONC RPC or WEB
alias transactions can be identified using byte 1 of the transaction flags,
TRANFLAG (164), field.
072 (TYPE-A, ‘PCLURMCT’, 4 BYTES)
Number of program LINK URM (user-replaceable module) requests issued by,
or on behalf of, the user task.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 242 (TYPE-A, ‘SOBYENCT’, 4 BYTES)
| The number of bytes encrypted by the secure sockets layer for the user task.
| 243 (TYPE-A, ‘SOBYDECT’, 4 BYTES)
| The number of bytes decrypted by the secure sockets layer for the user task.
| 244 (TYPE-C, ‘CLIPADDR’, 16 BYTES)
| The client IP address (nnn.nnn.nnn.nnn)
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 196 (TYPE-S, ’SYNCDLY’, 8 BYTES)
| The elapsed time in which the user task waited for a syncpoint request to be
| issued by its parent transaction. The user task was executing as a result of the
| parent task issuing a CICS BTS run-process or run-activity request to execute a
| process or activity synchronously. For more information, see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014)
| field.
If the originating terminal is VTAM across an ISC APPC or IRC link, the
NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is networkid.generic_applid.
derived from the originating system. That is, the name is a 17-byte LU name
consisting of:
v An 8-byte eye-catcher set to ‘DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.
v A 4-byte field containing the address space id (ASID) in which the client
program is running. This field contains the 4-character EBCDIC
representation of the 2-byte hex address space id.
098 (TYPE-C, ‘NETUOWSX’, 8 BYTES)
Name by which the network unit of work id is known within the originating
system. This name is assigned at attach time using either an STCK-derived
token (when the task is attached to a local terminal), or the network unit of
work id passed as part of an ISC APPC or IRC attach header.
| The first six bytes of this field are a binary value derived from the system
| clock of the originating system and which can wrap round at intervals of
| several months.
The last two bytes of this field are for the period count. These may change
during the life of the task as a result of syncpoint activity.
Note: When using MRO or ISC, the NETUOWSX field must be combined with
the NETUOWPX field (097) to uniquely identify a task, because
NETUOWSX is unique only to the originating CICS system.
102 (TYPE-S, ‘DISPWTT’, 8 BYTES)
Elapsed time for which the user task waited for redispatch. This is the
aggregate of the wait times between each event completion and user-task
redispatch.
Note: This field does not include the elapsed time spent waiting for first
dispatch. This field is a component of the task suspend time, SUSPTIME
(014), field.
109 (TYPE-C, ‘TRANPRI’, 4 BYTES)
Transaction priority when monitoring of the task was initialized (low-order
byte-3).
| Note: This field is a subset of the task suspend time, SUSPTIME (014), field.
| 124 (TYPE-C, ‘BRDGTRAN’, 4 BYTES)
Bridge listener transaction identifier.
125 (TYPE-S, ‘DSPDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field. For more information, see “Clocks and time stamps” on page 73.
126 (TYPE-S, ‘TCLDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set for this transaction’s transaction class, TCLSNAME (166), being
reached. For more information, see “Clocks and time stamps” on page 73.
Note: This field is a subset of the first dispatch delay, DSPDELAY (125), field.
127 (TYPE-S, ‘MXTDELAY’, 8 BYTES)
The elapsed time waiting for first dispatch which was delayed because of the
limits set by the system parameter, MXT, being reached.
Note: The field is a subset of the first dispatch delay, DSPDELAY (125), field.
128 (TYPE-S, ‘LMDELAY’, 8 BYTES)
The elapsed time that the user task waited to acquire a lock on a resource. A
user task cannot explicitly acquire a lock on a resource, but many CICS
modules lock resources on behalf of user tasks using the CICS lock manager
(LM) domain.
For more information about CICS lock manager, see CICS Problem Determination
Guide.
For information about times, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
129 (TYPE-S, ‘ENQDELAY’, 8 BYTES)
The elapsed time waiting for a CICS task control local enqueue. For more
information, see “Clocks and time stamps” on page 73.
Note: This field is a subset of the task suspend time, SUSPTIME (014), field.
132 (TYPE-C, ‘RMUOWID’, 8 BYTES)
The identifier of the unit of work (unit of recovery) for this task. Unit of
recovery values are used to synchronize recovery operations among CICS and
other resource managers, such as IMS and DB2.
163 (TYPE-C, ‘FCTYNAME’, 4 BYTES)
Transaction facility name. This field is null if the transaction is not associated
with a facility. The transaction facility type (if any) can be identified using byte
0 of the transaction flags, TRANFLAG, (164) field.
Note: The field is a subset of the task suspend time, SUSPTIME (014), field
and also the RMITIME (170) field.
181 (TYPE-S, ‘WTEXWAIT’, 8 BYTES)
The elapsed time that the user task waited for one or more ECBs, passed to
CICS by the user task using the EXEC CICS WAIT EXTERNAL ECBLIST
command, to be MVS POSTed. The user task can wait on one or more ECBs. If
it waits on more than one, it is dispatchable as soon as one of the ECBs is
posted. For more information, see “Clocks and time stamps” on page 73, and
“A note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, (SUSPTIME) (014),
field.
182 (TYPE-S, ‘WTCEWAIT’, 8 BYTES)
The elapsed time the user task waited for:
v One or more ECBs, passed to CICS by the user task using the EXEC CICS
WAITCICS ECBLIST command, to be MVS POSTed. The user task can wait
on one or more ECBs. If it waits on more than one, it is dispatchable as soon
as one of the ECBs is posted.
v Completion of an event initiated by the same or by another user task. The
event would normally be the posting, at the expiration time, of a timer-event
control area provided in response to an EXEC CICS POST command. The
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
183 (TYPE-S, ‘ICDELAY’, 8 BYTES)
The elapsed time the user task waited as a result of issuing either:
v An interval control EXEC CICS DELAY command for a specified time
interval, or
v A specified time of day to expire, or
v An interval control EXEC CICS RETRIEVE command with the WAIT option
specified. For more information, see “Clocks and time stamps” on page 73,
and “A note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
184 (TYPE-S, ‘GVUPWAIT’, 8 BYTES)
The elapsed time the user task waited as a result of giving up control to
another task. A user task can give up control in many ways. Some
examples are application programs that use one or more of the following
EXEC CICS API or SPI commands:
v Using the EXEC CICS SUSPEND command. This command causes the
issuing task to relinquish control to another task of higher or equal
dispatching priority. Control is returned to this task as soon as no other
task of a higher or equal priority is ready to be dispatched.
v Using the EXEC CICS CHANGE TASK PRIORITY command. This
command immediately changes the priority of the issuing task and
causes the task to give up control in order for it to be dispatched at its
new priority. The task is not redispatched until tasks of higher or equal
priority, and that are also dispatchable, have been dispatched.
v Using the EXEC CICS DELAY command with INTERVAL (0). This
command causes the issuing task to relinquish control to another task of
higher or equal dispatching priority. Control is returned to this task as
soon as no other task of a higher or equal priority is ready to be
dispatched.
v Using the EXEC CICS POST command requesting notification that a
specified time has expired. This command causes the issuing task to
relinquish control to give CICS the opportunity to post the time-event
control area.
v Using the EXEC CICS PERFORM RESETTIME command to synchronize
the CICS date and time with the MVS system date and time of day.
v Using the EXEC CICS START TRANSID command with the ATTACH
option.
For more information, see “Clocks and time stamps” on page 73, and “A
note about wait (suspend) times” on page 76.
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 195 (TYPE-S, ‘RUNTRWTT’, 8 BYTES)
| The elapsed time in which the user task waited for completion of a
| transaction that executed as a result of the user task issuing a CICS BTS
| run process, or run activity, request to execute a process, or activity,
| synchronously.
| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 248 (TYPE-A, ‘CHMODECT’, 4 BYTES)
| The number of CICS change-TCB modes issued by the user task.
| 249 (TYPE-S, ‘QRMODDLY’, 8 BYTES)
| The elapsed time for which the user task waited for redispatch on the
| CICS QR TCB. This is the aggregate of the wait times between each event
| completion and user-task redispatch.
| Note: This field does not include the elapsed time spent waiting for the
| first dispatch. The QRMODDLY field is a component of the task
| suspend time, SUSPTIME (014), field.
| 250 (TYPE-S, ‘MXTOTDLY’, 8 BYTES)
| The elapsed time in which the user task waited to obtain a CICS open
| TCB, because the region had reached the limit set by the system parameter,
| MAXOPENTCBS.
| For more information, see “Clocks and time stamps” on page 73, and “A
| note about wait (suspend) times” on page 76.
| Note: This field is a subset of the task suspend time, SUSPTIME (014),
| field.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 044 (TYPE-A, ‘TSGETCT’, 4 BYTES)
Number of temporary-storage GET requests issued by the user task.
046 (TYPE-A, ‘TSPUTACT’, 4 BYTES)
Number of PUT requests to auxiliary temporary storage issued by the user
task.
047 (TYPE-A, ‘TSPUTMCT’, 4 BYTES)
Number of PUT requests to main temporary storage issued by the user task.
092 (TYPE-A, ‘TSTOTCT’, 4 BYTES)
| Total number of temporary storage requests issued by the user task. This field
| is the sum of the temporary storage READQ (TSGETCT), WRITEQ AUX
| (TSPUTACT), WRITEQ MAIN (TSPUTMCT), and DELETEQ requests issued by
| the user task.
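Because TSTOTCT is the sum of the four request types, the DELETEQ count, which has no field of its own in this list, can be derived:

```
DELETEQ count = TSTOTCT - (TSGETCT + TSPUTACT + TSPUTMCT)
```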
Note: This field is a component of the task suspend time, SUSPTIME (014),
field.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 034 (TYPE-A, ‘TCMSGIN1’, 4 BYTES)
Number of messages received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
035 (TYPE-A, ‘TCMSGOU1’, 4 BYTES)
Number of messages sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
067 (TYPE-A, ‘TCMSGIN2’, 4 BYTES)
Number of messages received from the LUTYPE6.1 alternate terminal facilities
by the user task.
068 (TYPE-A, ‘TCMSGOU2’, 4 BYTES)
Number of messages sent to the LUTYPE6.1 alternate terminal facilities by the
user task.
069 (TYPE-A, ‘TCALLOCT’, 4 BYTES)
Number of TCTTE ALLOCATE requests issued by the user task for LUTYPE6.2
(APPC), LUTYPE6.1, and IRC sessions.
083 (TYPE-A, ‘TCCHRIN1’, 4 BYTES)
Number of characters received from the task’s principal terminal facility,
including LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
084 (TYPE-A, ‘TCCHROU1’, 4 BYTES)
Number of characters sent to the task’s principal terminal facility, including
LUTYPE6.1 and LUTYPE6.2 (APPC) but not MRO (IRC).
085 (TYPE-A, ‘TCCHRIN2’, 4 BYTES)
Number of characters received from the LUTYPE6.1 alternate terminal facilities
by the user task. (Not applicable to ISC APPC.)
086 (TYPE-A, ‘TCCHROU2’, 4 BYTES)
Number of characters sent to the LUTYPE6.1 alternate terminal facilities by the
user task. (Not applicable to ISC APPC.)
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 111 (TYPE-C, ‘LUNAME’, 8 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. If the task is executing in an application-owning or file-owning
region, the LUNAME is the generic applid of the originating connection for
MRO, LUTYPE6.1, and LUTYPE6.2 (APPC). The LUNAME is blank if the
originating connection is an external CICS interface (EXCI).
133 (TYPE-S, ‘LU61WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.1
connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.1 connections, but not the waits incurred due to
LUTYPE6.1 syncpoint flows. For more information see “Clocks and time
| stamps” on page 73, and “A note about wait (suspend) times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 134 (TYPE-S, ‘LU62WTT’, 8 BYTES)
The elapsed time for which the user task waited for I/O on a LUTYPE6.2
(APPC) connection or session. This time also includes the waits incurred for
conversations across LUTYPE6.2 (APPC) connections, but not the waits
incurred due to LUTYPE6.2 (APPC) syncpoint flows. For more information, see
“Clocks and time stamps” on page 73, and “A note about wait (suspend)
| times” on page 76.
| Note: This field is a component of the task suspend time, SUSPTIME (014),
| field.
| 135 (TYPE-A, ‘TCM62IN2’, 4 BYTES)
Number of messages received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
136 (TYPE-A, ‘TCM62OU2’, 4 BYTES)
Number of messages sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
137 (TYPE-A, ‘TCC62IN2’, 4 BYTES)
Number of characters received from the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
138 (TYPE-A, ‘TCC62OU2’, 4 BYTES)
Number of characters sent to the alternate facility by the user task for
LUTYPE6.2 (APPC) sessions.
165 (TYPE-A, ‘TERMINFO’, 4 BYTES)
Terminal or session information for this task’s principal facility as identified in
the ‘TERM’ field id 002. This field is null if the task is not associated with a
terminal or session facility.
Byte 0 Identifies whether this task is associated with a terminal or session.
This field can be set to one of the following values:
X'00' None
For a list of the typeterm definitions, see the CICS Resource Definition
Guide.
169 (TYPE-C, ‘TERMCNNM’, 4 BYTES)
Terminal session connection name. If the terminal facility associated with this
transaction is a session, this field is the name of the owning connection (sysid).
|
End of Product-sensitive programming interface
Exception records are produced after each of the following conditions encountered
by a transaction has been resolved:
v Wait for storage in the CDSA
v Wait for storage in the UDSA
v Wait for storage in the SDSA
v Wait for storage in the RDSA
v Wait for storage in the ECDSA
v Wait for storage in the EUDSA
v Wait for storage in the ESDSA
v Wait for storage in the ERDSA
v Wait for auxiliary temporary storage
v Wait for auxiliary temporary storage string
v Wait for auxiliary temporary storage buffer
| v Wait for coupling facility data tables locking (request) slot
| v Wait for coupling facility data tables non-locking (request) slot (With coupling
| facility data tables each CICS has a number of slots available for requests in the
| CF data table. When all available slots are in use, any further request must wait.)
v Wait for file buffer
v Wait for file string
| v Wait for LSRPOOL buffer
v Wait for LSRPOOL string
These records are fixed format. The format of these exception records is as follows:
MNEXCDS DSECT
EXCMNTRN DS CL4 TRANSACTION IDENTIFICATION
EXCMNTER DS XL4 TERMINAL IDENTIFICATION
EXCMNUSR DS CL8 USER IDENTIFICATION
EXCMNTST DS CL4 TRANSACTION START TYPE
EXCMNSTA DS XL8 EXCEPTION START TIME
EXCMNSTO DS XL8 EXCEPTION STOP TIME
EXCMNTNO DS PL4 TRANSACTION NUMBER
EXCMNTPR DS XL4 TRANSACTION PRIORITY
DS CL4 RESERVED
EXCMNLUN DS CL8 LUNAME
DS CL4 RESERVED
EXCMNEXN DS XL4 EXCEPTION NUMBER
EXCMNRTY DS CL8 EXCEPTION RESOURCE TYPE
EXCMNRID DS CL8 EXCEPTION RESOURCE ID
EXCMNTYP DS XL2 EXCEPTION TYPE
Note: The performance class exception wait time field, EXWTTIME (103), is a
calculation based on subtracting the start time of the exception
(EXCMNSTA) from the finish time of the exception (EXCMNSTO).
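As an illustration only (this is not part of the CICS monitoring interface), the fixed-format exception record above, and the EXWTTIME-style subtraction described in the note, can be sketched in Python. Treating the 8-byte start and stop times as store-clock (TOD) values, in which bit 51 represents one microsecond, is an assumption made for the sketch:

```python
import struct

# Field layout taken from the MNEXCDS DSECT above (82 bytes total).
# ">" selects big-endian, matching S/390 byte order.
EXC_HEADER = struct.Struct(">4s4s8s4s8s8s4s4s4s8s4s4s8s8s2s")

def parse_exception_record(data):
    """Split a fixed-format exception record into named fields."""
    (trn, ter, usr, tst, sta, sto, tno, tpr, _rsvd1,
     lun, _rsvd2, exn, rty, rid, typ) = EXC_HEADER.unpack_from(data)
    return {
        "EXCMNTRN": trn, "EXCMNTER": ter, "EXCMNUSR": usr,
        "EXCMNTST": tst, "EXCMNSTA": sta, "EXCMNSTO": sto,
        "EXCMNTNO": tno, "EXCMNTPR": tpr, "EXCMNLUN": lun,
        "EXCMNEXN": exn, "EXCMNRTY": rty, "EXCMNRID": rid,
        "EXCMNTYP": typ,
    }

def exception_wait_microseconds(rec):
    """EXWTTIME-style calculation: exception stop time minus start time.

    Assumes the 8-byte timestamps are TOD-clock values, where bit 51
    represents one microsecond, so the difference is shifted right
    12 bits to convert it to microseconds."""
    start = int.from_bytes(rec["EXCMNSTA"], "big")
    stop = int.from_bytes(rec["EXCMNSTO"], "big")
    return (stop - start) >> 12
```

The parser mirrors the DSECT offsets exactly, so any real mapping differences (for example, additional fields after EXCMNTYP) would need the format string adjusted.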
EXCMNTNO (TYPE-P, 4 BYTES)
Transaction identification number.
EXCMNTPR (TYPE-C, 4 BYTES)
Transaction priority when monitoring was initialized for the task (low-order
byte).
EXCMNLUN (TYPE-C, 8 BYTES)
VTAM logical unit name (if available) of the terminal associated with this
transaction. This field is nulls if the task is not associated with a terminal.
If the originating terminal is a VTAM device across an ISC APPC or IRC link,
the NETNAME is the networkid.LUname. If the terminal is non-VTAM, the
NETNAME is the networkid.generic_applid derived from the originating
system. If the originating connection is an external CICS interface (EXCI)
client, the name is a 17-byte LU name consisting of:
v An 8-byte eye-catcher set to ’DFHEXCIU’.
v A 1-byte field containing a period (.).
v A 4-byte field containing the MVSID, in characters, under which the client
program is running.
v A 4-byte field containing the address space ID (ASID), in characters, of the
client program.
EXCMNNSX (TYPE-C, 8 BYTES)
Network unit-of-work ID suffix. The first 6 bytes of this field are a binary
value derived from the clock of the originating system, wrapping round at
intervals of several months. The last two bytes of this field are for the period
| count. These may change during the life of the task as a result of syncpoint
| activity.
| Note: When using MRO or ISC, the EXCMNNSX field must be combined with
| the EXCMNNPX field to uniquely identify a task, because the
| EXCMNNSX field is unique only to the originating CICS system.
| EXCMNTRF (TYPE-C, 8 BYTES)
Transaction flags—a string of 64 bits used for signaling transaction definition
and status information:
Byte 0 Transaction facility identification
Bit 0 Transaction facility name = none
Bit 1 Transaction facility name = terminal
Bit 2 Transaction facility name = surrogate
Bit 3 Transaction facility name = destination
Bit 4 Transaction facility name = 3270 bridge
Bits 5–7
Reserved
Byte 1 Transaction identification information
Bit 0 System transaction
Bit 1 Mirror transaction
Bit 2 DPL mirror transaction
Bit 3 ONC RCP alias transaction
Bit 4 WEB alias transaction
Bit 5 3270 bridge transaction
| Bit 6 Reserved
| Bit 7 CICS BTS Run transaction
Byte 2 MVS Workload Manager information
Bit 0 Workload Manager report
Bit 1 Workload Manager notify, completion = yes
Bit 2 Workload Manager notify
Bits 3–7
Reserved
Byte 3 Transaction definition information
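As a sketch only (not part of the product interface), the transaction-facility byte of EXCMNTRF can be decoded in Python. Note that in this manual's notation bit 0 is the most significant bit of the byte, so "bit n" corresponds to the mask 0x80 shifted right n places:

```python
# Bit meanings for EXCMNTRF byte 0, as listed above.
FACILITY_BITS = {
    0: "none",
    1: "terminal",
    2: "surrogate",
    3: "destination",
    4: "3270 bridge",
}

def facility_names(flag_byte):
    """Return the facility names whose bits are set in byte 0,
    using IBM bit numbering (bit 0 = most significant bit)."""
    return [name for bit, name in FACILITY_BITS.items()
            if flag_byte & (0x80 >> bit)]
```

The same masking pattern applies to the transaction identification and Workload Manager bytes (bytes 1 and 2), with the bit tables substituted accordingly.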
The following table shows the values and relationships of the fields
EXCMNTYP, EXCMNRTY, and EXCMNRID.
Overview
Tivoli Performance Reporter for OS/390 is a DB2-based reporting system. It
processes utilization and throughput statistics written to log data sets by
computer systems, analyzes and stores the data in DB2, and presents it in a
variety of forms. Tivoli Performance Reporter consists of a base product with
several optional features that are used in systems management, as shown in
Table 9. Tivoli Performance Reporter for OS/390 uses Reporting Dialog/2 as
the OS/2® reporting feature.
Table 9. Tivoli Performance Reporter for OS/390 and optional features. The
base product can be extended with these features: CICS Performance, IMS
Performance, Network Performance, System Performance, Workstation
Performance, AS/400® Performance, Accounting, and Reporting Dialog/2.
The Tivoli Performance Reporter for OS/390 database can contain data from many
sources. For example, data from System Management Facilities (SMF), Resource
Measurement Facility (RMF), CICS, and Information Management System (IMS)
can be consolidated into a single report. In fact, you can define any non-standard
log data to Tivoli Performance Reporter for OS/390 and report on that data
together with data coming from the standard sources.
The Tivoli Performance Reporter for OS/390 CICS performance feature provides
reports for your use when analyzing the performance of CICS Transaction Server
for OS/390, and CICS/ESA, based on data from the CICS monitoring facility
(CMF) and, for CICS Transaction Server for OS/390, CICS statistics. These are
some of the areas that Tivoli Performance Reporter can report on:
The Tivoli Performance Reporter for OS/390 CICS performance feature collects
only the data required to meet CICS users’ needs. You can combine that data with
more data (called environment data), and present it in a variety of reports. Tivoli
Performance Reporter for OS/390 provides an administration dialog for
maintaining environment data. Figure 12 illustrates how data is organized for
presentation in Tivoli Performance Reporter for OS/390 reports.
Figure 12 (overview): the operating system writes system data to various logs;
the Performance Reporter CICS performance feature collects only the relevant
performance records; user-supplied environment data is maintained in
Performance Reporter tables in the Performance Reporter database; and the
required data is presented in report format.
The Tivoli Performance Reporter for OS/390 CICS performance feature processes
these records:
The following sections describe certain issues and concerns associated with
systems management and how you can use the Tivoli Performance Reporter for
OS/390 CICS performance feature.
START ---------------Response time------------------ FINISH
      --Suspend time--  --------Dispatch time-------
                            ----Service time---
If both the Tivoli Performance Reporter for OS/390 CICS performance feature’s
statistics component and the Performance Reporter System Performance feature’s
MVS component are installed and active, these reports are available for analyzing
transaction rates and processor use by CICS region:
v The CICS Transaction Processor Utilization, Monthly report shows monthly
averages for the dates you specify.
v The CICS Transaction Processor Utilization, Daily report shows daily averages
for the dates you specify.
Tivoli Performance Reporter for OS/390 produces several reports that can help
you analyze storage usage. For example, the CICS Dynamic Storage (DSA)
Usage report shows pagepool usage.
Use this report to start verifying that you are meeting service-level objectives. First,
verify that the values for average response time are acceptable. Then check that the
transaction rates do not exceed agreed-to limits. If a transaction is not receiving the
appropriate level of service, you must determine the cause of the delay.
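The verification sequence described above can be sketched as a simple check. The 0.5-second response objective and 100-per-second rate limit used here are illustrative assumptions, not values from this manual; substitute the objectives from your own service-level agreement:

```python
def check_service_level(avg_response_sec, tran_rate_per_sec,
                        max_response_sec=0.5, max_tran_rate=100.0):
    """Flag service-level problems from one report row.

    First verify that the average response time is acceptable, then
    check that the transaction rate does not exceed the agreed limit,
    mirroring the order suggested in the text."""
    problems = []
    if avg_response_sec > max_response_sec:
        problems.append("average response time exceeds objective")
    if tran_rate_per_sec > max_tran_rate:
        problems.append("transaction rate exceeds agreed limit")
    return problems
```

A row that passes both checks returns an empty list; anything returned indicates a transaction group whose cause of delay needs further investigation.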
Figure 16. Correlating a CICS performance-monitoring record with a DB2
accounting record (using the DB2 correlation token, QWHCTOKN)
If you match the NETNAME and UOWID fields in a CICS record to the DB2
token, you can create reports that show the DB2 activity caused by a CICS
transaction.
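The matching just described can be sketched as a simple join. The dictionary keys used here (netname, uowid, token, tranid) are illustrative names chosen for the sketch, not actual CMF or DB2 field names from a record-mapping macro:

```python
def db2_activity_by_transaction(cics_records, db2_records):
    """Group DB2 accounting records under the CICS transaction whose
    (netname, uowid) pair matches the token carried in the DB2 record,
    as described for QWHCTOKN above."""
    by_token = {}
    for rec in db2_records:
        by_token.setdefault(rec["token"], []).append(rec)
    report = {}
    for cics in cics_records:
        token = (cics["netname"], cics["uowid"])
        report[cics["tranid"]] = by_token.get(token, [])
    return report
```

In a real report the grouped DB2 rows would then be summarized (SQL call counts, elapsed times) per CICS transaction.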
The Tivoli Performance Reporter for OS/390 CICS performance feature creates
exception records for these incidents and exceptions:
v Wait for storage
v Wait for main temporary storage
v Wait for a file string
v Wait for a file buffer
v Wait for an auxiliary temporary storage string
v Wait for an auxiliary temporary storage buffer
v Transaction ABEND
v System ABEND
v Storage violations
v Short-of-storage conditions
v VTAM request rejections
v I/O errors on auxiliary temporary storage
v I/O errors on the intrapartition transient data set
v Autoinstall errors
v MXT reached
v DTB overflow
v Link errors for IRC and ISC
v Log stream buffer-full conditions
v CREAD and CWRITE fails (data space problems)
v Waits for a string in a local shared resource (LSR) pool (from A08BKTSW)
v Waits for a buffer in the LSR pool (from A09TBW)
v Errors writing to SMF
v No space on transient-data data set (from A11ANOSP)
v Waits for a transient-data string (from A11STNWT)
v Waits for a transient-data buffer (from A11ATNWT)
v Transaction restarts (from A02ATRCT)
v Maximum number of tasks in a class reached (CMXT) (from A15MXTM)
v Transmission errors (from A06TETE or AUSTETE).
CICS Incidents
DATE: '1995-09-20' to '1995-09-21'
Terminal
operator User Exception Exception
Sev Date Time ID ID ID description
--- ---------- -------- -------- -------- ------------------ ---------------------------
03 1995-09-20 15.42.03 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND AZTS
03 1995-09-21 00.00.00 SYSTEM TRANSACTION_ABEND CICS TRANSACTION ABEND APCT
03 1995-09-21 17.37.28 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL
03 1995-09-21 17.12.03 SYSTEM SHORT_OF_STORAGE CICS SOS IN PAGEPOOL
The CICS UOW Response Times report in Figure 18 shows an example of how
Tivoli Performance Reporter for OS/390 presents CICS unit-of-work response
times.
Adjusted
UOW UOW Response
start Tran CICS Program tran time
time ID ID name count (sec)
-------- ---- -------- -------- ----- --------
09.59.25 OP22 CICSPROD DFHAPRT 2 0.436
OP22 CICSPRDC OEPCPI22
...
Tivoli Performance Reporter report: CICS902
Figure 18. Tivoli Performance Reporter for OS/390 CICS UOW response times report
Monitoring availability
Users of CICS applications depend on the availability of several types of resources:
v Central site hardware and the operating system environment in which the CICS
region runs
v Network hardware, such as communication controllers, teleprocessing lines, and
terminals through which users access the CICS region
v CICS region
v Application programs and data. Application programs can be distributed among
several CICS regions.
In some cases, an application depends on the availability of many resources of the
same and of different types, so reporting on availability requires a complex
analysis of data from different sources. Tivoli Performance Reporter for OS/390
can help you, because all the data is in one database.
When running under goal mode in MVS 5.1.0 and later, CICS performance can be
reported in workload groups, service classes, and periods. These are a few
examples of Tivoli Performance Reporter reports for CICS in this environment.
Figure 20 shows how service classes were served by other service classes. This
report is available only when the MVS system is running in goal mode.
Figure 19. Example of an MVSPM response time breakdown, hourly trend
report. (The chart plots response time in seconds, from 0.00 to 2.50, against the
hours 8.00 through 18.00, showing how the Active, Ready, Idle, Lock wait,
I/O wait, Conv wait, Distr wait, Syspl wait, Timer wait, and Other wait states
contribute to the average response time.)
Service MVS Total Activ Ready Idle Lock I/O Conv Distr Local Netw Syspl Timer Other Misc
Workload class sysstate state state state wait wait wait wait wait wait wait wait wait wait
group /Period Ph ID (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%) (%)
-------- ---------- --- --------- ----- ----- ----- ----- ----- -- --- ----- ----- ----- ----- ----- ----- -----
CICS CICS-1 /1 BTE CA0 6.6 0.0 0.0 0.0 0.0 0.0 6.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 29.4 0.0 0.0 0.0 0.0 0.0 14.7 0.0 0.0 0.0 0.0 0.0 14.6 0.0
C90 3.8 0.4 1.3 1.5 0.0 0.2 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 13.3 0.1 0.5 0.5 0.0 0.1 7.2 0.0 0.0 0.0 0.0 0.0 4.9 0.0
/1 EXE CA0 16.0 0.1 0.2 0.1 0.0 15.5 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0
C80 14.9 0.1 0.1 0.1 0.0 3.7 0.0 0.0 0.0 0.0 0.0 0.0 11.0 0.0
C90 14.0 1.6 4.5 4.8 0.0 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- --- ----- ----- ----- ----- ----- ----- -----
* 14.9 0.6 1.6 1.7 0.0 7.4 0.0 0.0 0.0 0.0 0.0 0.0 3.7 0.0
IMS IMS-1 /1 EXE CA0 20.7 0.4 0.7 0.0 0.0 0.0 19.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C80 1.1 0.2 0.1 0.7 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C90 22.2 5.3 11.9 1.2 0.0 0.2 3.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
----- ----- ----- ----- ----- ----- ---- ----- ----- ----- ----- ----- ----- -----
* 14.7 2.0 4.2 0.6 0.0 0.1 7.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Tivoli Performance Reporter report: MVSPM73
Figure 21 shows how much the various transaction states contribute to the average
response time. This report is available when the MVS system is running in goal
mode and when the subsystem is CICS or IMS.
Figure 19 on page 120 shows the average transaction response time trend and how
the various transaction states contribute to it. (The sum of the different states adds
up to the average execution time. The difference between the response time and
the execution time is mainly made up of switch time, for example, the time the
transactions spend being routed to another region for processing). This report is
available when the MVS system is running in goal mode and when the subsystem
is CICS or IMS.
To help you migrate to goal-oriented workload management, you can run any
MVS image in a sysplex in compatibility mode, using the performance management
tuning methods of releases of MVS before MVS/ESA 5.1.
Notes:
1. If you do not want to use the MVS workload management facility, you should
review your MVS performance definitions to ensure that they are still
appropriate for CICS Transaction Server for OS/390 Release 3. To do this,
review parameters in the IEAICS and IEAIPS members of the MVS PARMLIB
library. For more information about these MVS performance definitions, see the
OS/390 MVS Initialization and Tuning Guide.
| 2. If you use CICSPlex SM to control dynamic routing in a CICSplex or BTS-plex,
| you can base its actions on the CICS response time goals of the CICS
transactions as defined to the MVS workload manager. See “Using
CICSPlex SM workload management” on page 134. For full details, see the
CICSPlex SM Managing Workloads manual.
The main benefit is that you no longer have to continually monitor and tune CICS
to achieve optimum performance. You can set your workload objectives in the
service definition and let the workload component of MVS manage the resources
and the workload to achieve your objectives.
The MVS workload manager produces performance reports that you can use to
establish reasonable performance goals and for capacity planning.
For MVS workload manager operation across the CICS task-related user exit
interface to other subsystems, such as DB2 and DBCTL, you need the appropriate
releases of these products.
For more information about requirements for MVS workload management see the
following manuals: MVS Planning: Workload Management, and MVS Planning:
Sysplex Manager.
Resource usage
The CICS function for MVS workload management incurs negligible impact on
CICS storage.
All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload manager are subject to the effects of workload management.
If the CICS workload involves non-CICS resource managers, such as DB2 and
DBCTL, CICS can pass information through the resource manager interface (RMI1)
to enable MVS workload manager to relate the part of the workload within the
non-CICS resource managers to the part of the workload within CICS.
CICS does not pass information across ISC links to relate the parts of the task
execution thread on either side of the ISC link. If you use tasks that communicate
across ISC links, you must define separate performance goals, and service classes,
for the parts of the task execution thread on each side of the ISC link. These rules
apply to ISC links that are:
v Within the same MVS image (so called “intrahost ISC”)
v Between MVS images in the same sysplex (perhaps for compatibility reasons)
1. The CICS interface modules that handle the communication between a task-related user exit and the resource manager are usually
referred to as the resource manager interface (RMI) or the task-related user exit (TRUE) interface.
Workload management also collects performance and delay data, which can be
used by reporting and monitoring products, such as the Resource Measurement
Facility (RMF), the Tivoli Performance Reporter for OS/390, or vendor products.
The service level administrator defines your installation’s performance goals and
monitoring data, based on business needs and current performance. The complete
definition of workloads and performance goals is called a service definition. You
may already have this kind of information in a service level agreement (SLA).
This information helps you to set realistic goals for running your CICS work when
you switch to goal mode. The reporting data produced by RMF reports:
v Is organized by service class
v Contains reasons for any delays that affect the response time for the service class
(for example, because of the actions of a resource manager or an I/O
subsystem).
Note: It does not matter what goal you specify, since it is not used in
compatibility mode, but it cannot be discretionary.
– Specify the name of the service class under the classification rules for the
CICS subsystem:
Subsystem Type . . . . . . : CICS
Default Service Class . . : CICSALL
v In your ICS member in SYS1.PARMLIB (IEAICSxx), specify:
SUBSYS=CICS,
SRVCLASS=CICSALL,RPGN=100
v Install the workload definition in the coupling facility.
v Activate the test service policy, either by using options provided by the WLM
ISPF application, or by issuing the following MVS command:
VARY WLM,POLICY=CICSTEST
You receive response time information about CICS transactions in the RMF
Monitor I Workload Activity Report under report performance group 100. For more
information about defining performance goals and the use of SRVCLASS, see the
MVS Planning: Workload Management manual.
If you have varying performance goals, you can define several service policies.
You can activate only one service policy at a time for the whole sysplex, and, when
appropriate, switch to another policy.
Defining workloads
A workload comprises units of work that share some common characteristics
that make it meaningful for an installation to manage or monitor them as a
group. For example, all CICS work, or all CICS order entry work, or all CICS
development work.
You can also create service classes for started tasks and JES, and can assign
resource groups to those service classes. You can use such service classes to
manage the workload associated with CICS as it starts up, but before CICS
transactions begin to run.
There is a default service class, called SYSOTHER. It is used for CICS transactions
for which MVS workload management cannot find a matching service class in the
classification rules—for example, if the couple data set becomes unavailable.
There is one set of classification rules for each service definition. The classification
rules apply to every service policy in the service definition; so there is one set of
rules for the sysplex.
You should use classification rules for every service class defined in your service
definition.
Classification rules categorize work into service classes and, optionally, report
classes, based on work qualifiers. You set up classification rules for each MVS
subsystem type that uses workload management. The work qualifiers that CICS
can use (and which identify CICS work requests to workload manager) are:
LU LU name
LUG LU name group
SI Subsystem instance (VTAM applid)
SIG Subsystem instance group
TN Transaction identifier
TNG Transaction identifier group
UI Userid
UIG Userid group.
Notes:
1. You should consider defining workloads for terminal-owning regions only.
Work requests do not normally originate in an application-owning region. They
(transactions) are normally routed to an application-owning region from a
terminal-owning region, and the work request is classified in the
terminal-owning region. In this case, the work is not reclassified in the
application-owning region.
If work originates in the application-owning region it is classified in the
application-owning region; normally there would be no terminal.
2. You can use identifier group qualifiers to specify the name of a group of
qualifiers; for example, GRPACICS could specify a group of CICS tranids,
which you could specify on classification rules by TNG GRPACICS. This is a
useful alternative to specifying classification rules for each transaction
separately.
You can use classification groups to group disparate work under the same work
qualifier—if, for example, you want to assign it to the same service class.
Example of using classification rules: Suppose you want all CICS work to go
into service class CICSB except for the following:
v All work from LU name S218, except the PAYR transaction, is to run in service
class CICSA
v Work for the PAYR transaction (payroll application) entered at LU name S218 is
to run in service class CICSC.
v All work from terminals other than LU name S218, and whose LU name begins
with S2, is to run in service class CICSD.
You could specify this by the following classification rules:
Subsystem Type . . . . . . . CICS
-------Qualifier----------- -------Class--------
Type Name Start Service Report
DEFAULTS: CICSB ________
1 LU S218 CICSA ________
2 TN PAYR CICSC ________
1 LU S2* CICSD ________
Note: In this classification, the PAYR transaction is nested as a sub-rule under the
classification rule for LU name S218, indicated by the number 2, and the
indentation of the type and name columns.
v For request 1, the work request for the payroll application runs in service class
CICSC. This is because the request is associated with the terminal with LU name
S218, and the TN—PAYR classification rule specifying service class CICSC is
nested under the LU—S218 classification rule qualifier.
v For request 2, the work request for the payroll application runs in service class
CICSB, because it is not associated with LU name S218, nor S2*, and there are
no other classification rules for the PAYR transaction. Likewise, any work
requests associated with LU names that do not start with S2 run in service class
CICSB, as there are classification rules for LU names S218 and S2* only.
v For request 3, the work request for the DEBT transaction runs in service class
CICSA, because it is associated with LU name S218, and there is no DEBT
classification rule nested under the LU—S218 classification rule qualifiers.
v For request 4, the work request for the ANOT transaction runs in service class
CICSD, because it is associated with an LU name starting S2, but not S218.
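The rule set and the four request outcomes above can be sketched in Python (this is an illustration only; the MVS workload manager's actual matching, including its masking and wildcard notation, is more elaborate than this sketch):

```python
from fnmatch import fnmatch

# Each rule is (level, qualifier_type, pattern, service_class). A
# level-2 rule refines the level-1 rule immediately above it, and the
# first matching level-1 rule wins, as in the classification example.
RULES = [
    (1, "LU", "S218", "CICSA"),
    (2, "TN", "PAYR", "CICSC"),
    (1, "LU", "S2*",  "CICSD"),
]

def classify(lu_name, tran_id, default="CICSB"):
    """Return the service class for a work request, CICSB by default."""
    def value(qualifier):
        return lu_name if qualifier == "LU" else tran_id

    i = 0
    while i < len(RULES):
        level, qualifier, pattern, service_class = RULES[i]
        if level == 1 and fnmatch(value(qualifier), pattern):
            # Level-1 match: refine with any nested level-2 sub-rules.
            result = service_class
            j = i + 1
            while j < len(RULES) and RULES[j][0] == 2:
                _, sub_qual, sub_pattern, sub_class = RULES[j]
                if fnmatch(value(sub_qual), sub_pattern):
                    result = sub_class
                j += 1
            return result
        i += 1
    return default
```

Running the four example requests through this sketch reproduces the outcomes described above: PAYR from S218 goes to CICSC, other S218 work to CICSA, other S2* work to CICSD, and everything else to the default, CICSB.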
Note: It is helpful at this stage to record your service definition in a form that
will help you to enter it into the MVS workload manager ISPF
application. You are recommended to use the worksheets provided in
the MVS Planning: Workload Management manual.
9. Install MVS.
10. Set up a sysplex with a single MVS image, and run in workload manager
compatibility mode.
11. Upgrade your existing XCF couple data set.
12. Start the MVS workload manager ISPF application, and use it in the following
steps.
13. Allocate and format a new couple data set for workload management. (You
can do this from the ISPF application.)
14. Define your service definition.
15. Install your service definition on the couple data set for workload
management.
16. Activate a service policy.
17. Switch the MVS image into goal mode.
18. Start up a new MVS image in the sysplex. (That is, attach the new MVS image
to the couple data set for workload management, and link it to the service
policy.)
19. Switch the new MVS image into goal mode.
20. Repeat steps 18 and 19 for each new MVS image in the sysplex.
Notes:
1. CICS Transaction Server for OS/390 support for MVS workload manager is
initialized automatically during CICS startup.
2. All CICS regions (and other MVS subsystems) running on an MVS image with
MVS workload management are subject to the effects of workload manager.
In general, you should define CICS performance objectives to the MVS workload
manager first, and observe the effect on CICS performance. Once the MVS
workload manager definitions are working correctly, you can then consider tuning
the CICS parameters to further enhance CICS performance. However, you should
use CICS performance parameters as little as possible.
| For more information about CICSPlex SM, see the CICSPlex SM Concepts and
| Planning manual.
RMF provides data for subsystem work managers that support workload
management. In MVS these are IMS and CICS.
This chapter includes a discussion of some possible data that may be reported for
CICS and IMS, and provides some possible explanations for the data. Based on this
discussion and the explanations, you may decide to alter your service class
definitions. In some cases, there may be some actions that you can take, in which
case you can follow the suggestion. In other cases, the explanations are provided
only to help you better understand the data. For more information about using
RMF, see the RMF User’s Guide.
These explanations are given for two main sections of the reports:
v The response time breakdown in percentage section
v The state section, covering switched time.
The WAITING FOR main heading is further broken down into a number of
subsidiary headings. Where applicable, for waits other than those described for the
IDLE condition described above, CICS interprets the cause of the wait, and records
the ‘waiting for’ reason in the WLM performance block.
The waiting-for terms used in the RMF report equate to the WLM_WAIT_TYPE
parameter on the SUSPEND, WAIT_OLDC, WAIT_OLDW, and WAIT_MVS calls
used by the dispatcher, and the SUSPEND and WAIT_MVS calls used in the CICS
XPI. These are shown as follows (with the CICS WLM_WAIT_TYPE term, where
different from RMF, in parenthesis):
Term Description
LOCK Waiting on a lock. For example, waiting for:
v A lock on a CICS resource
v A record lock on a recoverable VSAM file
v Exclusive control of a record in a BDAM file
v An application resource that has been locked by an EXEC CICS ENQ
command.
I/O (IO)
Waiting for an I/O request or I/O related request to complete. For
example:
v File control, transient data, temporary storage, or journal I/O.
v Waiting on I/O buffers or VSAM strings.
CONV
Waiting on a conversation between work manager subsystems. This
information is further analyzed under the SWITCHED TIME heading.
DIST Not used by CICS.
LOCAL (SESS_LOCALMVS)
Waiting on the establishment of a session with another CICS region in the
same MVS image in the sysplex.
SYSPL (SESS_SYSPLEX)
Waiting on establishment of a session with another CICS region in a
different MVS image in the sysplex.
REMOT (SESS_NETWORK)
Waiting on the establishment of an ISC session with another CICS region
(which may, or may not, be in the same MVS image).
TIMER
Waiting for a timer event or an interval control event to complete. For
example, an application has issued an EXEC CICS DELAY or EXEC CICS
WAIT EVENT command which has yet to complete.
PROD (OTHER_PRODUCT)
Waiting on another product to complete its function; for example, when
the work request has been passed to a DB2 or DBCTL subsystem.
For more information on the MVS workload manager states and resource names
used by CICS Transaction Server for OS/390 Release 3, see the CICS Problem
Determination Guide.
The text following the figure explains how to interpret the fields.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSHR RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
An RMF workload activity report contains “snapshot data” which is data collected
over a relatively short interval. The data for a given work request (CICS
transaction) in an MRO environment is generally collected for more than one CICS
region, which means there can be some apparent inconsistencies between the
execution (EXE) phase and the begin to end (BTE) data in the RMF reports. This is
caused by the end of a reporting interval occurring at a point when work has
completed in one region but not yet completed in an associated region. See
Figure 22.
For example, an AOR can finish processing transactions, the completion of which
are included in the current reporting interval, whilst the TOR may not complete its
processing of the same transactions during the same interval.
The fields in this RMF report describe an example CICS hotel reservations service
class (CICSHR), explained as follows:
CICS This field indicates that the subsystem work manager is CICS.
BTE This field indicates that the data in the row relates to the begin-to-end work
phase.
CICS transactions are analyzed over two phases: a begin-to-end (BTE)
phase, and an execution (EXE) phase.
The begin-to-end phase usually takes place in the terminal owning region
(TOR), which is responsible for starting and ending the transaction.
EXE This field indicates that the data in the row relates to the execution work
phase. The execution phase can take place in an application owning region
(AOR) and a resource-owning region such as an FOR. In our example, the
Note: In our example the two phases show the same number of
transactions completed, indicating that during the reporting interval
all the transactions routed by the TORs (ENDED) were completed
by the AORs (EXECUTD) and also completed by the TORs. This will
not normally be the case because of the way data is captured in
RMF reporting intervals. See “RMF reporting intervals” on page 137.
ACTUAL
Shown under TRANSACTION TIME, this field shows the average response
time as 0.114 seconds, for the 216 transactions completed in the BTE phase.
EXECUTION
Shown under TRANSACTION TIME, this field shows that on average it
took 0.078 seconds for the AORs to execute the transactions.
While executing these transactions, CICS records the states the transactions are
experiencing. RMF reports the states in the RESPONSE TIME BREAKDOWN IN
PERCENTAGE section of the report, with one line for the begin-to-end phase, and
another for the execution phase.
| The response time analysis for the BTE phase is described as follows:
| For BTE
| Explanation
| TOTAL
| The CICS BTE total field shows that the TORs have information covering
| 93.4% of the ACTUAL response time, the analysis of which is shown in the
| remainder of the row. This value is the ratio of sampled response times to
| actual response times. The sampled response times are derived by
| calculating the elapsed time as the number of active performance blocks
| (inflight transactions) multiplied by the sample interval time. The actual
| response times are those reported to RMF by CICS when each transaction
| ends. The proximity of the total value to 100% and a relatively small
| standard deviation value are measures of how accurately the sampled data
| represents the actual system behavior. “Possible explanations” on page 141
| shows how these reports can be distorted.
| ACTIVE
| On average, the work (transactions) was active in the TORs for only about
| 10.2% of the ACTUAL response time.
| READY
| In this phase, the TORs did not detect that any part of the average
| response time was accounted for by work that was dispatchable but
| waiting behind other transactions.
| IDLE In this phase, the TORs did not detect that any part of the average
| response time was accounted for by transactions that were waiting for
| work.
| Note: In the analysis of the BTE phase, the values do not exactly add up to the
TOTAL value because of rounding—in our example, 10.2 + 83.3 = 93.5,
against a total shown as 93.4.
The response time analysis for the EXE phase is described as follows:
For EXE
Explanation
TOTAL
The CICS EXE total field shows that the AORs have information covering
67% of the ACTUAL response time.
ACTIVE
On average, the work is active in the AOR for only about 13.2% of the
average response time.
READY
On average the work is ready, but waiting behind other tasks in the region,
for about 7.1% of the average response time.
PROD On average, 46.7% of the average response time is spent outside the CICS
subsystem, waiting for another product to provide some service to these
transactions.
You can’t tell from this RMF report what the other product is, but the
probability is that the transactions are accessing data through a database
manager such as Database Control (DBCTL) or DB2.
Possible explanations
There are several possible explanations for the unusual values shown in this sample
report:
v Long-running transactions
v Never-ending transactions
v Conversational transactions
v Dissimilar work in service class
Long-running transactions
| The RMF report in Figure 23 on page 138 shows both very high response times
| percentages and a large standard deviation of reported transaction times.
| The report shows, for the recorded 15 minute interval, that 1648 transactions
| completed in the TOR. These transactions had an actual average response time of
| 0.111 seconds (note that this has a large standard deviation), giving a total of 182.9
| seconds running time (0.111 seconds multiplied by 1648 transactions). However, if
| a large number of long-running transactions are also running, these are
| counted in the sampled data but not included in the actual response time
| values. If the number of long-running transactions is large, the distortion of the
| TOTAL value will also be very large.
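The distortion described above can be sketched numerically. The following Python fragment uses the figures quoted in the text (1648 ended transactions, 0.111 seconds average actual response time); the sample interval and the number of long-running tasks are assumptions for illustration only, not values from the report.

```python
# Sketch of how long-running transactions can inflate RMF's sampled TOTAL
# percentage. SAMPLE_INTERVAL and long_running_tasks are assumed values.

SAMPLE_INTERVAL = 0.25        # seconds between state samples (assumed)
INTERVAL = 15 * 60            # 15-minute RMF reporting interval, in seconds

ended = 1648                  # transactions completed in the TOR (from the text)
avg_actual_rt = 0.111         # average actual response time (from the text)
actual_total = ended * avg_actual_rt       # about 182.9 s of reported time

# Sampled time is the number of active performance blocks at each sample
# multiplied by the sample interval; short transactions contribute roughly
# their true running time.
sampled_short = actual_total

# Long-running transactions are sampled too, but never end in the interval,
# so they never contribute to the actual response times.
long_running_tasks = 5                     # assumed
sampled_long = long_running_tasks * INTERVAL

total_pct = 100 * (sampled_short + sampled_long) / actual_total
print(f"actual total: {actual_total:.1f} s")
print(f"TOTAL would be reported as about {total_pct:.0f}%")  # well over 100%
```

Even a handful of long-running tasks held active for the whole interval swamps the actual response time total, which is why the TOTAL field can greatly exceed 100%.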
Never-ending transactions
Never-ending transactions differ from long-running transactions in that they persist
for the life of a region. For CICS, these could include the IBM reserved transactions
such as CSNC and CSSY, or customer-defined transactions. Never-ending
transactions are reported in a similar way to long-running transactions, as
explained above. However, for never-ending CICS transactions, RMF might report
large percentages in IDLE, or under TIMER or MISC in the WAITING FOR section.
Possible actions
The following are some actions you could take for reports of this type:
Group similar work into the same service classes: Make sure your service classes
represent groups of similar work. This could require creating additional service
classes. For the sake of simplicity, you may have only a small number of service
classes for CICS work. If there are transactions for which you want the RMF
response time breakdown data, consider including them in their own service class.
Do nothing: For service classes representing dissimilar work such as the subsystem
default service class, recognize that the response time breakdown could include
long-running or never-ending transactions. Accept that RMF data for such service
classes does not make much sense.
Possible explanations
There are two possible explanations:
1. No transactions completed in the interval
2. RMF did not receive data from all systems in the sysplex.
RMF did not receive data from all systems in the sysplex.
The RMF post processor may have been given SMF records from only a subset of
the systems running in the sysplex. For example, the report may represent only a
single MVS image. If that MVS image has no TOR, its AORs receive CICS
transactions routed from another MVS image or from outside the sysplex. Since the
response time for the transactions is reported by the TOR, there is no transaction
response time for the work, nor are there any ended transactions.
Possible actions
The following are some actions you could take for reports of this type:
Do nothing
You may have created this service class especially to prevent the state samples of
long running transactions from distorting data for your production work. In this
case there is no action to take.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
CICS Trans not classified singly
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.091
MPL 0.00 QUEUED 000.00.00.020
ENDED 1731 EXECUTION 000.00.00.113
END/SEC 1.92 STANDARD DEVIATION 000.00.00.092
#SWAPS 0
EXECUTD 1086
Possible explanation
The situation illustrated by this example could be explained by the service class
containing a mixture of routed and non-routed transactions. In this case, the AORs
have recorded states which account for more time than the average response time
of all the transactions. The response time breakdown shown by RMF for the
execution phase of processing can again show percentages exceeding 100% of the
response time.
Possible actions
Define routed and non-routed transactions in different service classes.
REPORT BY: POLICY=HPTSPOL1 WORKLOAD=PRODWKLD SERVICE CLASS=CICSPROD RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=HIGH
-TRANSACTIONS-- TRANSACTION TIME HHH.MM.SS.TTT
AVG 0.00 ACTUAL 000.00.00.150
MPL 0.00 QUEUED 000.00.00.039
ENDED 3599 EXECUTION 000.00.00.134
END/SEC 4.00 STANDARD DEVIATION 000.00.00.446
#SWAPS 0
EXECUTD 2961
Possible actions
None.
Possible explanation
This situation could be caused by converting from ISC to MRO between the TOR
and the AOR.
When two CICS regions are connected by VTAM intersystem communication (ISC)
links, from a WLM viewpoint they behave differently from regions connected by
the multiregion operation (MRO) facility. One key difference is that, with ISC,
both the TOR and the AOR receive a request from VTAM, so each believes it is
starting and ending a given transaction. So for a given user request routed from
the TOR via ISC to an AOR, there would be two completed transactions. Let us
assume they have response times of 1 second and 0.75 seconds respectively,
giving an average of 0.875 seconds. When the TOR routes via MRO, the TOR
reports a single completed transaction taking 1 second (in a begin-to-end
phase), and the AOR reports its 0.75 seconds as execution time. Therefore,
converting from an ISC link to an MRO connection, for the same workload, could
result in half the number of ended transactions and a corresponding increase in
the response time reported by RMF.
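The arithmetic in this example can be worked through directly. This sketch uses only the response times given in the text (1 second in the TOR, 0.75 seconds in the AOR):

```python
# Worked version of the ISC-versus-MRO example from the text: the same user
# request routed TOR -> AOR, counted under each connection type.

tor_time, aor_time = 1.0, 0.75   # seconds, from the example

# ISC: both regions receive a VTAM request, so each reports an ended
# transaction; RMF sees two completions and averages them.
isc_ended = 2
isc_avg_rt = (tor_time + aor_time) / isc_ended   # 0.875 s

# MRO: only the TOR reports a completed (begin-to-end) transaction; the
# AOR's 0.75 s is reported as execution time instead.
mro_ended = 1
mro_avg_rt = tor_time                            # 1.0 s

print(isc_ended, isc_avg_rt)   # 2 0.875
print(mro_ended, mro_avg_rt)   # 1 1.0
```

For the same workload, the ended-transaction count halves and the reported average response time rises from 0.875 seconds to 1 second, even though nothing about the work itself changed.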
Possible action
Increase CICS transaction goals prior to your conversion to an MRO connection.
If you are in one of the first two categories, you can skip this chapter and the next
and go straight to “Chapter 12. CICS performance analysis” on page 169.
If the current performance does not meet your needs, you should consider tuning
the system. The basic rules of tuning are:
1. Identify the major constraints in the system.
2. Understand what changes could reduce the constraints, possibly at the expense
of other resources. (Tuning is usually a trade-off of one resource for another.)
3. Decide which resources could be used more heavily.
4. Adjust the parameters to relieve the constrained resources.
5. Review the performance of the resulting system in the light of:
v Your existing performance objectives
v Progress so far
v Tuning effort so far.
6. Stop if performance is acceptable; otherwise do one of the following:
v Continue tuning
v Add suitable hardware capacity
v Lower your system performance objectives.
(Flowchart: if performance is acceptable, continue monitoring the system as
planned; otherwise, devise a tuning strategy that will minimize usage of
resources or expand the capacity of the system, identify the variables, and
predict the effects.)
A typical measurement and evaluation plan might include the following items as
objectives, with statements of recording frequency and the measurement tool to be
used:
v Volume and response time for each department
v Network activity:
– Total transactions
– Tasks per second
– Total by transaction type
– Hourly transaction volume (total, and by transaction).
v Resource utilization examples:
– DSA utilization
– Processor utilization with CICS
– Paging rate for CICS and for the system
– Channel utilization
– Device utilization
– Data set utilization
– Line utilization.
v Unusual conditions:
– Network problems
– Application problems
– Operator problems
– Transaction count for entry to transaction classes
Performance degradation is often due to application growth that has not been
matched by corresponding increases in hardware resources. If this is the case, solve
the hardware resource problem first. You may still need to follow on with a plan
for multiple regions.
The tasks may simply be trying to do too much work for the system: the users
are putting more work through the system than it can handle in the time
available.
Another possibility is that the system is real-storage constrained, and therefore the
tasks progress more slowly than expected because of paging interrupts. These
would show as delays between successive requests recorded in the CICS trace.
Yet another possibility is that many of the CICS tasks are waiting because there is
contention for a particular function. There is a wait on strings on a particular data
set, for example, or there is an application enqueue such that all the tasks issue an
enqueue for a particular item, and most of them have to wait while one task
actually does the work. Auxiliary trace enables you to distinguish most of these
cases.
Again, CICS statistics may reveal heavy use of some resource. For example, you
may find a very large allocation of temporary storage in main storage, a very high
number of storage control requests per task (perhaps 50 or 100), or high program
use counts that may imply heavy use of program control LINK.
Both statistics and CICS monitoring may show exceptional conditions arising in the
CICS run. Statistics can show waits on strings, waits for VSAM shared resources,
waits for storage in GETMAIN requests, and so on. These also generate CICS
monitoring facility exception class records.
While these conditions are also evident in CICS auxiliary trace, they may not
appear so obviously, and the other information sources are useful in directing the
investigation of the trace data.
In addition, you may gain useful data from the investigation of CICS outages. If
there is a series of outages, common links between the outages should be
investigated.
The next chapter tells you how to identify the various forms of CICS constraints,
and Chapter 12 gives you more information on performance analysis techniques.
The fundamental thing that has to be understood is that practically every symptom
of poor performance arises in a system that is congested. For example, if there is a
slowdown in DASD, transactions doing data set activity pile up: there are waits on
strings; there are more transactions in the system, there is therefore a greater
virtual storage demand; there is a greater real storage demand; there is paging;
and, because there are more transactions in the system, the task dispatcher uses
more processor power scanning the task chains. You then get task constraints, your
MXT or transaction class limit is exceeded and adds to the processor overhead
because of retries, and so on.
The result is that the system shows heavy use of all its resources, and this is the
typical system stress. It does not mean that there is a problem with all of them; it
means that there is a constraint that has yet to be found. To find the constraint,
you have to find what is really affecting task life.
When checking whether the performance of a CICS system is in line with the
system’s expected or required capability, you should base this investigation on the
hardware, software, and applications that are present in the installation.
If, for example, an application requires 100 accesses to a database, a response time
of three to six seconds may be considered to be quite good. If an application
requires only one access, however, a response time of three to six seconds for disk
accesses would need to be investigated. Response times, however, depend on the
speed of the processor, and on the nature of the application being run on the
production system.
You should also observe how consistent the response times are. Sharp variations
indicate erratic system behavior.
Typically, response time increases gradually with increasing transaction rate at
first, and then deteriorates rapidly and suddenly. The typical curve shows a
sharp change at the point where the response time increases dramatically for a
relatively small increase in the transaction rate.
Figure 29. Graph to show the effect of response time against increasing load
(vertical axis: response time; horizontal axis: increasing load or decreasing
resource availability; point C marks unacceptable (poor) response time)
For stable performance, it is necessary to keep the system operating below this
point where the response time dramatically increases.
Response time can be considered as being made up of queue time and service
time. Service time is generally independent of usage, but queue time is not. For
example, 50% usage implies a queue time approximately equal to service time, and
80% usage implies a queue time approximately four times the service time. If
service time for a particular system is only a small component of the system
response, for example, in the processor, 80% usage may be acceptable. If it is a
greater portion of the system response time, for example, in a communication line,
50% usage may be considered high.
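The rule of thumb in this paragraph follows from the standard single-server queuing approximation, where queue time is utilization divided by (1 minus utilization), multiplied by service time. A minimal sketch, using that approximation:

```python
# Queue-time rule of thumb from the text, modeled with the single-server
# (M/M/1-style) approximation: queue = service * u / (1 - u).

def queue_time(service_time, utilization):
    """Estimated queue time for a given service time and fractional usage."""
    return service_time * utilization / (1.0 - utilization)

s = 1.0  # service time, arbitrary units
print(round(queue_time(s, 0.50), 6))  # 1.0 -> queue time ~ equal to service time
print(round(queue_time(s, 0.80), 6))  # 4.0 -> queue time ~ 4x service time
```

This is why the same utilization figure can be acceptable for one resource and high for another: what matters is how large the resulting queue time is relative to the rest of the system response time.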
If you are trying to find the response time from a terminal to a terminal, you
should be aware that the most common “response time” obtainable from any aid
or tool that runs in the host is the “internal response time.” Trace can identify only
when the software in the host, that is, CICS and its attendant software, first “sees”
the message on the inbound side, and when it last “sees” the message on the
outbound side.
Internal response time gives no indication of how long a message took to get from
the terminal, through its control unit, across a line of whatever speed, through the
communication controller (whatever it is), through the communication access
method (whatever it is), and any delays before the channel program that initiated
the read is finally posted to CICS. Nor does it account for the time it might take
for CICS to start processing this input message. There may have been lots of work
for CICS to do before terminal control regained control and before terminal control
even found this posted event.
The same is true on the outbound side. CICS auxiliary trace knows when the
application issued its request, but that has little to do with when terminal control
found the request, when the access method ships it out, when the controllers can
get to the device, and so on.
While the outward symptom of poor performance is overall bad response, there
are progressive sets of early warning conditions which, if correctly interpreted, can
ease the problem of locating the constraint and removing it.
In the advice given so far, we have assumed that CICS is the only major program
running in your system. If batch programs or other online programs are running
simultaneously with CICS, you must ensure that CICS receives its fair share of the
system resources and that interference from other regions does not seriously
degrade CICS performance.
Storage stress
Stress is the term used in CICS for a shortage of free space in one of the dynamic
storage areas.
Storage stress can be a symptom of other resource constraints that cause CICS
tasks to occupy storage for longer than is normally necessary, or of a flood of tasks
which simply overwhelms available free storage, or of badly designed applications
that require unreasonably large amounts of storage.
User runtime control of storage usage is achieved through appropriate use of MXT
and transaction class limits. This is necessary to avoid the short-on-storage
condition that can result from unconstrained demand for storage.
Short-on-storage condition
CICS reserves a minimum number of free storage pages for use only when there is
not enough free storage to satisfy an unconditional GETMAIN request even when
all, not-in-use, nonresident programs have been deleted.
Whenever a request for storage results in the number of contiguous free pages in
one of the dynamic storage areas falling below its respective cushion size, or
failing to be satisfied even with the storage cushion, a cushion stress condition
exists. Details are given in the storage manager statistics (“Times request
suspended”, “Times cushion released”). CICS attempts to alleviate the storage
stress situation by releasing programs with no current user and slowing the
attachment of new tasks. If these actions fail to alleviate the situation or if the
stress condition is caused by a task that is suspended for SOS, a short-on-storage
condition is signaled. This is accompanied by message DFHSM0131 or
DFHSM0133.
If you have application programs that use temporary data sets, with a different
name for every data set created, it is important that your programs remove these
after use. See the CICS System Programming Reference for information about how
you can use the SET DSNAME command to remove unwanted temporary data sets
| from your CICS regions.
Purging of tasks
If a CICS task is suspended for longer than its DTIMOUT value, it may be purged
if SPURGE=YES is specified on the RDO transaction definition. That is, the task is
abended and its resources freed, thus allowing other tasks to use those resources.
In this way, CICS attempts to resolve what is effectively a deadlock on storage.
CICS hang
If purging tasks is not possible or not sufficient to solve the problem, CICS ceases
processing. You must then either cancel and restart the CICS system, or initiate or
allow an XRF takeover.
A page-in operation causes the MVS task which requires it to stop until the page
has been retrieved. If the page is to be retrieved from DASD, this has a significant
effect. When the page can be retrieved from expanded storage, the impact is only a
relatively small increase in processor usage.
The loading of a program into CICS storage can be a major cause of page-ins.
Because this is carried out under a subtask separate from CICS main activity, such
page-ins do not halt most other CICS activities.
What is paging?
The virtual storage of a processor may far exceed the size of the central storage
available in the configuration. Any excess must be maintained in auxiliary storage
(DASD), or in expanded storage. This virtual storage occurs in blocks of addresses
called “pages”. Only the most recently referenced pages of virtual storage are
assigned to occupy blocks of physical central storage. When reference is made to a
page of virtual storage that does not appear in central storage, the page is brought
in from DASD or expanded storage to replace a page in central storage that is not
in use and least recently used.
The newly referenced page is said to have been “paged in”. The displaced page
may need to be “paged out” if it has been changed.
A page-in from expanded storage incurs only a small processor usage cost, but a
page-in from DASD incurs a time cost for the physical I/O and a more significant
increase in processor usage.
Thus, extra DASD page-in activity slows down the rate at which transactions flow
through the CICS system, that is, transactions take longer to get through CICS, you
get more overlap of transactions in CICS, and so you need more virtual and real
storage.
If you suspect that a performance problem is related to excessive paging, you can
use RMF to obtain the paging rates.
Consider controlling CICS throughput by using MXT and transaction class limits in
CICS on the basis that a smaller number of concurrent transactions requires less
real storage, causes less paging, and may be processed faster than a larger number
of transactions.
What is an ideal CICS paging rate from DASD? Less than one page-in per second
is best to maximize the throughput capacity of the CICS region. Anything less than
five page-ins per second is probably acceptable; up to ten may be tolerable. Ten
per second is marginal, more is probably a major problem. Because CICS
performance can be affected by the waits associated with paging, you should not
allow paging to exceed more than five to ten pages per second.
Note: The degree of sensitivity of CICS systems to paging from DASD depends on
the transaction rate, the processor loading, and the average internal lifetime
of the CICS tasks. An ongoing, hour-on-hour rate of even five page-faults
per second may be excessive for some systems, particularly when you
realize that peak paging rates over periods of ten seconds or so could easily
be four times that figure.
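The page-in guidance above can be summarized as a simple banding. In this sketch the function name and the band labels are ours, chosen only to encode the thresholds stated in the text:

```python
# Illustrative banding of the DASD page-in guidance from the text; the
# function and labels are not CICS terminology.

def classify_page_in_rate(pages_per_second):
    """Classify a sustained DASD page-in rate for a CICS region."""
    if pages_per_second < 1:
        return "best"
    if pages_per_second < 5:
        return "probably acceptable"
    if pages_per_second <= 10:
        return "tolerable/marginal"
    return "probably a major problem"

print(classify_page_in_rate(0.5))   # best
print(classify_page_in_rate(12))    # probably a major problem
```

As the Note explains, these bands are not absolute: a sustained rate of even five page-faults per second can be excessive for some systems, and short peaks may run several times higher than the reported average.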
What paging rates are excessive on various processors and are these rates
operating-system dependent? Excessive paging rates should be defined as those
which cause excessive delays to applications. The contribution caused by the
high-priority paging supervisor executing instructions and causing applications to
wait for the processor is probably a minor consideration as far as overall delays to
applications are concerned. Waiting on a DASD device is the dominant part of the
overall delays. This means that the penalty of “high” paging rates has almost
nothing to do with the processor type.
Storage violations can be reduced considerably if CICS has storage protection
and transaction isolation enabled.
See the CICS Problem Determination Guide for further information about diagnosing
and dealing with storage violations.
Hardware constraints
1. Processor cycles. It is not uncommon for transactions to execute more than one
million instructions. To execute these instructions, they must contend with
other tasks and jobs in the system. At different times, these tasks must wait for
such activities as file I/O. Transactions give up their use of the processor at
these points and must contend for use of the processor again when the activity
has completed. Dispatching priorities affect which transactions or jobs get use
of the processor, and batch or other online systems may affect response time
through receiving preferential access to the processor. Batch programs accessing
online databases also tie up those databases for longer periods of time if their
dispatching priority is low. At higher usages, the wait time for access to the
processor can be significant.
2. Real storage (working set). Just as transactions must contend for the processor,
they also must be given a certain amount of real storage. A real storage
shortage can be particularly significant in CICS performance because a normal
page fault to acquire real storage results in synchronous I/O. The basic design
of CICS is asynchronous, which means that CICS processes requests from
multiple tasks concurrently to make maximum use of the processor. Most
paging I/O is synchronous and causes the MVS task that CICS is using to wait,
and that part of CICS cannot do any further processing until the page has been
retrieved.
Software constraints
1. Database design. A data set or database needs to be designed to the needs of the
application it is supporting. Such factors as the pattern of access to the data set
(especially whether it is random or sequential), access methods chosen, and the
frequency of access determine the best database design. Such data set
characteristics as physical record size, blocking factors, the use of alternate or
secondary indexes, the hierarchical or relational structure of database segments,
database organization (HDAM, HIDAM, and so on), and pointer arrangements
are all factors in database performance.
The length of time between data set reorganizations can also affect
performance. The efficiency of accesses decreases as the data set becomes more
and more fragmented. This fragmentation can be kept to the minimum by
reducing the length of time between data set reorganizations.
2. Network design. This item can often be a major factor in response time because
the network links are much slower than most components of an online system.
Processor operations are measured in nanoseconds, line speeds in seconds.
Screen design can also have a significant effect on overall response time. A
1200-byte message takes one second to be transmitted on a relatively
high-speed 9600 bits-per-second link. If 600 bytes of the message are not
needed, half a second of response time is wasted. Besides screen design and
size, such factors as how many terminals are on a line, the protocols used
(SNA, bisynchronous), and full- or half-duplex capabilities can affect
performance.
3. Use of specific software interfaces or serial functions. The operating system, terminal
access method, database manager, data set access method, and CICS must all
communicate in the processing of a transaction. Only a given level of
concurrent processing can occur at these points, and this can also cause a
performance constraint. Examples of this include the VTAM receive any pool
(RAPOOL), VSAM data set access (strings), CICS temporary storage, CICS
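The screen-design arithmetic in the network design item above is easy to verify. This sketch reproduces the 1200-byte message example on a 9600 bits-per-second link, assuming 8 bits per byte and ignoring protocol overhead:

```python
# Line transmission-time arithmetic from the text (8 bits per byte,
# protocol overhead ignored).

def transmit_seconds(message_bytes, line_bps):
    """Seconds to transmit a message of the given size on the given line."""
    return message_bytes * 8 / line_bps

print(transmit_seconds(1200, 9600))  # 1.0 second for the full message
print(transmit_seconds(600, 9600))   # 0.5 second saved if 600 bytes are unneeded
```

Trimming unneeded bytes from screens is therefore one of the cheapest ways to recover response time on slow links.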
One useful technique for isolating a performance constraint in a CICS system with
VTAM is to use the IBMTEST command issued from a user’s terminal. This
terminal must not be in session with CICS, but must be connected to VTAM.
The command takes two operands: n, the number of times you want the data
echoed, and data, which may consist of any character string. If you enter no
data, the alphabet and the numbers zero through nine are returned to the
terminal. This command is responded to by VTAM.
IBMTEST is an echo test designed to give the user a rough idea of the VTAM
component of terminal response time. If the response time is fast in a
slow-response system, the constraint is not likely to be any component from VTAM
onward. If this response is slow, VTAM or the network may be the reason. This
sort of deductive process in general can be useful in isolating constraints.
To avoid going into session with CICS, you may have to remove APPLID= from
the LU statement or CONNECT=AUTO from the TERMINAL definition.
Resource contention
The major resources used or managed by CICS consist of the following:
v Processor
v Real storage
v Virtual storage
v Software (specification limits)
v Channels
v Control units
v Lines
v Devices
v Sessions to connected CICS systems.
Two sets of symptoms and solutions are provided in this chapter. The first set
provides suggested solutions for poor response, and the second set provides
suggested solutions for a variety of resource contention problems.
Solutions
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.
See “DASD tuning” on page 199 for suggested solutions.
Solutions
v Reduce the line utilization.
v Reduce delays in data transmission.
v Alter the network.
Solutions
v Control the amount of queuing which takes place for the use of the connections
to the remote systems.
v Improve the response time of the remote system.
See the “Virtual storage above and below 16MB line checklist” on page 182 for a
detailed list of suggested solutions.
Solutions
v Reduce the demands on real storage
v Tune the MVS system to obtain more real storage for CICS
v Obtain more central and expanded storage.
See the “Real storage checklist” on page 183 for a detailed list of suggested
solutions.
Solutions
v Increase the dispatching priority of CICS.
v Reevaluate the relative priorities of operating system jobs.
v Reduce the number of MVS regions (batch).
v Reduce the processor utilization for productive work.
v Use only the CICS facilities that you really require.
v Turn off any trace that is not being used.
v Minimize the data being traced by reducing the:
– Scope of the trace
– Frequency of running trace.
v Obtain a faster processor.
See the “Processor cycles checklist” on page 184 for a detailed list of suggested
solutions.
Application conditions
These conditions, measured both for individual transaction types and for the total
system, give you an estimate of the behavior of individual application programs.
You should gather data for each main transaction and average values for the total
system. This data includes:
v Program calls per transaction
v CICS storage GETMAINs and FREEMAINs (number and amount)
v Application program and transaction usage
v File control (data set, type of request)
v Terminal control (terminal, number of inputs and outputs)
v Transaction routing (source, target)
v Function shipping (source, target)
v Other CICS requests.
Rapid performance degradation often occurs after a threshold is exceeded and the
system approaches its ultimate load. You can see various indications only when
the system is under this full load.
Bear in mind that the performance constraints might possibly vary at different
times of the day. You might want to run a particular option that puts a particular
pressure on the system only at a certain time in the afternoon.
Before carrying out this analysis, you must have a clear picture of the functions
and the interactions of the following components:
v Operating system supervisor with the appropriate access methods
v CICS management modules and control tables
v VSAM data sets
v DL/I databases
v DB2
v External security managers
v Performance monitors
v CICS application programs
v Influence of other regions
v Hardware peripherals (disks and tapes).
Full-load measurement
A full-load measurement highlights latent problems in the system. It is important
that full-load measurement lives up to its name, that is, you should make the
measurement when, from production experience, the peak load is reached. Many
installations have a peak load for about one hour in the morning and again in the
afternoon. CICS statistics and various performance tools can provide valuable
information for full-load measurement. In addition to the overall results of these
tools, it may be useful to have the CICS auxiliary trace or RMF active for about
one minute.
Trace is a very heavy overhead. Use trace selectivity options to minimize this
overhead.
RMF
It is advisable to do the RMF measurement without any batch activity. (See
“Resource measurement facility (RMF)” on page 27 for a detailed description of
this tool. Guidance on how to use RMF with the CICS monitoring facility is given
in “Using CICS monitoring SYSEVENT information with RMF” on page 67.)
For full-load measurement, the system activity report and the DASD activity report
are important.
You should expect stagnant throughput and sharply climbing response times as the
processor load approaches 100%.
It is difficult to forecast the system paging rate that can be achieved without
serious detriment to performance, because too many factors interact. You should
observe the reported paging rates; note that short-duration severe paging leads to a
rapid increase in response times.
In addition to taking note of the count of start I/O operations and their average
length, you should also find out whether the system is waiting on one device only.
With disks, for example, it can happen that several frequently accessed data sets
are on one disk and the accesses interfere with each other. In each case, you should
investigate whether a system wait on a particular unit could be minimized by
reorganizing the data sets.
Use the IOQ(DASD) option in RMF Monitor I to show DASD control unit contention.
After checking the relationship of accesses with and without arm movement, for
example, you may want to move to separate disks those data sets that are
periodically very frequently accessed.
The comparison chart records, for each run:
v Number and response time of the average-use transaction
v System paging rate and CPU utilization
v CICS DSA virtual storage (maximum, average, and peak)
v Peak number of tasks, and the number of times at MXT
The use of this type of comparison chart requires the use of TPNS, RMF, and CICS
interval statistics running together for about 20 minutes, at a peak time for your
system. It also requires you to identify the following:
v A representative selection of terminal-oriented DL/I transactions accessing DL/I
databases
v A representative selection of terminal-oriented transactions processing VSAM
files
v The most heavily used transaction
v Two average-use nonterminal-oriented transactions writing data to intrapartition
transient data destinations
v The most heavily used volume in your system
v A representative average-use volume in your system.
To complete the comparison chart for each CICS run before and after a tuning
change, you can obtain the figures from the following sources:
Single-transaction measurement
You can use full-load measurement to evaluate the average loading of the system
per transaction. However, this type of measurement cannot provide you with
information on the behavior of a single transaction and its possible excessive
loading of the system. If, for example, nine different transaction types issue five
start I/Os (SIOs) each, but the tenth issues 55 SIOs, this results in an average of
ten SIOs per transaction type. This should not cause concern if they are executed
simultaneously. However, an increase of the transaction rate of the tenth
transaction type could possibly lead to poor performance overall.
Sometimes, response times are quite good with existing terminals, but adding a
few more terminals leads to unacceptable degradation of performance. In this case,
the performance problem may be present with the existing terminals, and has
simply been highlighted by the additional load.
You should measure each existing transaction that is used in a production system
or in a final test system. Test each transaction two or three times with different
data values, to exclude an especially unfavorable combination of data. Document
the sequence of transactions and the values entered for each test as a prerequisite
for subsequent analysis or interpretation.
Between the tests of each single transaction, there should be a pause of several
seconds, to make the trace easier to read. A copy of the production database or
data set should be used for the test, because a test data set containing 100 records
can very often result in completely different behavior when compared with a
production data set containing 100 000 records.
The condition of data sets has often been the main reason for performance
degradation, especially when many segments or records have been added to a
database or data set. Do not do the measurements directly after a reorganization,
because the database or data set is only in this condition for a short time. On the
other hand, if the measurement reveals an unusually large number of disk
accesses, you should reorganize the data and do a further measurement to evaluate
the effect of the data reorganization.
You may feel that single-transaction measurement under these conditions with only
one terminal is not an efficient tool for revealing a performance degradation that
might occur when, perhaps 40 or 50 terminals are in use. Practical experience has
shown, however, that this is usually the only means for revealing and rectifying,
with justifiable expense, performance degradation under full load. The main reason
for this is that it is sometimes a single transaction that throws the system behavior
out of balance. Single-transaction measurement can be used to detect this.
Ideally, single-transaction measurement should be carried out during the final test
phase of the transactions. This gives the following advantages:
v Any errors in the behavior of transactions may be revealed before production
starts, and these can be put right during validation, without loading the
production system unnecessarily.
v The application is documented during the measurement phase. This helps to
identify the effects of later changes.
From this trace, you can find out whether a specified application is running as it is
expected to run. In many cases, it may be necessary for the application
programmer responsible to be called in for the analysis, to explain what the
transaction should actually be doing.
If you have a very large number of transactions to analyze, you can select, in a
first pass, the transactions whose behavior does not comply with what is expected.
If, on the other hand, only a few transactions remain in this category, these
transactions should be analyzed next, because it is highly probable that most
performance problems to date arise from these.
A system is always constrained. You do not simply remove a constraint; you can
only choose the most satisfactory constraint. Consider which resources can accept
an additional load in the system without themselves becoming worse constraints.
Tuning usually involves a variety of actions that can be taken, each with its own
trade-off. For example, if you have determined virtual storage to be a constraint,
your tuning options may include reducing buffer allocations for data sets, or
reducing terminal scan delay (ICVTSD) to shorten the task life in the processor.
The first option increases data set I/O activity, and the second option increases
processor usage. If one or more of these resources are also constrained, tuning
could actually cause a performance degradation by causing the other resource to
be a greater constraint than the present constraint on virtual storage.
Important
Always tune DASD, the network, and the overall MVS system before tuning
any individual CICS subsystem through CICS parameters.
“Chapter 14. Performance checklists” on page 181 itemizes the actions you can take
to tune the performance of an operational CICS system.
The other chapters in this part contain the relevant performance tuning guidelines
for the following aspects of CICS:
v “Chapter 15. MVS and DASD” on page 187
v “Chapter 16. Networking and VTAM” on page 201
v “Chapter 18. VSAM and file control” on page 225
v “Chapter 21. Database management” on page 263
v “Chapter 22. Logging and journaling” on page 271
v “Chapter 23. Virtual and real storage” on page 283
v “Chapter 24. MRO and ISC” on page 305
v “Chapter 25. Programming considerations” on page 315
v “Chapter 26. CICS facilities” on page 321
v “Chapter 27. Improving CICS startup and normal shutdown time” on page 339.
There are four checklists, corresponding to four of the main contention areas
described in “Chapter 11. Identifying CICS constraints” on page 155.
1. I/O contention (this applies to data set and database subsystems, as well as to
the data communications network)
2. Virtual storage above and below the 16MB line
3. Real storage
4. Processor cycles.
The checklists are in the sequence of low-level to high-level resources, and the
items are ordered from those that probably have the greatest effect on performance
to those that have a lesser effect, from the highest likelihood of being a factor in a
normal system to the lowest, and from the easiest to the most difficult to
implement.
Before taking action on a particular item, you should review the item to:
v Determine whether the item is applicable in your particular environment
v Understand the nature of the change
v Identify the trade-offs involved in the change.
Note:
Ideally, I/O contention should be reduced by using very large data buffers
and keeping programs in storage. This would require adequate central and
expanded storage, and programs that can be loaded above the 16MB line.
Item Page
VSAM considerations
Review use of LLA 197
Implement Hiperspace buffers 240
Review/increase data set buffer allocations within LSR 235
Use data tables when appropriate 244
Database considerations
Replace DL/I function shipping with IMS/ESA DBCTL facility 263
Reduce/replace shared database access to online data sets 263
Review DB2 threads and buffers 266
Journaling
Miscellaneous
Reduce DFHRPL library contention 299
Review temporary storage strings 321
Review transient data strings 326
Note:
The lower the number of concurrent transactions in the system, the lower the
usage of virtual storage. Therefore, improving transaction internal response
time decreases virtual storage usage. Keeping programs in storage above the
16MB line and minimizing physical I/Os make the largest contribution to
improving the internal response time of well-designed transactions.
Item Page
CICS region
Increase CICS region size 192
Reorganize program layout within region 299
Split the CICS region 284
DSA sizes
Specify optimal sizes for the dynamic storage area upper limits (DSALIM, EDSALIM) 625
Adjust maximum tasks (MXT) 287
Control certain tasks by transaction class 288
Put application programs above 16MB line 300
Database considerations
Increase use of DBCTL and reduce use of shared database facility 263
Replace DL/I function shipping with IMS DBCTL facility 263
Review use of DB2 threads and buffers 266
Applications
Compile COBOL programs RES, NODYNAM 316
Use PL/I shared library facility 317
Implement VS COBOL II 317
Journaling
MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MVS cross-memory services with shared database programs 305
Miscellaneous
Reduce use of aligned maps 298
Prioritize transactions 291
Use only required CICS recovery facilities 334
Recycle job initiators with each CICS startup 193
Note:
Adequate central and expanded storage is vital to achieving good
performance with CICS.
Item Page
MVS considerations
Dedicate, or fence, real storage to CICS 190
Make CICS nonswappable 190
Move CICS code to the LPA/ELPA 297
VSAM considerations
Review the use of Hiperspace buffers 240
Use VSAM LSR where possible 240
Review the number of VSAM buffers 235
Review the number of VSAM strings 237
MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MVS cross-memory services with shared database programs
Use CICS intercommunication facilities 305
Database considerations
Journaling
Applications
Use PL/I shared library facilities 317
Compile COBOL programs RES, NODYNAM 316
Miscellaneous
Decrease region exit interval 194
Reduce trace table size 332
Use only required CICS recovery facilities 334
Note:
Minimizing physical I/Os by employing large data buffers and keeping
programs in storage reduces processor use, if adequate central and expanded
storage is available.
Item Page
General
Reduce or turn off CICS trace 332
Increase CICS dispatching level or performance group 192
MRO/ISC considerations
Implement MVS cross-memory services with MRO 305
Implement MRO fastpath facilities 305
Implement MVS cross-memory services with shared database programs 263
Use CICS intercommunication facilities 305
Database considerations
Journaling
Increase activity keypoint frequency (AKPFREQ) value 279
Miscellaneous
Use only required CICS monitoring facilities 331
Review use of required CICS recovery facilities 334
Review use of required CICS security facilities 334
Increase region exit interval 194
Review use of program storage 299
Use NPDELAY for unsolicited input errors on TCAM lines 214
Prioritize transactions 291
Because tuning is a top-down activity, you should already have made a vigorous
effort to tune MVS before tuning CICS. Your main effort to relieve virtual
storage constraint should be concentrated on reducing the life of the various
individual transactions: in other words, shortening task life.
This section describes some of the techniques that can contribute significantly to
shorter task life, and therefore, a reduction in virtual storage constraint.
If page-ins occur frequently (more than 5 to 10 page-ins per second affects
CICS performance), additional real storage can reduce waits for the paging
subsystem.
MVS provides storage isolation for an MVS performance group, which allows you
to reserve a specific range of real storage for the CICS address space and to control
the page-rates for that address space based on the task control block (TCB) time
absorbed by the CICS address space during execution.
So far (except when describing storage isolation and DASD sharing), we have
concentrated on CICS systems that run a stand-alone single CICS address space.
The sizes of all MVS address spaces are defined by the common requirements of
the largest subsystem. If you want to combine the workload from two or more
processors onto an MVS image, you must be aware of the virtual storage
requirements of each of the subsystems that are to execute on the single-image
ESA processor. Review the virtual storage effects of combining the following kinds
of workload on a single-image MVS system:
1. CICS and a large number (100 or more) of TSO users
2. CICS and a large IMS system
3. CICS and 5000 to 7500 VTAM LUs.
By its nature, CICS requires a large private region that may not be available
once the common storage requirements of these other large subsystems are satisfied. If,
after tuning the operating system, VTAM, VSAM, and CICS, you find that your
address space requirements still exceed that available, you can split CICS using
one of three options:
1. Multiregion option (MRO)
2. Intersystem communication (ISC)
3. Multiple independent address spaces.
Adding large new applications or making major increases in the size of your
VTAM network places large demands on virtual storage, and you must analyze
them before implementing them in a production system. Careful analysis and
system specification can avoid performance problems arising from the addition of
new applications in a virtual-storage-constrained environment.
If you have not made the necessary preparations, you usually become aware of
problems associated with severe stress only after you have attempted to implement
the large application or major change in your production system. Some of these
symptoms are:
v Poor response times
v Short-on-storage
v Program compression
v Heavy paging activity
v Many well-tested applications suddenly abending with new symptoms
v S80A and S40D abends
v S822 abends
v Dramatic increase in I/O activity on DFHRPL program libraries.
Various chapters in the rest of this book deal with specific, individual operands
and techniques to overcome these problems. They tell you how to minimize the
use of virtual storage in the CICS address space, and how to split it into multiple
address spaces if your situation requires it.
For an overall description of ESA virtual storage, see “Appendix F. MVS and CICS
virtual storage” on page 615.
The availability of the overall system may be improved by splitting the system
because the effects of a failure can be limited or the time to recover from the
failure can be reduced.
Recommendations
If availability of your system is an important requirement, both splitting systems
and the use of XRF should be considered. The use of XRF can complement the
splitting of systems by automating the recovery of the components.
When splitting your system, you should try to separate the sources of failure so
that as much of the rest of the system as possible is protected against their failure,
and remains available for use. Critical components should be backed up, or
configured so that service can be restored with minimum delay. Since the
advantages of splitting regions for availability can be compromised if the queueing
of requests for remote regions is not controlled, you should also review
“Intersystems session queue management” on page 307.
Making CICS nonswappable prevents the address space from being swapped out
in MVS, and reduces the paging overhead. Consider leaving only very lightly used
test systems swappable.
How implemented
You should consider making your CICS region nonswappable by using the
PPTNSWP option in the MVS Program Properties Table (PPT).
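On OS/390, PPT entries are defined in the SCHEDxx member of SYS1.PARMLIB. The following is a sketch only (DFHSIP is the standard CICS initialization program; verify the statement syntax for your MVS level before adding such an entry):

```
/* SYS1.PARMLIB(SCHEDxx) -- sketch only                      */
PPT PGMNAME(DFHSIP)    /* CICS initialization program        */
    NOSWAP             /* keep the address space swapped in  */
```

An entry like this affects every region that runs DFHSIP, including test regions.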
Limitations
Using the PPT will make all CICS systems (including test systems) nonswappable.
As an alternative, use the IPS. For more information about defining entries in the
PPT see the OS/390 MVS Programming: Callable Services for HLL manual.
How monitored
The DISPLAY ACTIVE (DA) command on SDSF gives you an indication of the
number of real pages used and the paging rate. Use RMF (the RMFMON command
on TSO) to provide additional information. For more information about RMF, see
“Resource measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.
The target working set size of an XRF alternate CICS system can vary significantly
in different environments.
For the XRF alternate system that has a low activity while in the surveillance
phase, PPGRTR is a better choice because the target working set size is adjusted on
the basis of page-faults per second, rather than page-faults per execution second.
During catchup and while tracking, the real storage needs of the XRF alternate
CICS system are increased as it changes terminal session states and the contents of
the TCT. At takeover, the real storage needs also increase as the alternate CICS
system begins to switch terminal sessions and implement emergency restart. In
order to ensure good performance and minimize takeover time, the target working
set size should be increased. This can be done in several different ways, two of
which are:
1. Parameter “b” in PWSS=(a,b) can be set to “*” which allows the working set
size to increase without limit, if the maximum paging rate (parameter “d” in
PPGRTR=(c,d)) is exceeded.
2. A command can be put in the CLT to change the alternate CICS system’s
performance group at takeover to one which has different real storage isolation
parameters specified.
If you set PWSS=(*,*) and PPGRTR=(1,2), CICS can use as much storage as it
needs when the paging rate is greater than 2 per second. The values depend very much
on the installation and the MVS setup. The values suggested here assume that
CICS is an important address space and therefore needs service to be resumed
quickly.
For the definition and format of the storage isolation parameters in IEAIPSxx, see
the OS/390 MVS Initialization and Tuning Reference manual.
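As a sketch, the storage isolation keywords for the CICS performance group in IEAIPSxx might take the following form (the performance group and domain numbers are illustrative assumptions; check the exact keyword syntax in the manual cited above):

```
/* SYS1.PARMLIB(IEAIPSxx) -- sketch only                          */
PGN=2,(DMN=2,              /* assumed performance group for CICS  */
       PWSS=(*,*),         /* no fixed working set limits         */
       PPGRTR=(1,2))       /* adjust to keep 1-2 page-faults/sec  */
```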
How implemented
See the OS/390 MVS Initialization and Tuning Reference manual.
How monitored
Use RMF (the RMFMON command on TSO) for additional information. The
DISPLAY ACTIVE (DA) command on SDSF gives you an indication of the
number of real pages used and the paging rate.
Changes to MVS and other subsystems over time generally reduce the amount of
storage required below the 16MB line. Thus, you may be able to increase the CICS
region size when a new release of MVS or of a non-CICS subsystem is installed.
To get any further increase, operating-system functions and storage areas (such as
the local shared queue area, LSQA), or other programs must be reduced. The
LSQA is used by VTAM and other programs, and any increase in the CICS region
size decreases the area available for the LSQA, SWA, and subpools 229 and 230. A
shortage in these subpools can cause S80A, S40D, and S822 abends.
If you specify a larger region, the value of the relevant dsasize system initialization
parameter must be increased or the extra space is not used.
How implemented
The region size is defined in the startup job stream for CICS. Other definitions are
made to the operating system or through operating-system console commands.
To determine the maximum region size, determine the size of your private area
from RMF Monitor II or one of the available storage monitors.
To determine the maximum region size you should allocate, use the following
formula:
Max region possible = private area size – system region size – (LSQA + SWA +
subpools 229 and 230)
The remaining storage is available for the CICS region; for safety, use 80% or 90%
of this number. If the system is static or does not change much, use 90% of this
number for the REGION= parameter; if the system is dynamic, or changes
frequently, 80% would be more desirable.
Note: You must maintain a minimum of 200KB of free storage between the top of
the region and the bottom of the ESA high private area (the LSQA, the SWA,
and subpools 229 and 230).
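As a worked example, using hypothetical figures taken from RMF:

```
private area size                  = 9216KB
system region size                 =  256KB
LSQA + SWA + subpools 229 and 230  = 1700KB

Max region possible = 9216KB - 256KB - 1700KB = 7260KB

Static system:  90% of 7260KB = 6534KB, so specify REGION=6500K
Dynamic system: 80% of 7260KB = 5808KB, so specify REGION=5800K
```

Either value leaves well over the minimum 200KB of free storage below the high private area.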
How monitored
Use RMF (the RMFMON command on TSO) for additional information. For more
information about RMF, see “Resource measurement facility (RMF)” on page 27 or
the MVS RMF User’s Guide.
How implemented
Set the CICS priority above the automatic priority group (APG). See the OS/390
MVS Initialization and Tuning Reference manual for further information.
There are various ways to assign CICS a dispatching priority. The best is through
the ICS (PARMLIB member IEAICSxx). The ICS assigns performance group
numbers and enforces assignments. The dispatching priorities are specified in
PARMLIB member IEAIPSxx. Use APGRNG to capture the top ten priority sets (6
through 15). Specify a suitably high priority for CICS. There are priority levels that
change dynamically, but we recommend a simple fixed priority for CICS. Use
storage isolation only when necessary.
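A sketch of the two members involved, assuming a CICS region named CICSPROD assigned to performance group 2 (all names, numbers, and the DP value are illustrative; the real keyword syntax is in the OS/390 MVS Initialization and Tuning Reference):

```
/* SYS1.PARMLIB(IEAICSxx) -- assign the region to a group  */
SUBSYS=STC
  TRXNAME=CICSPROD,PGN=2

/* SYS1.PARMLIB(IEAIPSxx) -- give that group its priority  */
APGRNG=(6,15)            /* capture priority sets 6-15     */
PGN=2,(DMN=2,DP=F92)     /* fixed, suitably high, for CICS */
```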
You cannot specify a response time, and you must give CICS enough resources to
achieve good performance.
See the OS/390 MVS Initialization and Tuning Reference manual for more
information.
How monitored
Use either the DISPLAY ACTIVE (DA) command on SDSF, or RMF (the
RMFMON command on TSO). For more information about RMF, see “Resource
measurement facility (RMF)” on page 27 or the MVS RMF User’s Guide.
Some fragmentation can also occur in a region when a job initiator starts multiple
jobs without being stopped and then started again. If you define the region as
having the maximum allowable storage size, it is possible to start and stop the job
the first time the initiator is used, but to have an S822 abend (insufficient virtual
storage) the second time the job is started. This is because of the fragmentation
that occurs.
In this situation, either the region has to be decreased, or the job initiator has to be
stopped and restarted.
Effects
Some installations have had S822 abends after doing I/O generations or after
adding DD statements to large applications. An S822 abend occurs when you
request a REGION=nnnnK size that is larger than the amount available in the
address space.
The maximum region size that is available is difficult to define, and is usually
determined by trial and error. One of the reasons is that the size depends on the
system generation and on DD statements.
Limitations
Available virtual storage is increased by starting new initiators to run CICS, or by
using MVS START. Startup time may be minimally increased.
How implemented
CICS startup and use of initiators are defined in an installation’s startup
procedures.
How monitored
Part of the job termination message IEF374I 'VIRT=nnnnnK' shows you the virtual
storage below the 16MB line, and another part 'EXT=nnnnnnnK' shows the virtual
storage above the 16MB line.
In general, ICV can be used in low-volume systems to keep part of the CICS
management code paged in. Expiration of this interval results in a full terminal
control table (TCT) scan in non-VTAM environments, and controls the dispatching
of terminal control in VTAM systems with low activity. Redispatch of CICS by
MVS after the wait may be delayed because of activity in the supervisor or in
higher-priority regions, for example, VTAM. The ICV delay can affect the
shutdown time if no other activity is taking place.
The value of ICV acts as a backstop for MROBTCH (see “Batching requests
(MROBTCH)” on page 311).
Main effect
The region exit interval determines the maximum period between terminal control
full scans. However, the interval between full scans in very active systems may be
less than this, being controlled by the normally shorter terminal scan delay interval
(see “Terminal scan delay (ICVTSD)” on page 211). In such systems, ICV becomes
largely irrelevant unless ICVTSD has been set to zero.
Secondary effects
Whenever control returns to the task dispatcher from terminal control after a full
scan, ICV is added to the current time of day to give the provisional due time for
the next full scan. In idle systems, CICS then goes into an operating-system wait
state, setting the timer to expire at this time. If there are application tasks to
dispatch, however, CICS passes control to these and, if the due time arrives before
CICS has issued an operating-system WAIT, the scan is done as soon as the task
dispatcher next regains control.
In active systems, after the due time has been calculated by adding ICV, the scan
may be performed at an earlier time by application activity (see “Terminal scan
delay (ICVTSD)” on page 211).
Operating-system waits are not always for the duration of one ICV. They last only
until some event ends. One possible event is the expiry of a time interval, but
often CICS regains control because of the completion of an I/O operation. Before
issuing the operating-system WAIT macro, CICS sets an operating-system timer,
specifying the interval as the time remaining until the next time-dependent activity
becomes due for processing. This is usually the next terminal control scan,
controlled by either ICV or ICVTSD, but it can be the earliest ICE expiry time, or
even less.
In high-activity systems, where CICS is contending for processor time with very
active higher-priority subsystems (VTAM, TSO, other CICS systems, or DB/DC),
control may be seized from CICS so often that CICS always has work to do and
never issues an operating-system WAIT.
Limitations
Too low a value can impair concurrent batch performance by causing frequent and
unnecessary dispatches of CICS by MVS. Too high a value can lead to an
appreciable delay before the system handles time-dependent events (such as
abends for terminal read or deadlock timeouts) after the due time.
A low ICV value does not prevent all CICS modules from being paged out. When
the ICV time interval expires, the operating system dispatches CICS task control
which, in turn, dispatches terminal control. CICS references only task control,
terminal control, the TCT, and the CSA; no other CICS modules are referenced. If
there is a storage constraint, the other modules do not stay in real storage.
The ICV delay can affect the shutdown time if no other activity is taking place.
Recommendations
The time interval can be any decimal value in the range from 100 through 3600000
milliseconds.
A low interval value can enable much of the CICS nucleus to be retained, and not
be paged out at times of low terminal activity. This reduces the amount of paging
necessary for CICS to process terminal transactions (thus representing a potential
reduction in response time), sometimes at the expense of concurrent batch region
throughput. Large networks with high terminal activity tend to drive CICS without
a need for this value, except to handle the occasional, but unpredictable, period of
inactivity. These networks can usually function with a large interval (10000 to
30000 milliseconds). After a task has been initiated, the system recognizes its
requests for terminal services and the completion of the services, and overrides this
maximum delay interval.
Small systems or those with low terminal activity are subject to paging introduced
by other jobs running in competition with CICS. If you specify a low interval
value, key portions of the CICS nucleus are referenced more frequently, thus
reducing the probability of these pages being paged-out. However, the execution of
the logic, such as terminal polling activity, without performing productive work
might be considered wasteful.
You must weigh the need to increase the probability of residency by frequent but
unproductive referencing, against the extra overhead and longer response times
incurred by allowing the paging to occur. If you increase the interval size, more
productive work is performed at the expense of performance if paging occurs
during the periods of CICS activity.
How implemented
ICV is specified in the SIT or at startup, and can be changed using either the
CEMT or EXEC CICS SET SYSTEM (time) command. It is defined in units of
milliseconds, rounded down to the nearest multiple of ten. The default is 1000
(that is, one second; usually too low).
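For example, to set a 10-second interval (10000 milliseconds is an illustrative value for a lightly loaded system):

```
* SIT override in the CICS startup job stream:
ICV=10000

* Changing the value on a running system:
CEMT SET SYSTEM TIME(10000)
```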
How monitored
The region exit interval can be monitored by the frequency of CICS
operating-system WAITs that are counted in “Dispatcher domain” on page 367.
LLA manages modules (system or application) whose library names you have put
in the appropriate CSVLLAxx member in SYS1.PARMLIB.
There are two optional parameters in this member that affect the management of
specified libraries:
FREEZE
Tells the system always to use the copy of the directory that is maintained
in the LLA address space.
NOFREEZE
Tells the system always to search the directory that resides in DASD
storage.
However, FREEZE and NOFREEZE are only relevant when LLACOPY is not used.
When CICS issues a LOAD and specifies the directory entry (DE), it bypasses the
LLA directory processing, but determines from LLA whether the program is
already in VLF or must be fetched from DASD. For more information about the
FREEZE and NOFREEZE options, see the OS/390 MVS Initialization and Tuning
Guide.
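A sketch of a CSVLLAxx member that places a DFHRPL library under LLA management and freezes its directory (the data set name is hypothetical):

```
/* SYS1.PARMLIB(CSVLLAxx) -- sketch only                          */
LIBRARIES(CICS.PROD.DFHRPL)   /* manage this load library         */
FREEZE(CICS.PROD.DFHRPL)      /* use LLA's in-storage directory   */
```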
The use of LLA to manage a very busy DFHRPL library can show two distinct
benefits:
1. Improved transaction response time
2. Better DASD utilization.
In addition to any USER-defined CICS DFHRPL libraries, LLA also manages the
system LNKLST. It is likely that staging some modules from the LNKLST could
have more effect than staging modules from the CICS libraries. LLA makes
decisions on what is staged to VLF only after observing the fetch activity in the
system for a certain period. For this reason it is possible to see I/O against a
program library even when it is managed by LLA.
Another contributing factor for continued I/O is the system becoming “MAXVIRT
constrained”, that is, the sum of bytes from the working set of modules is greater
than the MAXVIRT parameter for the LLA class of VLF objects. You can increase
this value by changing it in the COFVLF member in SYS1.PARMLIB. A value too
small can cause excessive movement of that VLF object class; a value too large can
cause excessive paging; both may increase the DASD activity significantly.
See the OS/390 MVS Initialization and Tuning Guide manual for information on LLA
and VLF parameters.
Effects of LLACOPY
CICS can use one of two methods for locating modules in the DFHRPL
concatenation. Either a build link-list (BLDL) macro or an LLACOPY macro is
issued to return the directory information to pass to the load request. Which macro
is issued depends on the LLACOPY system initialization parameter and the
reason for locating the module.
The LLACOPY macro is used to update the LLA-managed directory entry for a
module or a list of modules. If an LLACOPY is issued against an LLA-managed
module, it results in a BLDL with physical I/O against the DCB specified.
If the directory information does not match that which is stored within LLA, the
LLA tables are then updated, keeping both subsystems synchronized. While this
activity takes place, an ENQ on the resource SYSZLLA1.update is held; any other
LLACOPY request on the same MVS system is therefore delayed until the ENQ is
released.
The BLDL macro also returns the directory information. When a BLDL is issued
against an LLA-managed module, the information returned comes from the LLA
copy of the directory, if one exists. It does not necessarily result in physical I/O to
the data set and may therefore be out of step with the actual data set. BLDL does
not require the SYSZLLA1.update ENQ and is therefore less prone to delay on the
same MVS system. Note that it is not advisable to use the NOCONNECT option
when invoking the BLDL macro, because the DFHRPL concatenation may contain
partitioned data set extended (PDSE) data sets. PDSEs provide more function than
PDSs, but CICS may not recognize some of it. PDSEs also use more virtual storage.
If you code LLACOPY=NO, CICS never issues an LLACOPY macro. Instead, each
time the RPL dataset is searched for a module, a BLDL is issued.
DASD tuning
The main solutions to DASD problems are to:
v Reduce the number of I/O operations
v Tune the remaining I/O operations
v Balance the I/O operations load.
Take the following figures as guidelines for best DASD response times for online
systems:
v Channel busy: less than 30% (with CHPIDs this can be higher)
v Device busy: less than 35% for randomly accessed files
v Average response time: less than 20 milliseconds.
Aim for multiple paths to disk controllers because this allows dynamic path
selection to work.
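The guideline ceilings above can be checked mechanically. The following is a minimal sketch (the function and metric names are mine, not from this guide) that flags any measured figure exceeding its ceiling:

```python
# Guideline ceilings from the text (channel busy may legitimately run
# higher when channel path IDs, CHPIDs, are in use).
GUIDELINES = {
    "channel_busy_pct": 30,   # channel busy: less than 30%
    "device_busy_pct": 35,    # device busy: randomly accessed files
    "avg_response_ms": 20,    # average response time
}

def dasd_flags(measured):
    """Return the names of any measured metrics that exceed a ceiling."""
    return [name for name, limit in GUIDELINES.items()
            if measured.get(name, 0) > limit]

flags = dasd_flags({"channel_busy_pct": 25,
                    "device_busy_pct": 40,
                    "avg_response_ms": 18})
print(flags)  # ['device_busy_pct']
```

Any flagged device or channel is a candidate for the load-balancing and I/O-reduction work described above.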
For TCAM, the DFHTCT TYPE=TERMINAL TIOAL=value macro is the only way
to adjust this value.
One value defining the minimum size is used for non-SNA devices, while two
values specifying both the minimum and maximum size are used for SNA devices.
This book does not discuss the performance aspects of the CICS Front End
Programming Interface. See the CICS Front End Programming Interface User’s Guide
for more information.
Effects
When value1,0 is specified for IOAREALEN, value1 is the minimum size of the
terminal input/output area that is passed to an application program when a
RECEIVE command is issued. If the size of the input message exceeds value1, the
area passed to the application program is the size of the input message.
When value1, value2 is specified, value1 is the minimum size of the terminal
input/output area that is passed to an application program when a RECEIVE
command is issued. Whenever the size of the input message exceeds value1, CICS
will use value2. If the input message size exceeds value2, the node abnormal
condition program sends an exception response to the terminal.
If you specify ATI(YES), you must specify an IOAREALEN of at least one byte.
Limitations
Real storage can be wasted if the IOAREALEN (value1) or TIOAL value is too
large for most terminal inputs in the network. If IOAREALEN (value1) or TIOAL
is smaller than most initial terminal inputs, excessive GETMAIN requests can
occur, resulting in additional processor requirements, unless IOAREALEN(value1)
or TIOAL is zero.
Recommendations
IOAREALEN(value1) or TIOAL should be set to a value that is slightly larger than
the average input message length for the terminal. The maximum value that may
be specified for IOAREALEN/TIOAL is 32767 bytes.
If a nonzero value is required, the best size to specify is the most commonly
encountered input message size. A multiple of 64 bytes minus 21 allows for SAA
requirements and ensures good use of operating system pages.
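That "multiple of 64 minus 21" rule can be sketched as follows (a hypothetical helper, not part of CICS; it simply picks the smallest such value that covers the average message length):

```python
def suggest_ioarealen(avg_msg_len):
    """Smallest value of the form 64*n - 21 that covers avg_msg_len,
    capped at the IOAREALEN/TIOAL maximum of 32767 bytes."""
    n = 1
    while 64 * n - 21 < avg_msg_len:
        n += 1
    return min(64 * n - 21, 32767)

print(suggest_ioarealen(300))  # 363, that is 6*64 - 21
```

The 21 bytes of headroom allow for SAA requirements, and keeping the total a multiple of 64 makes good use of operating system pages.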
For VTAM, you can specify two values if inbound chaining is used. The first value
should be the length of the normal chain size for the terminal, and the second
value should be the maximum size of the chain. The length of the TIOA presented
to the task depends on the message length and the size specified for the TIOA.
(See the example in Figure 30.)
Avoid specifying too large a value1, for example, by matching it to the size of the
terminal display screen. This area is used only as input. If READ with SET is
specified, the same pointer is used by applications for an output area.
If too small a value is specified for value1, extra processing time is required for
chain assembly, or data is lost if inbound chaining is not used.
In general, a value of zero is best because it causes the optimum use of storage and
eliminates the second GETMAIN request. If automatic transaction initiation (ATI) is
used for that terminal, a minimum size of one byte is required.
How implemented
For VTAM, the TIOA value is specified in the CEDA DEFINE TYPETERM
IOAREALEN attribute.
For TCAM, the TIOAL value can be specified in the terminal control table (TCT)
TYPE=TERMINAL operand. TIOAL defaults to the INAREAL value specified in
the TCT TYPE=LINE operand.
How monitored
RMF and NetView Performance Monitor (NPM) can be used to show storage usage
and message size characteristics in the network.
Storage for the RAIAs, which is above the 16MB line, is allocated by the CICS
terminal control program during CICS initialization, and remains allocated for the
entire execution of the CICS job step. The size of this storage is the product of the
RAPOOL and RAMAX system initialization parameters.
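The sizing rule in the paragraph above can be written down directly. This is a sketch; RAPOOL here means the pool count (value1 of the parameter):

```python
def raia_storage_bytes(rapool, ramax):
    """Storage allocated for receive-any input areas (RAIAs): the product
    RAPOOL x RAMAX, allocated above the 16MB line at CICS initialization
    and held for the whole CICS job step."""
    return rapool * ramax

print(raia_storage_bytes(20, 256))  # 5120 bytes
```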
Effects
VTAM attempts to put any incoming RU into the initial receive-any input area,
which has the size of RAMAX. If this is not large enough, VTAM indicates how
many extra bytes are waiting that cannot be accommodated.
RAMAX is the largest size of any RU that CICS can take directly in the receive-any
command, and is a limit against which CICS compares VTAM’s indication of the
overall size of the RU. If there is more, VTAM saves it, and CICS gets the rest in a
second request.
With a small RAMAX, you reduce the virtual storage taken up in RAIAs but risk
more processor usage in VTAM retries to get any data that could not fit into the
RAIA.
For many purposes, the default RAMAX value of 256 bytes is adequate. If you
know that many incoming RUs are larger than this, you can always increase
RAMAX to suit your system.
For individual terminals, there are separate parameters that determine how large
an RU is going to be from that device. It makes sense for RAMAX to be at least as
large as the largest CEDA SENDSIZE for any frequently-used terminals.
Limitations
Real storage can be wasted with a high RAMAX value, and additional processor
time can be required with a low RAMAX value. If the RAMAX value is set too
low, extra processor time is needed to acquire additional buffers to receive the
remaining data. Because most inputs are no longer than 256 bytes, the default
value of 256 is normally adequate.
Do not specify a RAMAX value that is less than the RUSIZE (from the CINIT) for
a pipeline terminal because pipelines cannot handle overlength data.
Recommendations
Code RAMAX with the size in bytes of the I/O area allocated for each receive-any
request issued by CICS. The maximum value is 32767.
Set RAMAX to be slightly larger than your CICS system input messages. If you
know the message length distribution for your system, set the value to
accommodate the majority of your input messages.
In any case, the size required for RAMAX need only take into account the first (or
only) RU of a message. Thus, messages sent using SNA chaining do not require
RAMAX based on their overall chain length, but only on the size of the constituent
RUs.
Receive-any input areas are taken from a fixed length subpool of storage. A size of
2048 may appear to be adequate for two such areas to fit on one 4KB page, but
only 4048 bytes are available in each page, so only one area fits on one page. A
size of 2024 should be defined to ensure that two areas, including page headers, fit
on one page.
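The page-fit arithmetic above can be checked with a short sketch (4096-byte pages with 4048 usable bytes each, per the text):

```python
USABLE_BYTES_PER_4K_PAGE = 4048  # 4096 minus page headers, per the text

def raias_per_page(raia_size):
    """How many fixed-length receive-any input areas fit in one 4KB page."""
    return USABLE_BYTES_PER_4K_PAGE // raia_size

print(raias_per_page(2048))  # 1 -- 2 x 2048 = 4096, more than 4048
print(raias_per_page(2024))  # 2 -- 2 x 2024 = 4048, an exact fit
```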
How implemented
RAMAX is a system initialization parameter.
How monitored
The size of RUs or chains in a network can be identified with a VTAM line or
buffer trace. The maximum size RUs are defined in the CEDA SENDSIZE attribute.
Effects
Initially, task input from a terminal or session is received by the VTAM access
method and is passed to CICS if CICS has a receive-any request outstanding.
For each receive-any request, a VTAM request parameter list (RPL), a receive-any
control element (RACE), and a receive-any input area (RAIA) of the size specified
by RAMAX (see “Receive-any input areas (RAMAX)” on page 203) are set aside.
The total area set aside for VTAM receive-any operations is:
   RAPOOL x (RAMAX + RACE size + RPL size)
If HPO=YES, both RACE and RPL are above the 16MB line.
In general, input messages up to the value specified in RAPOOL are all processed
in one dispatch of the terminal control task. Because the processing of a
receive-any request is a short operation, at times more messages than the RAPOOL
value may be processed in one dispatch of terminal control. This happens when a
receive-any request completes before the terminal control program has finished
processing and there are additional messages from VTAM.
The pool is used only for the first input to start a task; it is not used for output or
conversational input. VTAM posts the event control block (ECB) associated with
the receive any input area. CICS then moves the data to the terminal I/O area
(TIOA) ready for task processing. The RAIA is then available for reuse.
Where useful
Use the RAPOOL operand in networks that use the VTAM access method for
terminals.
Limitations
If the RAPOOL value is set too low, this can result in terminal messages not being
processed in the earliest dispatch of the terminal control program, thereby
inducing transaction delays during high-activity periods. For example, if you use
the default and five terminal entries want to start up tasks, three tasks may be
delayed for at least the time required to complete the VTAM receive-any request
and copy the data and RPL. In general, no more than 5 to 10% of all receive-any
processing should be at the RAPOOL ceiling, with none being at the RAPOOL
ceiling if there is sufficient storage.
Recommendations
Whether RAPOOL is significant or not depends on the environment of the CICS
system: whether, for example, HPO is being used.
In some cases, it may be more economical for VTAM to store the occasional peak
of messages in its own areas than for CICS itself to have a large number of RAIAs,
many of which are unused most of the time.
CICS maintains a VTAM RECEIVE ANY for n of the RPLs, where n is either the
RAPOOL value, or the MXT value minus the number of currently active tasks,
whichever is the smaller. See the CICS System Definition Guide for more information
about these SIT parameters.
The RAPOOL value you set depends on the number of sessions, the number of
terminals, and the ICVTSD value (see page 211) in the system initialization table
(SIT). Initially, for non-HPO systems, you should set RAPOOL to 1.5 times your
peak local 2 transaction rate per second plus the autoinstall rate. This can then be
adjusted by analyzing the CICS VTAM statistics and by resetting the value to the
maximum RPLs reached.
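As a sketch of that starting point (assuming, as one reading of the text, 1.5 times the peak rate with the autoinstall rate added afterwards; the function name is mine):

```python
import math

def initial_rapool(peak_local_tx_per_sec, autoinstall_per_sec=0.0):
    """Non-HPO starting value for RAPOOL: 1.5 x peak local transaction
    rate per second, plus the autoinstall rate. Tune afterwards against
    the maximum-RPLs-posted figure in the CICS VTAM statistics."""
    return math.ceil(1.5 * peak_local_tx_per_sec + autoinstall_per_sec)

print(initial_rapool(10, 2))  # 17
```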
For HPO systems, a small value (<= 5) is usually sufficient if specified through
value2 in the RAPOOL system initialization parameter. RAPOOL=20, for example,
can be specified as either RAPOOL=(20) or RAPOOL=(20,5) to achieve the same
effect.
How implemented
RAPOOL is a system initialization parameter.
How monitored
The CICS VTAM statistics contain values for the maximum number of RPLs posted
on any one dispatch of the terminal control program, and the number of times the
RPL maximum was reached. This maximum value may be greater than the
RAPOOL value if the terminal control program is able to reuse an RPL during one
dispatch. See “VTAM statistics” on page 51 for more information.
2. The RAPOOL figure does not include MRO sessions, so you should set RAPOOL to a low value in application- or file-owning
regions (AORs or FORs).
Effects
HPO bypasses some of the validating functions performed by MVS on I/O
operations, and implements service request block (SRB) scheduling. This shortens
the instruction pathlength, and the SRB scheduling allows some concurrent
processing on MVS images for the VTAM operations. This makes it useful in a
multiprocessor environment, but not in a single-processor environment.
Limitations
HPO requires CICS to be authorized, and some risk to MVS integrity is involved,
because a user-written module could be made to replace one of the CICS system
initialization routines and run in authorized mode. This risk can be reduced by
protecting the CICS SDFHAUTH data set with RACF.
Use of HPO saves processor time, and does not increase real or virtual storage
requirements or I/O contention. The only expense of HPO is the potential security
exposure that arises because of a deficiency in validation.
Recommendations
The general recommendation is that all production systems with vetted
applications can use HPO. It is totally application-transparent and introduces no
function restrictions while providing a reduced pathlength through VTAM. In the
case of VTAM, the reduced validation does not induce any integrity loss for the
messages.
How implemented
The SVCs and use of HPO are specified in the system initialization table (SIT) and,
if the default SVC numbers are acceptable, no tailoring of the system is required.
How monitored
There is no direct measurement of HPO. One way to tell if it is working is to take
detailed measurements of processor usage with HPO turned on (SIT option) and
with it turned off. Depending on the workload, you may not see much difference.
Another way to check whether it is working is that you may see a small increase
in the SRB scheduling time with HPO turned on.
RMF can give general information on processor usage. An SVC trace can show
how HPO was used.
Note that you should take care when using HPO in a system that is being used
for early testing of a new application or CICS code (a new release or PUT). Much
of the pathlength reduction is achieved by bypassing control block verification
code in VTAM. Untested code might corrupt the control blocks that CICS passes
to VTAM, and unvalidated applications can lead to a security exposure.
Effects
One of the options in Systems Network Architecture (SNA) is whether the
messages exchanged between CICS and a terminal are to be in definite or
exception response mode. Definite response mode requires both the terminal and
CICS to provide acknowledgment of receipt of messages from each other on a
one-to-one basis.
SNA also ensures message delivery through synchronous data link control (SDLC),
so definite response is not normally required. Specifying message integrity
(MSGINTEG) causes the sessions for which it is specified to operate in definite
response mode.
In other cases, the session between CICS and a terminal operates in exception
response mode, and this is the normal case.
In SNA, transactions are defined within brackets. A begin bracket (BB) command
defines the start of a transaction, and an end bracket (EB) command defines the
end of that transaction. Unless CICS knows ahead of time that a message is the
last of a transaction, it must send the EB separately after the last message when
the transaction terminates. When CICS does know, the EB, which is an SNA
command, can be sent with the message, eliminating one required transmission to
the terminal.
Specifying the ONEWTE option for a transaction implies that only one output
message is to be sent to the terminal by that transaction, and allows CICS to send
the EB along with that message. Only one output message is allowed if ONEWTE
is specified and, if a second message is sent, the transaction is abended.
The second way to allow CICS to send the EB with a terminal message is to code
the LAST option on the last terminal control or basic mapping support SEND
command in a program. Multiple SEND commands can be used, but the LAST
option must be coded for the final SEND in a program.
The third (and most common) way is to issue SEND without WAIT as the final
terminal communication. The message is then sent as part of task termination.
Where useful
The above options can be used in all CICS systems that use VTAM.
Limitations
The MSGINTEG option causes additional transmissions to the terminal.
Transactions remain in CICS for a longer period, tying up virtual storage and
other resources. When MSGINTEG is specified, the TIOA remains in storage until
the response is received from the terminal. This option can increase the virtual
storage requirements for the CICS region because of the longer duration of the
storage needs.
How implemented
With resource definition online (RDO) using the CEDA transaction, protection can
be specified in the PROFILE definition by means of the MSGINTEG and ONEWTE
options. The MSGINTEG option is used with SNA LUs only. See the CICS Resource
Definition Guide for more information about defining a PROFILE.
How monitored
You can monitor the use of the above options from a VTAM trace by examining
the exchanges between terminals and CICS and, in particular, by examining the
contents of the request/response header (RH).
Input chain size and characteristics are normally dictated by the hardware
requirements of the terminal in question, and so the CEDA BUILDCHAIN and
RECEIVESIZE attributes have default values which depend on device attributes.
The size of an output chain is specified by the CEDA SENDSIZE attribute.
Effects
Because the network control program (NCP) also segments messages into 256-byte
blocks for normal LU Type 0, 1, 2, and 3 devices, a SENDSIZE value of zero
eliminates the overhead of output chaining. A value of 0 or 1536 is required for
local devices of this type.
If you specify the CEDA SENDSIZE attribute for intersystem communication (ISC)
sessions, this must match the CEDA RECEIVESIZE attribute in the other system.
The CEDA SENDSIZE attribute or TCT BUFFER operand controls the size of the
SNA element that is to be sent, and the CEDA RECEIVESIZEs need to match so
that there is a corresponding buffer of the same size able to receive the element.
Where useful
Chaining can be used in systems that use VTAM and SNA terminals of types that
tolerate chaining.
Limitations
If you specify a low CEDA SENDSIZE value, this causes additional processing and
real and virtual storage to be used to break the single logical message into multiple
parts.
Chaining may be required for some terminal devices. Output chaining can cause
flickering on display screens, which can annoy users. Chaining also causes
additional I/O overhead between VTAM and the NCP by requiring additional
VTAM subtasks and STARTIO operations. This additional overhead is eliminated
with applicable ACF/VTAM releases by making use of the large message
performance enhancement option (LMPEO).
Recommendations
The CEDA RECEIVESIZE value for IBM 3274-connected display terminals should
be 1024; for IBM 3276-connected display terminals it should be 2048. These values
give the best line characteristics while keeping processor usage to a minimum.
How implemented
Chaining characteristics are specified in the CEDA DEFINE TYPETERM statement
with the SENDSIZE, BUILDCHAIN, and RECEIVESIZE attributes.
How monitored
Use of chaining and chain size can be determined by examining a VTAM trace.
You can also use the CICS internal and auxiliary trace facilities, in which the VIO
ZCP trace shows the chain elements. Some of the network monitor tools such as
NetView Performance Monitor (NPM) give this data.
Each concurrent logon/logoff requires storage in the CICS dynamic storage areas
for the duration of that processing.
Where useful
The OPNDLIM system initialization parameter can be used in CICS systems that
use VTAM as the terminal access method.
The OPNDLIM system initialization parameter can also be useful if there are times
when all the user community tends to log on or log off at the same time, for
example, during lunch breaks.
Limitations
If too low a value is specified for OPNDLIM, real and virtual storage requirements
are reduced within CICS and VTAM buffer requirements may be cut back, but
session initializations and terminations take longer.
Recommendations
Use the default value initially and make adjustments if statistics indicate that too
much storage is required in your environment or that the startup time (DEFINE
TYPETERM AUTOCONNECT attribute in CEDA) is excessive.
OPNDLIM should be set to a value not less than the number of LUs connected to
any single VTAM line.
How implemented
OPNDLIM is a system initialization parameter.
How monitored
Logon and logoff activities are not reported directly by CICS or any measurement
tools, but can be analyzed using the information given in a VTAM trace or VTAM
display command.
This last case arises from the way that CICS scans active tasks.
On CICS non-VTAM systems, the delay value specifies how long the terminal
control program must wait after an application terminal request, before it carries
out a TCT scan. The value thus controls batching and delay in the associated
processing of terminal control requests. In a low-activity system, it controls the
dispatching of the terminal control program.
The batching of requests reduces processor time at the expense of longer response
times. On CICS VTAM systems, it influences how quickly the terminal control
program completes VTAM request processing, especially when the MVS high
performance option (HPO) is being used.
Effects
VTAM
In VTAM networks, a low ICVTSD value does not cause full TCT scans because
the input from or output to VTAM terminals is processed from the activate queue
chain, and only those terminal entries are scanned.
With VTAM terminals, CICS uses bracket protocol to indicate that the terminal is
currently connected to a transaction. The bracket is started when the transaction is
initiated, and ended when the transaction is terminated. This means that there
could be two outputs to the terminal per transaction: one for the data sent and one
when the transaction terminates containing the end bracket. In fact, only one
output is sent (except for WRITE/SEND with WAIT and definite response). CICS
holds the output data until the next terminal control request or termination. In this
way it saves processor cycles and line utilization by sending the message and end
bracket or change direction (if the next request was a READ/RECEIVE) together in
the same output message (PIU). When the system gets very busy, terminal control
is dispatched less frequently and becomes more dependent upon the value
specified in ICVTSD. Because CICS may not send the end bracket to VTAM for an
extended period of time, the life of a transaction can be extended. This keeps
storage allocated for that task for longer periods and potentially increases the
amount of virtual storage required for the total CICS dynamic storage areas.
Non-VTAM
ICVTSD is the major control on the frequency of full terminal control table (TCT)
scanning of non-VTAM terminals. In active systems, a full scan is done
approximately once every ICVTSD. The average extra delay before sending an
output message should be about half this period.
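That relationship is simple enough to state as a sketch (the function name is mine):

```python
def avg_extra_output_delay_ms(icvtsd_ms):
    """Non-VTAM terminals: a full TCT scan runs about once every ICVTSD,
    so the average added delay before an output message is sent is
    roughly half the scan interval."""
    return icvtsd_ms / 2.0

print(avg_extra_output_delay_ms(500))  # 250.0
```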
All networks
The ICVTSD parameter can be changed in the system initialization table (SIT) or
through JCL parameter overrides. If you are having virtual storage constraint
problems, it is highly recommended that you reduce the value specified in
ICVTSD. A value of zero causes the terminal control task to be dispatched most
frequently. If you also have a large number of non-VTAM terminals, this may
increase the amount of nonproductive processor cycles. A value of 100 to 300
milliseconds may be more appropriate for that situation. In a pure VTAM
environment, however, the overhead is not significant, unless the average
transaction has a very short pathlength, and ICVTSD should be set to zero for a
better response time and best virtual storage usage.
Where useful
The ICVTSD system initialization parameter can be used in all except very
low-activity CICS systems.
Limitations
In TCAM systems, a low ICVTSD value can cause excessive processor time to be
used in slower processor units, and can delay the dispatch of user tasks because
too many full TCT scans have to be done. A high ICVTSD value can increase
response time by an average of one half of the ICVTSD value, and can tie up
resources owned by the task because the task takes longer to terminate. This
applies to conversational tasks.
In VTAM systems, a low value adds the overhead of scanning the activate queue
TCTTE chain, which is normally a minor consideration. A high value in
high-volume systems can increase task life and tie up resources owned by that task
for a longer period of time; this can be a significant consideration.
A low, nonzero value of ICVTSD can cause CICS to be dispatched more frequently,
which increases the overhead of performance monitoring.
Recommendations
Set ICVTSD to a value less than the region exit time interval (ICV), which is also in
the system initialization table (see page 192). Use the value of zero in an
environment that contains only VTAM terminals and consoles, unless your
workload consists of many short transactions. However, ICVTSD=0 in a VTAM
terminal-only environment is not recommended for a workload with low terminal
activity but high task activity, because periods of low terminal activity can lead to
delays in CSTP being dispatched. Setting ICVTSD in the range 100 through 500
resolves this by causing CSTP to be dispatched regularly. For non-VTAM systems,
specify the value of zero only for small networks (1 through 30 terminals).
The recommended absolute minimum level, for systems that are not “pure”
VTAM, is approximately 250 milliseconds or, in really high-performance,
high-power systems that are “pure” VTAM, 100 milliseconds.
How implemented
The ICVTSD system initialization parameter is defined in units of milliseconds.
Use CEMT SET SYSTEM SCANDELAY(nnnn) or EXEC CICS SET SYSTEM
SCANDELAY(nnnn) to reset the value of ICVTSD.
In reasonably active systems, a nonzero ICVTSD virtually replaces ICV (see page
194) because the time to the next TCT full scan (non-VTAM) or sending of output
requests (VTAM) is the principal influence on operating system wait duration.
How monitored
Use RMF to monitor task duration and processor requirements. The dispatcher
domain statistics report the value of ICVTSD.
Effects
If the preceding transaction fails to terminate during the NPDELAY interval, the
X'87' unsolicited-input error condition is raised.
Where useful
When several queues are defined for TCAM-to-CICS processing, CICS can suspend
the acceptance of input messages from one or more of the queues without
completely stopping the flow of input from TCAM to CICS.
Choosing an appropriate value for NPDELAY is a matter of tuning. Even with the
“cascade” list approach, some messages may be held up behind an unsolicited
message. The objective should be to find the minimum value that can be specified
for NPDELAY which is sufficient to eliminate the unsolicited-input errors.
Limitations
Some additional processor cycles are required to process the exit code, and the
coding of the exit logic also requires some effort. Use of a compression exit reduces
the storage requirements of VTAM or TCAM and NCP, and reduces line
transmission time.
Recommendations
The simplest operation is to replace redundant characters, especially blanks, with a
repeat-to-address sequence in the data stream for 3270-type devices.
Note: The repeat-to-address sequence is not handled very quickly on some types
of 3270 cluster controller. In some cases, alternatives may give superior
performance. For example, instead of sending a repeat-to-address sequence
for a series of blanks, you should consider sending an ERASE and then
set-buffer-address sequences to skip over the blank areas. This is satisfactory
if nulls are acceptable in the buffer as an alternative to blanks.
Another technique for reducing the amount of data transmitted is to turn off any
modified data tags on protected fields in an output data stream. This eliminates
the need for those characters to be transmitted back to the processor on the next
input message, but you should review application dependencies on those fields
before you try this.
There may be other opportunities for data compression in individual systems, but
you may need to investigate the design of those systems thoroughly before you
can implement them.
How monitored
The contents of output terminal data streams can be examined in either a VTAM or
TCAM trace.
The AIQMAX value does not limit the total number of devices that can be
autoinstalled.
Setting the restart delay to zero means that you do not want CICS to re-install the
autoinstalled terminal entries from the global catalog during emergency restart. In
this case, CICS does not write the terminal entries to the catalog while the terminal
is being autoinstalled. This can have positive performance effects on the following
processes:
Normal shutdown
CICS deletes AI terminal entries from the GCD during normal
shutdown unless they were not cataloged (AIRDELAY=0) and the terminal has not
been deleted. If the restart delay is set to zero, CICS has not cataloged terminal
entries when they were autoinstalled, so they are not deleted. This can reduce
normal shutdown time.
XRF takeover
The system initialization parameter, AIRDELAY, should not affect
XRF takeover. The tracking process still functions as before regardless of the value
of the restart delay. Thus, after a takeover, the alternate system still has all the
autoinstalled terminal entries. However, if a takeover occurs before the catchup
process completes, some of the autoinstalled terminals have to log on to CICS
again. The alternate CICS system has to rely on the catalog to complete the
catchup process and, if the restart delay is set to zero in the active system, the
alternate system is not able to restore the autoinstalled terminal entries that have
not been tracked. Those terminals have to log on to the new CICS system, rather
than being switched or rebound after takeover.
You have to weigh the risk of having some terminal users log on again because
tracking has not completed, against the benefits introduced by setting the restart
delay to zero. Because catchup takes only a few minutes, the chance of such a
takeover occurring is usually small.
In general, setting the delete delay to a nonzero value can improve the
performance of CICS when many autoinstalled terminals are logging on and off
during the day. However, this does mean that unused autoinstalled terminal entry
storage is not freed for use by other tasks until the delete delay interval has
expired. This parameter provides an effective way of defining a terminal whose
storage lifetime is somewhere between that of an autoinstalled terminal and a
statically defined terminal.
The effect of setting the delete delay to a nonzero value can have different effects
depending on the value of the restart delay:
Nonzero restart delay
When the restart delay is nonzero, CICS catalogs
autoinstalled terminal entries in the global catalog.
If the delete delay is nonzero as well, CICS retains the terminal entry so that it is
re-used when the terminal logs back on. This can eliminate the overhead of:
v Deleting the terminal entry in virtual storage
v An I/O to the catalog and recovery log
v Re-building the terminal entry when the terminal logs on again.
Zero restart delay
When the restart delay is zero, CICS does not catalog autoinstalled terminal
entries. If the delete delay is nonzero, CICS retains the terminal entry so that it is
re-used when the terminal logs back on. This can save the overhead of deleting
the terminal entry in virtual storage and of rebuilding it when the terminal logs
on again.
Effects
You can control the use of resources by autoinstall processing in three ways:
1. By using the transaction class limit to restrict the number of autoinstall tasks
that can concurrently exist (see page 288).
2. By using the CATA and CATD transactions to install and delete autoinstall
terminals dynamically. If you have a large number of devices autoinstalled,
shutdown can fail due to the MXT system initialization parameter being
reached or CICS becoming short on storage. To prevent this possible cause of
shutdown failure, you should consider putting the CATD transaction in a class
of its own to limit the number of concurrent CATD transactions.
3. By specifying AIQMAX to limit the number of devices that can be queued for
autoinstall. This protects against abnormal consumption of virtual storage by
the autoinstall process, caused as a result of some other abnormal event.
If the AIQMAX limit is reached, CICS requests VTAM to stop passing LOGON
and BIND requests to CICS. VTAM holds such requests until CICS indicates that
it can accept further LOGONs and BINDs (which happens when CICS has
processed a queued autoinstall request).
Recommendations
If the autoinstall process is noticeably slowed down by the AIQMAX limit, raise it.
If the CICS system shows signs of running out of storage, reduce the AIQMAX
limit. If possible, set the AIQMAX system initialization parameter to a value higher
than that reached during normal operations.
For many systems, a value of zero for both restart delay and delete delay gives
the best balance of performance and virtual-storage usage.
Because a considerable number of messages are sent to transient data during logon
and logoff, the performance of these output destinations should also be taken into
consideration.
How monitored
Monitor the autoinstall rate during normal operations by inspecting the
autoinstall statistics regularly.
CICS Web performance in a sysplex
The dynamic routing facility is extended to provide mechanisms for dynamically
routing program-link requests received from outside CICS. The target program of
a CICS Web application can be run anywhere in a sysplex by dynamically routing
the EXEC CICS LINK to the target application. Because Web bridge transactions
have major affinities, they should either not be routed, or always be routed to the
same region. When CICSPlex SM is used to route the program-link requests, the
transaction ID becomes much more significant, because CICSPlex SM's routing
logic is transaction-based: CICSPlex SM routes each DPL request according to the
rules specified for its associated transaction. This dynamic routing adds
pathlength for both routed and nonrouted link requests.
Analyzer and converter programs must run in the same region as the instance of
DFHWBBLI which invokes them, which in the case of CICS Web support, is the
CICS region on which the HTTP request is received.
If the Web API is being used by the application program to process the HTTP
request and build the HTTP response, the application program must also run in
the same CICS region as the instance of DFHWBBLI which is linking to it.
To achieve optimum performance when using templates, you should ensure you
have defined the template as a DOCTEMPLATE and installed the definition
before using it, especially when using the DFHWBTL program. If the template is
not preinstalled when this program is used, DFHWBTL attempts to install it for
you, assuming that it is a member of the partitioned data set referenced by the
DFHHTML DD statement.
The fastest results can be achieved by storing your templates as CICS load
modules. For more information about this, see the CICS Internet Guide. These
modules are managed like other CICS loaded programs and may be flushed out
by program compression when storage is constrained.
When the CICS Web Business Logic Interface is used, the TS queue prefix is
always DFHWEB.
CICS Web support of HTTP 1.0 persistent connections
In most circumstances, CICS Web performance is improved by enabling support
of the HTTP 1.0 Keepalive header.
To enable CICS support of this header, specify NO or a numeric value for the
SOCKETCLOSE keyword on the relevant TCPIPSERVICE definition. If NO or a
numeric value is specified, and the incoming HTTP request contains the
Keepalive header, CICS keeps the socket open in order to allow further HTTP
requests to be sent by the Web browser. If a numeric value is specified, the
interval between receipt of the last HTTP request and arrival of the next must be
less than the interval specified on the TCPIPSERVICE; otherwise CICS closes the
socket. Some HTTP proxy servers do not allow the HTTP 1.0 Keepalive header to
be passed to the end server (in this case, CICS), so Web browsers that wish to use
this header may not be able to pass it to CICS if the HTTP request arrives via
such a proxy server.
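The socket-close decision just described can be sketched as follows. This is an
illustrative model only, not CICS internals: the function name and parameters are
hypothetical, with socketclose=None standing for SOCKETCLOSE(NO) and a
number standing for a timeout interval in seconds.

```python
# Hedged sketch of the Keepalive/SOCKETCLOSE behavior described above.
# Names are illustrative; this is not a CICS API.
def keep_socket_open(has_keepalive_header, socketclose, idle_seconds=0):
    """Return True if the socket would be kept open for further requests."""
    if not has_keepalive_header:
        return False                   # no Keepalive header: close after the response
    if socketclose is None:            # models SOCKETCLOSE(NO): no timeout
        return True
    return idle_seconds < socketclose  # close once the idle interval expires
```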
CICS Web security
If Secure Sockets Layer is used to make CICS Web transactions more secure,
there is a significant increase in pathlength for these transactions. This increase
can be minimized by use of the HTTP 1.0 Keepalive header. Keeping the socket
open removes the need to perform a full SSL handshake on the second and any
subsequent HTTP request. If CICS or the Web browser closes the socket, the SSL
handshake has to be executed again.
CICS Web 3270 support
Use of the HTTP 1.0 Keepalive header can improve the performance of CICS
Web 3270 support, by removing the need for the Web browser to open a new
sockets connection for each leg of the 3270 conversation or pseudoconversation.
The costs of assigning additional buffers and providing for concurrent operations
on data sets are the additional virtual and real storage that is required for the
buffers and control blocks.
Several factors influence the performance of VSAM data sets. The rest of this
section reviews these factors, and the following sections summarize the various
related parameters of file control.
Note that, in this section, a distinction is made between “files” and “data sets”:
v A “file” means a view of a data set as defined by an installed CICS file resource
definition and a VSAM ACB.
v A “data set” means a VSAM “sphere”, including the base cluster with any
associated AIX® paths.
CICS provides separate LSR buffer pools for data and index records. If only data
buffers are specified, only one set of buffers is built and used for both data and
index records.
LSR files share a common pool of buffers and a common pool of strings (that is,
control blocks supporting the I/O operations). Other control blocks define the file
and are unique to each file or data set. NSR files or data sets have their own set of
buffers and control blocks.
Some important differences exist between NSR and LSR in the way that VSAM
allocates and shares the buffers.
In NSR, the minimum number of data buffers is STRNO + 1, and the minimum
index buffers (for KSDSs and AIX paths) is STRNO. One data and one index buffer
are preallocated to each string, and one data buffer is kept in reserve for CI splits.
If there are extra data buffers, these are assigned to the first sequential operation;
they may also be used to speed VSAM CA splits by permitting chained I/O
operations. If there are extra index buffers, they are shared between the strings and
are used to hold high-level index records, thus providing an opportunity for saving
physical I/O.
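The NSR minimums just described can be expressed as a small calculation. This
is an illustrative sketch only; the function name is hypothetical.

```python
# Minimum NSR buffers, per the rules above: data buffers = STRNO + 1
# (one per string plus one reserved for CI splits); index buffers = STRNO
# for a KSDS or AIX path, none otherwise.
def nsr_minimum_buffers(strno, is_ksds_or_path):
    data = strno + 1
    index = strno if is_ksds_or_path else 0
    return data, index
```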
Before issuing a read to disk when using LSR, VSAM first scans the buffers to
check if the control interval it requires is already in storage. If so, it may not have
to issue the read. This buffer “lookaside” can reduce I/O significantly.
The general recommendation is to use LSR for all VSAM data sets except where
you have one of the following situations:
v A file is very active but there is no opportunity for lookaside because, for
instance, the file is very large.
v High performance is required by the allocation of extra index buffers.
v Fast sequential browse or mass insert is required by the allocation of extra data
buffers.
v Control area (CA) splits are expected for a file, and extra data buffers are to be
allocated to speed up the CA splits.
If you have only one LSR pool, a particular data set cannot be isolated from others
using the same pool when it is competing for strings, and it can only be isolated
when it is competing for buffers by specifying unique CI sizes. In general, you get
more self-tuning effects by running with one large pool, but it is possible to isolate
busy files from the remainder or give additional buffers to a group of high
performance files by using several pools. It is possible that a highly active file has
more successful buffer lookaside and less I/O if it is set up as the only file in an
LSR subpool rather than using NSR. Also the use of multiple pools eases the
restriction of 255 strings for each pool.
Number of strings
The next decision to be made is the number of concurrent accesses to be supported
for each file and for each LSR pool.
VSAM requires one or more strings for each concurrent file operation. For
nonupdate requests (for example, a READ or BROWSE), an access using a base
needs one string, and an access using an AIX needs two strings (one to hold
position on the AIX and one to hold position on the base data set). For update
requests where no upgrade set is involved, a base still needs one string, and a path
two strings. For update requests where an upgrade set is involved, a base needs
1+n strings and a path needs 2+n strings, where n is the number of members in
the upgrade set (VSAM needs one string per upgrade set member to hold
position). Note that, for each concurrent request, VSAM can reuse the n strings
required for upgrade set processing because the upgrade set is updated serially.
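The string requirements listed above can be tallied as follows. This is an
illustrative sketch, not a VSAM interface; the names are hypothetical.

```python
# Strings needed per concurrent request, per the rules above:
# base access = 1 string, path (AIX) access = 2 strings (one to hold
# position on the AIX, one on the base); updates add one string per
# member of the upgrade set.
def strings_needed(via_path, is_update, n_upgrade=0):
    base = 2 if via_path else 1
    extra = n_upgrade if is_update else 0
    return base + extra
```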
See “CICS calculation of LSR pool parameters” on page 231.
Note: There are some special considerations for setting the STRINGS value for an
ESDS file (see “Number of strings considerations for ESDS files” on
page 229).
For LSR, it is possible to specify the precise numbers of strings, or to have CICS
calculate the numbers. The number specified in the LSR pool definition is the
actual number of strings in the pool. If CICS is left to calculate the number of
strings, it derives the pool STRINGS from the RDO file definition and interprets
this, as with NSR, as the actual number of concurrent requests. (For an explanation
of CICS calculation of LSR pool parameters, see “CICS calculation of LSR pool
parameters” on page 231.)
You must decide how many concurrent reads, browses, updates, mass inserts,
and so on you need to support.
If access to a file is read only with no browsing, there is no need to have a large
number of strings; just one may be sufficient. Note that, while a read operation
only holds the VSAM string for the duration of the request, it may have to wait for
the completion of an update operation on the same CI.
In general (but see “Number of strings considerations for ESDS files” on page
229), where some browsing or updates are used, STRINGS should be set to 2 or 3
initially and CICS file statistics should be checked regularly to see the proportion
of wait-on-strings encountered. Wait-on-strings of up to 5% of file accesses would
usually be considered quite acceptable. You should not try, with NSR files, to keep
wait-on-strings permanently zero.
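The 5% rule of thumb above can be applied to figures read from the CICS file
statistics. This is a hedged sketch; the function and field names are illustrative,
not CICS statistics field names.

```python
# Check whether the proportion of wait-on-strings is within the rule of
# thumb given above (up to 5% of file accesses is usually acceptable).
def string_wait_ratio_acceptable(file_accesses, wait_on_strings, limit=0.05):
    if file_accesses == 0:
        return True                    # no accesses, nothing to tune
    return (wait_on_strings / file_accesses) <= limit
```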
CICS manages string usage for both files and LSR pools. For each file, whether it
uses LSR or NSR, CICS limits the number of concurrent VSAM requests to the
STRINGS= specified in the file definition. For each LSR pool, CICS also prevents
more requests being concurrently made to VSAM than can be handled by the
strings in the pool. Note that, if additional strings are required for upgrade-set
processing at update time, CICS anticipates this requirement by reserving the
additional strings at read-for-update time. If there are not enough file or LSR pool
strings available, the requesting task waits until they are freed. The CICS statistics
give details of the string waits.
If you want to distribute your strings across tasks of different types, the transaction
classes may also be useful. You can use transaction class limits to control the
transactions issuing the separate types of VSAM request, and for limiting the
number of task types that can use VSAM strings, thereby leaving a subset of
strings available for other uses.
All placeholder control blocks must contain a field long enough for the largest key
associated with any of the data sets sharing the pool. Assigning one inactive file
that has a very large key (primary or alternate) into an LSR pool with many strings
may use excessive storage.
Number of strings considerations for ESDS files
If an ESDS is used as an ‘add-only’ file (that is, it is used only in write mode to
add records to the end of the file), a string number of 1 is strongly recommended.
Any string number greater than 1 can significantly affect performance, because of
exclusive control conflicts that occur when more than one task attempts to write
to the ESDS at the same time.
If an ESDS is used for both writing and reading, with writing, say, being 80% of
the activity, it is better to define two file definitions, using one file for writing
and the other for reading.
In general, direct I/O runs slightly more quickly when data CIs are small, whereas
sequential I/O is quicker when data CIs are large. However, with NSR files, it is
possible to get a good compromise by using small data CIs but also assigning extra
buffers, which leads to chained and overlapped sequential I/O. However, all the
extra data buffers get assigned to the first string doing sequential I/O.
VSAM functions most efficiently when its control areas are the maximum size, and
it is generally best to have data CIs larger than index CIs. Thus, typical CI sizes for
data are 4KB to 12KB and, for index, 1KB to 2KB.
In general, you should specify the size of the data CI for a file, but allow VSAM to
select the appropriate index CI to match. An exception to this is if key compression
turns out to be less efficient than VSAM expects it to be. In this case, VSAM may
select too small an index CI size. You may find an unusually high rate of CA splits
occurring with poor use of DASD space. If this is suspected, specify a larger index
CI.
In the case of LSR, there may be a benefit in standardizing on the CI sizes,
because this allows more sharing of buffers between files and thereby allows a
lower total number of buffers. Conversely, there may be a benefit in giving a file
unique CI sizes to prevent it from competing for buffers with other files using the
same pool.
Try to keep CI sizes at 512, 1KB, 2KB, or any multiple of 4KB. Unusual CI sizes
like 26KB or 30KB should be avoided. A CI size of 26KB does not mean that
physical block size will be 26KB; the physical block size will most likely be 2KB in
this case (it is device-dependent).
Specify the number of data and index buffers for NSR using the DATABUFFER
and INDEXBUFFER parameters of the file definition. It is important to specify
sufficient index buffers. If a KSDS consists of just one control area (and,
therefore, just one index CI), the minimum number of index buffers (equal to
STRINGS) is sufficient. But when a KSDS is larger than this, at least one extra
index buffer needs to be specified so that at least the top-level index buffer is
shared by all strings. Further index buffers reduce index I/O to some extent.
Note that when the file is an AIX path to a base, the same INDEXBUFFERS (if the
base is a KSDS) and DATABUFFERS are used for AIX and base buffers (but see
“Data set name sharing” on page 232).
Allowing CICS to calculate the LSR parameters is easy but it requires additional
overhead (when the first file that needs the LSR pool is opened) to build the pool
because CICS must read the VSAM catalog for every file that is specified to use the
pool. Also it cannot be fine-tuned by specifying actual quantities of each buffer
size. When making changes to the size of an LSR pool, refer to the CICS statistics
before and after the change is made. These statistics show whether the proportion
of VSAM reads satisfied by buffer lookaside is significantly changed or not.
In general, you would expect to benefit more by having extra index buffers for
lookaside, and less by having extra data buffers. This is a further reason for
standardizing on LSR data and index CI sizes, so that one subpool does not have a
mix of index and data CIs in it.
Note: Data and index buffers are specified separately with the LSRPOOL
definition. Thus, there is not a requirement to use CI size to differentiate
between data and index values.
Note: If you have specified only buffers or only strings, CICS performs the
calculation for what you have not specified.
The following information helps you calculate the buffers required. A particular file
may require more than one buffer size. For each file, CICS determines the buffer
sizes required for:
v The data component
v The index component (if a KSDS)
v The data and index components for the AIX (if it is an AIX path)
v The data and index components for each AIX in the upgrade set (if any).
When this has been done for all the files that use the pool, the total number of
buffers for each size is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT in the
LSRPOOL definition. The SHARELIMIT parameter takes precedence.
v If necessary, increased to a minimum of three buffers.
v Rounded up to the nearest 4KB boundary.
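The per-buffer-size steps above can be sketched as a simple calculation. This is a
simplified illustration, not the exact CICS algorithm: it assumes SHARELIMIT is a
percentage applied to the accumulated count, and that the rounding to a 4KB
boundary applies to the storage obtained for the buffers of that size.

```python
# Sketch of the LSR buffer derivation described above, for one buffer size.
def lsr_buffers_for_size(accumulated, buffer_size, sharelimit=50):
    n = accumulated * sharelimit // 100          # apply SHARELIMIT percentage
    n = max(n, 3)                                # minimum of three buffers
    storage = -(-n * buffer_size // 4096) * 4096 # round storage up to 4KB boundary
    return n, storage
```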
Note: If the LSR pool is calculated by CICS and the data sets have been archived
by HSM, when the first file that needs the LSR pool is opened, the startup
When the strings have been accumulated for all files, the total is:
v Reduced to either 50% or the percentage specified in the SHARELIMIT
parameter in the LSR pool definition. The SHARELIMIT parameter takes
precedence.
v Reduced to 255 (the maximum number of strings allowed for a pool by VSAM).
v Increased to the largest specified STRINGS value for a particular file.
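The string derivation above can be sketched the same way. Again this is an
illustration of the documented steps under the assumption that SHARELIMIT is a
percentage, not the exact CICS algorithm.

```python
# Sketch of the LSR pool string derivation described above.
def lsr_pool_strings(accumulated, max_file_strings, sharelimit=50):
    n = accumulated * sharelimit // 100  # apply SHARELIMIT percentage
    n = min(n, 255)                      # VSAM maximum strings per pool
    n = max(n, max_file_strings)         # at least the largest file STRINGS value
    return n
```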
To avoid files failing to open because of the lack of adequate resources, you can
specify that CICS should include files opened in RLS mode when it is calculating
the size of an LSR pool using default values. To specify the inclusion of files
defined with RLSACCESS(YES) in an LSR pool being built using values that CICS
calculates, use the RLSTOLSR=YES system initialization parameter
(RLSTOLSR=NO is the default).
See the CICS System Definition Guide for more information about the RLSTOLSR
parameter.
DSN sharing is the default for files using both NSR and LSR. The only exception
to this default is made when opening a file that has been specified as read-only
(READ=YES or BROWSE=YES) and with DSNSHARING(MODIFYREQS) in the file
resource definition. CICS provides this option so that a file (represented by an
When the first member of a group of DSN-sharing NSR files is opened, CICS must
specify to VSAM the total number of strings to be allocated for all file entries in
the group, by means of the BSTRNO value in the ACB. VSAM builds its control
block structure at this time regardless of whether the first data set to be opened is
a path or a base. CICS calculates the value of BSTRNO used at the time of the
open by adding the STRINGS values in all the files that share the same
NSRGROUP= parameter.
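The BSTRNO accumulation just described can be sketched as follows. The
function and the dictionary keys are illustrative only; CICS derives this value
internally from the installed file definitions.

```python
# BSTRNO for a DSN-sharing NSR group: the sum of the STRINGS values of
# all files that share the same NSRGROUP= value, as described above.
def bstrno_for_group(files, group):
    return sum(f["strings"] for f in files if f.get("nsrgroup") == group)
```

For example, two files in group G1 with STRINGS of 3 and 2 yield a BSTRNO
of 5 for the ACB of whichever of them is opened first.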
If you do not provide the NSRGROUP= parameter, the VSAM control block
structure may be built with insufficient strings for later processing. This should be
avoided for performance reasons. In such a case, VSAM invokes the dynamic
string addition feature to provide the extra control blocks for the strings as they
are required, and the extra storage is not released until the end of the CICS run.
AIX considerations
For each AIX defined with the UPGRADE attribute, VSAM upgrades the AIX
automatically when the base cluster is updated.
For NSR, VSAM uses a special set of buffers associated with the base cluster to do
this. This set consists of two data buffers and one index buffer, which are used
serially for each AIX associated with a base cluster. It is not possible to tune this
part of the VSAM operation.
Care should be taken when specifying to VSAM that an AIX should be in the
upgrade set. Whenever a new record is added, an existing record is deleted, or a
record is updated with a changed alternate key, VSAM updates the AIXs in the
upgrade set. This involves extra processing and extra I/O operations.
Adding records to the end of a VSAM data set does not cause CI/CA splits.
Adding sequential records to anywhere but the end causes splits. An empty file
with a low-value dummy key tends to reduce splits; a high-value key increases the
number of splits.
Effects
The LSRPOOLID parameter specifies whether a file is to use LSR or NSR and, if
LSR, which pool.
Where useful
The LSRPOOLID parameter can be used in CICS systems with VSAM data sets.
Limitations
All files with the same base data set, except read-only files with
DSNSHARING(MODIFYREQS) specified in the file definition, must use either the
same LSR pool or all use NSR.
Recommendations
See “VSAM considerations: general objectives” on page 225. Consider removing
files from an LSR pool.
How implemented
The resource usage is defined by the LSRPOOL definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.
Effects
INDEXBUFFERS and DATABUFFERS specify the number of index and data buffers
for an NSR file.
The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings) and efficient sequential operations and
CA splits. Providing extra buffers for high-level index records can reduce physical
I/O operations.
Buffer allocations above the 16MB line represent a significant part of the virtual
storage requirement of most CICS systems.
INDEXBUFFERS and DATABUFFERS have no effect if they are specified for files
using LSR.
Where useful
The INDEXBUFFERS and DATABUFFERS parameters should be used in CICS
systems that use VSAM NSR files in CICS file control.
Limitations
These parameters can be overridden by VSAM if they are insufficient for the
strings specified for the VSAM data set. The maximum specification is 255. A
specification greater than this will automatically be reduced to 255. Overriding of
VSAM strings and buffers should never be done by specifying the AMP= attribute
on the DD statement.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The INDEXBUFFERS and DATABUFFERS parameters are defined in the file
definition on the CSD. They correspond exactly to VSAM ACB parameters:
INDEXBUFFERS is the number of index buffers, DATABUFFERS is the number of
data buffers.
Effects
The BUFFERS parameter allows for exact definition of specific buffers for the LSR
pool.
The number of buffers can have a significant effect on performance. The use of
many buffers can permit multiple concurrent operations (if there are the
corresponding number of VSAM strings). It can also increase the chance of
successful buffer lookaside with the resulting reduction in physical I/O operations.
The number of buffers should achieve an optimum between increasing the I/O
saving due to lookaside and increasing the real storage requirement. This optimum
is different for buffers used for indexes and buffers used for data. Note that the
optimum buffer allocation for LSR is likely to be significantly less than the buffer
allocation for the same files using NSR.
Where useful
The BUFFERS parameter should be used in CICS systems that use VSAM LSR files
in CICS file control.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The BUFFERS parameter is defined in the file definition on the CSD. For more
information about the CSD, see the CICS Resource Definition Guide.
How monitored
The effects of these parameters can be monitored through transaction response
times and data set and paging I/O rates. The effectiveness is reflected in both the
file and LSRPOOL statistics. The CICS file statistics show data set activity to
VSAM data sets.
The VSAM catalog and RMF can show data set activity, I/O contention, space
usage, and CI size.
Effects
The STRINGS parameter for files using NSR has the following effects:
v It specifies the number of concurrent asynchronous requests that can be made
against that specific file.
v It is used as the STRINGS in the VSAM ACB.
v It is used, in conjunction with the BASE parameter, to calculate the VSAM
BSTRNO.
v A number greater than 1 can adversely affect performance for ESDS files used
exclusively in write mode. With a string number greater than 1, the cost of
invalidating the buffers for each of the strings is greater than waiting for the
string, and there can be a significant increase in the number of VSAM EXCP
requests.
Strings represent a significant part of the virtual storage requirement of most CICS
systems. With CICS, this storage is above the 16MB line.
Where useful
The STRINGS parameter should be used in CICS systems that use VSAM NSR files
in CICS file control.
Limitations
A maximum of 255 strings can be used as the STRNO or BSTRNO in the ACB.
Recommendations
See “Number of strings considerations for ESDS files” on page 229 and “VSAM
considerations: general objectives” on page 225.
How implemented
The number of strings is defined by the STRINGS parameter in the CICS file
definition on the CSD. It corresponds to the VSAM parameter in the ACB except
where a base file is opened as the first for a VSAM data set; in this case, the
CICS-accumulated BSTRNO value is used as the STRNO for the ACB.
How monitored
The effects of the STRINGS parameter can be seen in increased response times and
monitored by the string queueing statistics for each file definition. RMF can show
I/O contention in the DASD subsystem.
Effects
The STRINGS parameter relating to files using LSR has the following effects:
v It specifies the number of concurrent requests that can be made against that
specific file.
v It is used by CICS to calculate the number of strings and buffers for the LSR
pool.
v It is used as the STRINGS for the VSAM LSR pool.
v It is used by CICS to limit requests to the pools to prevent a VSAM
short-on-strings condition (note that CICS calculates the number of strings
required per request).
v A number greater than 1 can adversely affect performance for ESDS files used
exclusively in write mode. With a string number greater than 1, the cost of
resolving exclusive control conflicts is greater than waiting for a string. Each
time exclusive control is returned, a GETMAIN is issued for a message area,
followed by a second call to VSAM to obtain the owner of the control interval.
Where useful
The STRINGS parameter can be used in CICS systems with VSAM data sets.
Limitations
A maximum of 255 strings is allowed per pool.
Recommendations
See “Number of strings considerations for ESDS files” on page 229 and “VSAM
considerations: general objectives” on page 225.
How implemented
The number of strings is defined by the STRNO parameter in the file definition on
the CSD, which limits the concurrent activity for that particular file.
How monitored
The effects of the STRINGS parameter can be seen in increased response times for
each file entry. The CICS LSRPOOL statistics give information on the number of
data set accesses and the highest number of requests for a string.
Examination of the string numbers in the CICS statistics shows that there is a
two-level check on string numbers available: one at the data set level (see “File
control” on page 385), and one at the shared resource pool level (see “LSRpool” on
page 416).
Effects
The KEYLENGTH parameter causes the “placeholder” control blocks to be built
with space for the largest key that can be used with the LSR pool. If the
KEYLENGTH specified is too small, it prevents requests for files that have a longer
key length.
Where useful
The KEYLENGTH parameter can be used in CICS systems with VSAM data sets.
Recommendations
See “VSAM considerations: general objectives” on page 225.
The key length should always be as large as, or larger than, the largest key for files
using the LSR pool.
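That recommendation amounts to a simple check, sketched here for illustration
(the function name and inputs are hypothetical; in practice the key lengths come
from the data set definitions of the files assigned to the pool).

```python
# KEYLEN for the pool must be at least as large as the largest key
# (primary or alternate) of any file using the pool, as recommended above.
def keylen_sufficient(pool_keylen, file_key_lengths):
    return pool_keylen >= max(file_key_lengths)
```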
How implemented
The maximum key length is defined in the KEYLEN parameter in the file
definition on the CSD. For more information about the CSD, see the CICS
Resource Definition Guide.
Effects
The method used by CICS to calculate LSR pool parameters and the use of the
SHARELIMIT value is described in “VSAM considerations: general objectives” on
page 225.
This parameter has no effect if both the BUFFERS and the STRINGS parameters
are specified for the pool.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
The SHARELIMIT parameter is specified in the LSR pool definition. For more
information, see the CICS Resource Definition Guide.
Effects
CICS always builds a control block for LSR pool 1. CICS builds control blocks for
other pools if either an LSR pool definition is installed, or a file definition at
CICS initialization time has LSRPOOL= defined with the number of the pool.
Where useful
VSAM local shared resources can be used in CICS systems that use VSAM.
Recommendations
See “VSAM considerations: general objectives” on page 225.
How implemented
CICS uses the parameters provided in the LSR pool definition to build the LSR
pool.
How monitored
VSAM LSR can be monitored by means of response times, paging rates, and CICS
LSRPOOL statistics. The CICS LSRPOOL statistics show string usage, data set
activity, and buffer lookasides (see “LSRpool” on page 416).
Hiperspace buffers
VSAM Hiperspace buffers reside in MVS expanded storage. These buffers are
backed only by expanded storage. If the system determines that a particular page
of this expanded storage is to be used for another purpose, the current page’s
contents are discarded rather than paged-out. If VSAM subsequently requires this
Effects
The use of a very large number of Hiperspace buffers can reduce both physical
I/O and pathlength when accessing your CICS files because the chance of finding
the required records already in storage is relatively high.
Limitations
Because the amount of expanded storage is limited, it is possible that the
installation will overcommit its use and VSAM may be unable to allocate all of the
Hiperspace buffers requested. MVS may use expanded storage pages for purposes
other than those allocated to VSAM Hiperspace buffers. In this case CICS
continues processing using whatever buffers are available.
If address space buffers are similarly overallocated then the system would have to
page. This overallocation of address space buffers is likely to seriously degrade
CICS performance whereas overallocation of Hiperspace buffers is not.
Hiperspace buffer contents are lost when an address space is swapped out. This
causes increased I/O activity when the address space is swapped in again. If you
use Hiperspace buffers, you should consider making the CICS address space
nonswappable.
Recommendations
Keeping data in memory is usually very effective in reducing CPU costs,
provided adequate central and expanded storage is available. Using mainly
Hiperspace buffers, rather than all address space buffers, can be the most
effective option, especially in environments where there are more pressing
demands for central storage than VSAM data.
How implemented
CICS never requests Hiperspace buffers as a result of its own resource calculations.
You have to specify the size and number of virtual buffers and Hiperspace buffers
that you need.
You can use the RDO parameters of HSDATA and HSINDEX, which are added to
the LSRPOOL definition to specify Hiperspace buffers. Using this method you can
adjust the balance between Hiperspace buffers and virtual buffers for your system.
For further details of the CEDA transaction, see the CICS Resource Definition Guide.
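For example, the balance between address space and Hiperspace buffers of a given size might be specified on the LSRPOOL definition as follows. This is a sketch: the pool name, group, and buffer counts are hypothetical.

```
* DFHCSDUP input: 4K data and index buffers split between
* address space (DATA4K/INDEX4K) and Hiperspace
* (HSDATA4K/HSINDEX4K) - illustrative values only
DEFINE LSRPOOL(POOL1) GROUP(PERFGRP)
       LSRPOOLID(1)
       DATA4K(50)  HSDATA4K(400)
       INDEX4K(30) HSINDEX4K(100)
```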
Subtasking: VSAM (SUBTSKS=1)
Effects
The objective of subtasks is to increase the maximum throughput of a single CICS
system on multiprocessors. However, the intertask communication increases total
processor utilization.
When I/O is done on subtasks, any extended response time which would cause
the CICS region to stop, such as CI/CA splitting in NSR pools, causes only the
additional TCB to stop. This may allow more throughput in a region that has very
many CA splits in its files, but it has to be assessed cautiously against the
extra overhead associated with using the subtask.
Limitations
Subtasking can improve throughput only in multiprocessor MVS images, because
additional processor cycles are required to run the extra subtask. For that reason,
we do not recommend the use of this facility on uniprocessors (UPs). It should be
used only for a region that reaches the maximum capacity of one processor in a
complex that has spare processor capacity or has NSR files that undergo frequent
CI/CA splitting.
Regions that do not contain significant amounts of VSAM data set activity
(particularly update activity) do not gain from VSAM subtasking.
Application task elapsed time may increase or decrease because of conflict between
subtasking overheads and better use of multiprocessors. Task-related DSA
occupancy increases or decreases proportionately.
Recommendations
SUBTSKS=1 should normally be specified only when the CICS system is run on an
MVS image with two or more processors, the peak processor utilization due to
the CICS main TCB in a region exceeds, say, about 70% of one processor, and a
significant amount of I/O activity within the CICS address space is eligible for
subtasking.
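As a minimal sketch, subtasking is enabled with the SUBTSKS system initialization parameter, for example as a SIT override:

```
* SIT override (SYSIN): start the concurrent mode TCB so that
* eligible VSAM file control I/O runs under a subtask
SUBTSKS=1,
```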
The maximum system throughput of this sort of CICS region can be increased by
using the I/O subtask, but at the expense of some additional processing for
communication between the subtask and the MVS task under which the
transaction processing is performed. This additional processing is seldom justified
unless the CICS region has reached or is approaching its throughput limit.
A TOR that is largely or exclusively routing transactions to one or more AORs has
very little I/O that is eligible for subtasking. It is not, therefore, a good candidate
for subtasking.
Subtasking should be considered for a busy FOR that often has a significant
amount of VSAM I/O (but remember that DL/I processing of VSAM data sets is
not subtasked).
| How monitored
| CICS dispatcher domain statistics include information about the modes of TCB
| listed in “Subtasking: VSAM (SUBTSKS=1)” on page 241.
|
Data tables
Data tables enable you to build, maintain and have rapid access to data records
contained in tables held in virtual storage above the 16MB line. Therefore, they can
provide a substantial performance benefit by reducing DASD I/O and pathlength
resources. The pathlength to retrieve a record from a data table is significantly
shorter than that to retrieve a record already in a VSAM buffer.
Effects
v After the initial data table load operation, DASD I/O can be eliminated for all
user-maintained and for read-only CICS-maintained data tables.
v Reductions in DASD I/O for CICS-maintained data tables are dependent on the
READ/WRITE ratio. This is a ratio of the number of READs to WRITEs that
was experienced on the source data set, prior to the data table implementation.
They also depend on the data table READ-hit ratio, that is, the number of
READs that are satisfied by the table, compared with the number of requests
that go against the source data set.
v CICS file control processor consumption can be reduced by up to 70%. This is
dependent on the file design and activity, and is given here as a general
guideline only. Actual results vary from installation to installation.
For CICS-maintained data tables, CICS ensures the synchronization of source data
set and data table changes. When a file is recoverable, the necessary
synchronization is already effected by the existing record locking. When the file is
nonrecoverable, there is no CICS record locking and the note string position (NSP)
mechanism is used instead for all update requests. This may have a small
performance impact of additional VSAM ENDREQ requests in some instances.
Recommendations
v Remember that data tables are defined by two RDO parameters, TABLE and
MAXNUMRECS of the file definition. No other changes are required.
v Start off gradually by selecting only one or two candidates. You may want to
start with a CICS-maintained data table because this simplifies recovery
considerations.
v Select a CICS-maintained data table with a high READ to WRITE ratio. This
information can be found in the CICS LSRPOOL statistics (see page 416) or by
running a VSAM LISTCAT job.
v READ INTO is recommended, because READ SET incurs slightly more internal
overhead.
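The READ INTO recommendation can be illustrated with a command-level request; the file, key, and working-storage names here are hypothetical.

```
* Retrieve a data table record into working storage.
* READ ... INTO avoids the slight extra internal overhead
* that READ ... SET (locate mode) incurs.
EXEC CICS READ FILE('CUSTTAB')
          INTO(WS-CUSTOMER-REC)
          RIDFLD(WS-CUSTOMER-KEY)
END-EXEC.
```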
How implemented
Data tables can be defined using either the DEFINE FILE command of the CEDx
transaction or the DFHCSDUP utility program. See the CICS Resource Definition
Guide for more information.
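For example, a CICS-maintained data table might be defined through DFHCSDUP as follows; the file name, data set name, and record limit are illustrative only.

```
* DFHCSDUP input: CICS-maintained data table over an
* existing VSAM KSDS (illustrative names and values)
DEFINE FILE(CUSTTAB) GROUP(PERFGRP)
       DSNAME(PROD.CUSTOMER.KSDS)
       TABLE(CICS)
       MAXNUMRECS(50000)
```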
How monitored
Performance statistics are gathered to assess the effectiveness of the data table.
They are in addition to those available through the standard CICS file statistics.
| Coupling facility data tables
| A CFDT is similar in many ways to a shared user-maintained data table, and the
| API used to store and retrieve the data is based on the file control API used for
| user-maintained data tables. The data, unlike a UMT, is not kept in a dataspace in
| an MVS image and controlled by a CICS region; instead, it is held in a coupling
| facility list structure, where it can be shared by CICS regions across the sysplex.
| CFDTs are particularly useful for informal shared data. Uses could include a
| sysplex-wide shared scratchpad, look-up tables of telephone numbers, and creating
| a subset of customers from a customer list. Compared with existing methods of
| sharing data of this kind, such as shared data tables, shared temporary storage or
| RLS files, CFDTs offer some distinct advantages:
| v If the data is frequently accessed for modification, CFDT provides superior
| performance compared with function-shipped UMT requests, or using an RLS
| file
| v CFDT-held data can be recoverable within a CICS transaction. Recovery of the
| structure is not supported, but the CFDT server can recover from a unit of work
| failure, and in the event of a CICS region failure, a CFDT server failure, and an
| MVS failure (that is, updates made by units of work that were in-flight at the
| time of the failure are backed out). Such recoverability is not provided by shared
| temporary storage.
| There are two models of coupling facility data table: the contention model and
| the locking model.
| The locking model causes records to be locked following a read for update request
| so that multiple updates cannot occur.
| The relative cost of using update models and recovery is related to the number of
| coupling facility accesses needed to support a request. Contention requires the least
| number of accesses, but if the data is changed, additional programming and
| coupling facility accesses would be needed to handle this condition. Locking
| requires more coupling facility accesses, but does mean a request will not need to
| be retried, whereas retries can be required when using the contention model.
| Recovery also requires further coupling facility accesses, because the recovery data
| is kept in the coupling facility list structure.
| The following table shows the number of coupling facility accesses needed to
| support the CFDT request types by update model.
| Request type          Contention    Locking    Recoverable
| Open, Close                3            3            6
| Read, Point                1            1            1
| Write new record           1            1            2
| Read for Update            1            2            2
| Unlock                     0            1            1
| Rewrite                    1            1            3
| Delete                     1            1            2
| Delete by key              1            2            3
| Syncpoint                  0            0            3
| Lock WAIT                  0            2            2
| Lock POST                  0            2            2
| Cross-system POST          0            2 per        2 per
|                                        waiting      waiting
|                                        server       server
| Locking model
| Records held in a coupling facility list structure are marked as locked by updating
| the adjunct area associated with the coupling facility list structure element that
| holds the data. Locking a record requires an additional coupling facility access to
| set the lock, having determined on the first access that the data was not already
| locked.
| If, however, there is an update conflict, a number of extra coupling facility accesses
| are needed, as described in the following sequence of events:
| 1. The request that hits lock contention is initially rejected.
| 2. The requester modifies the locked record adjunct area to express an interest in
| it. This is a second extra coupling facility access for the lock waiter.
| 3. The lock owner has its update rejected because the record adjunct area has
| been modified, requiring the CICS region to re-read and retry the update. This
| results in two extra coupling facility accesses.
| 4. The lock owner sends a lock release notification message. If the lock was
| requested by a different server, this results in a coupling facility access to write
| a notification message to the other server and a coupling facility access to read
| it on the other side.
| Contention model
| The contention update model uses the entry version number to keep track of
| changes. The entry version number is changed each time the record is updated.
| This allows an update request to check that the record has not been altered since
| its copy of the record was acquired.
| When an update conflict occurs, additional coupling facility accesses are needed:
| v The request that detects that the record has changed is initially rejected and a
| CHANGED response is sent.
| v The application receiving the response has to decide whether to retry the
| request.
| Recommendations
| Choose an appropriate use for a CFDT: for example, cross-system, recoverable
| scratchpad storage, where shared TS does not give the required function, or
| where VSAM RLS incurs too much overhead.
| A large file requires a large amount of coupling facility storage to contain it.
| Smaller files are better CFDT candidates (unless your application is written to
| control the number of records held in a CFDT).
| The additional cost of using a locking model compared with a contention model is
| not great. Considering that using the contention model may need application
| changes if you are using an existing program, locking is probably the best choice
| of update model for your CFDT. If coupling facility accesses are critical to you,
| they are minimized by the contention model.
| Recovery costs slightly more in CPU usage and in coupling facility utilization.
| Allow for expansion when sizing the CFDT. The amount of coupling facility
| storage a structure occupies can be increased dynamically up to the maximum
| defined in the associated coupling facility resource management (CFRM) policy
| with a SETXCF ALTER command. The MAXTABLES value defined to the CFDT
| server should allow for expansion. Therefore, consider setting it to a value higher
| than your initial requirements.
| The utilization of the CFDT should be monitored regularly, through both CICS
| and CFDT statistics and RMF. Check that the size of the structure is reasonable for
| the amount of data it contains. A maximum usage of 80% is a reasonable target.
| Defining a maximum coupling facility list structure size in the CFRM policy
| definition to be greater than the initial allocation size specified by the POOLSIZE
| parameter in the CFDT server startup parameters enables you to enlarge the
| structure dynamically with a SETXCF ALTER command if the structure does fill in
| extraordinary circumstances.
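For example, using the structure name that appears in the RMF report later in this section, such an alteration might look like this; the target size, in units of 1KB, is hypothetical.

```
SETXCF START,ALTER,STRNAME=DFHCFLS_PERFCFT2,SIZE=16384
```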
| Ensure that the AXMPGANY storage pool is large enough. This can be increased
| by increasing the REGION size for the CFDT server. Insufficient AXMPGANY
| storage may lead to 80A abends in the CFDT server.
| How implemented
| A CFDT is defined to a CICS region using a FILE definition with the following
| parameters:
| v TABLE(CF)
| v MAXNUMRECS(NOLIMIT|number(1 through 99999999))
| v CFDTPOOL(pool_name)
| v TABLENAME(name)
| v UPDATEMODEL(CONTENTION|LOCKING)
| v LOAD(NO|YES)
| MAXNUMRECS specifies the maximum number of records that the CFDT can
| hold.
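Putting these parameters together, a CFDT file definition supplied to DFHCSDUP might look like this; all names and the record limit are illustrative.

```
* DFHCSDUP input: coupling facility data table using the
* locking update model (illustrative names and values)
DEFINE FILE(SCRATCH) GROUP(PERFGRP)
       TABLE(CF)
       CFDTPOOL(PERFCFT2)
       TABLENAME(SCRATCH1)
       UPDATEMODEL(LOCKING)
       MAXNUMRECS(10000)
       LOAD(NO)
```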
| The first CICS region to open the CFDT determines the attributes for the file. Once
| opened successfully, these attributes remain associated with the CFDT through the
| data in the coupling facility list structure. Unless this table or coupling facility list
| structure is deleted or altered by a CFDT server operator command, the attributes
| persist even after CICS and CFDT server restarts. Other CICS regions attempting to
| open the CFDT must have a consistent definition of the CFDT, for example using
| the same update model.
| The CFDT server controls the coupling facility list structure and the data tables
| held in this structure. The parameters documented in the CICS System Definition
| Guide describe how initial structure size, structure element size, and
| entry-to-element ratio can be specified.
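As a sketch, a CFDT server might be given an initial structure size and table limit as shown below. The exact parameter file format is described in the CICS System Definition Guide; POOLSIZE and MAXTABLES are the parameters discussed above, and the values are assumptions.

```
POOLSIZE=12M
MAXTABLES=100
```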
| How monitored
| Both CICS and the CFDT server produce statistics records. These are described in
| “Appendix C. Coupling facility data tables server statistics” on page 509.
| The CICS file statistics report the various requests by type issued against each
| CFDT. They also report if the CFDT becomes full, the highest number of records
| held and a Changed Response/Lock Wait count. This last item can be used to
| determine for a contention CFDT how many times the CHANGED condition was
| returned. For a locking CFDT this count reports how many times requests were
| made to wait because the requested record was already locked.
| The CFDT server statistics show the amount of space currently used in a coupling
| facility list structure (Size) and the maximum size (Max size) defined for the
| structure. The structure size can be increased by using a SETXCF ALTER
| command. The number of lists defined is determined by the MAXTABLES
| parameter for the CFDT server. In this example, the structure can support up to
| 100 data tables (and 37 lists for control information).
| Each list entry comprises a fixed length section for entry controls and a variable
| number of data elements. The size of these elements is fixed when the structure is
| first allocated in the coupling facility, and is specified to the CFDT server by the
| ELEMSIZE parameter. The allocation of coupling facility space between entry
| controls and elements will be altered automatically and dynamically by the CFDT
| server to improve space utilization if necessary.
| The reserve space is used to ensure that rewrites and server internal operations can
| still function if a structure fills with user data.
| The amount of storage used within the CFDT region to support AXM requests is
| also reported. For example:
| AXMPG0004I Usage statistics for storage page pool AXMPGANY:
| Size In Use Max Used Free Min Free
| 30852K 636K 672K 30216K 30180K
| 100% 2% 2% 98% 98%
| Gets Frees Retries Fails
| 3122 3098 0 0
| AXMPG0004I Usage statistics for storage page pool AXMPGLOW:
| Size In Use Max Used Free Min Free
| 440K 12K 12K 428K 428K
| 100% 3% 3% 97% 97%
| Gets Frees Retries Fails
| 3 0 0 0
| The CFDT server uses storage in its own region for AXMPGANY and
| AXMPGLOW storage pools. AXMPGANY accounts for most of the available
| storage above the 16MB line in the server region.
| RMF reports
| In addition to the statistics produced by CICS and the CFDT server, you can
| monitor the performance and use of the coupling facility list structure using the
| RMF facilities available on OS/390. A ‘Coupling Facility Activity’ report can be
| used to review the use of a coupling facility list structure. For example, this section
| of the report shows the DFHCFLS_PERFCFT2 structure size (12M), how much of
| the coupling facility is occupied (0.6%), some information on the requests handled,
| and how this structure has allocated and used the entries and data elements within
| this particular list structure.
| % OF % OF AVG LST/DIR DATA LOCK DIR REC/
| STRUCTURE ALLOC CF # ALL REQ/ ENTRIES ELEMENTS ENTRIES DIR REC
| TYPE NAME STATUS CHG SIZE STORAGE REQ REQ SEC TOT/CUR TOT/CUR TOT/CUR XI'S
|
| LIST DFHCFLS_PERFCFT2 ACTIVE 12M 0.6% 43530 93.2% 169.38 3837 39K N/A N/A
| 1508 11K N/A N/A
| RMF also reports on the activity (performance) of each structure, for example:
|
|
| STRUCTURE NAME = DFHCFLS_PERFCFT2 TYPE = LIST
| # REQ -------------- REQUESTS ------------- -------------- DELAYED REQUESTS -------------
| SYSTEM TOTAL # % OF -SERV TIME(MIC)- REASON # % OF ---- AVG TIME(MIC) -----
| NAME AVG/SEC REQ ALL AVG STD_DEV REQ REQ /DEL STD_DEV /ALL
|
| MV2A 43530 SYNC 21K 49.3% 130.2 39.1
| 169.4 ASYNC 22K 50.7% 632.7 377.7 NO SCH 0 0.0% 0.0 0.0 0.0
| CHNGD 0 0.0% INCLUDED IN ASYNC
| DUMP 0 0.0% 0.0 0.0
| This report shows how many requests were processed for the structure
| DFHCFLS_PERFCFT2 and the average service times (response times) for the two
| categories of requests, synchronous and asynchronous. Be aware that requests
| greater than 4K are handled asynchronously. For an asynchronous request, the
| CICS region can continue to execute other work and is informed when the request
| completes. CICS waits for a synchronous request to complete, but these are
| generally very short periods. The example above shows an average service time of
| 130.2 microseconds (millionths of a second). CICS monitoring records show the
| delay time a transaction incurs while waiting for a CFDT response. In the example above, a
| mixed workload of small and large files was used. You can see from the SERV
| TIME values that, on average, the ASYNC requests took nearly 5 times longer to
| process and that there was a wide variation in service times for these requests. The
| STD_DEV value for SYNC requests is much smaller.
|
| VSAM record-level sharing (RLS)
| VSAM record-level sharing (RLS) is a VSAM data set access mode, introduced in
| DFSMS™ Version 1 Release 3, and supported by CICS. RLS enables VSAM data to
| be shared, with full update capability, between many applications running in many
| CICS regions. With RLS, CICS regions that share VSAM data sets can reside in one
| or more MVS images within an MVS parallel sysplex.
| RLS also provides some benefits when data sets are being shared between CICS
| regions and batch jobs.
| Effects
| There is an increase in CPU cost when using RLS compared with function-shipping
| to an FOR using MRO. When measuring CPU usage using the standard DSW
| workload, the following comparisons were noted:
| v Switching from local file access to function-shipping across MRO cross-memory
| (XM) connections incurred an increase of 7.02 ms per transaction in a single
| CPC.
| v Switching from MRO XM to RLS incurred an increase of 8.20 ms per transaction
| in a single CPC.
| v Switching from XCF/MRO to RLS using two CPCs produced a reduction of
| 2.39 ms per transaction.
| v Switching from RLS using one CPC to RLS using two CPCs made no
| appreciable difference.
| However, performance measurements on their own do not tell the whole story,
| and do not take account of other factors, such as:
| v As more and more applications need to share the same VSAM data, the load
| increases on the single file-owning region (FOR) to a point where the FOR can
| become a throughput bottleneck. The FOR is restricted, because of the CICS
| internal architecture, to the use of a single TCB for user tasks, which means that
| a CICS region generally does not exploit multiple CPs.
| v Session management becomes more difficult as more and more AORs connect
| to the FOR.
| v In some circumstances, high levels of activity can cause CI lock contention,
| causing transactions to wait for a lock even though the specific record being
| accessed is not itself locked.
| These negative aspects of using an FOR are resolved by using RLS, which
| provides the scalability lacking in an FOR.
| How implemented
| To use RLS access mode with CICS files:
| 1. Define the required sharing control data sets.
| 2. Specify the RLS_MAX_POOL_SIZE parameter in the IGDSMSxx SYS1.PARMLIB
| member.
| 3. Ensure the SMSVSAM server is started in the MVS image in which you want
| RLS support.
| 4. Specify the system initialization parameter RLS=YES. This enables CICS to
| register automatically with the SMSVSAM server by opening the control ACB
| during CICS initialization. RLS support cannot be enabled dynamically later if
| you start CICS with RLS=NO.
| 5. Ensure that the data sets you plan to use in RLS-access mode are defined, using
| Access Method Services (AMS), with the required recovery attributes using the
| LOG and LOGSTREAMID parameters on the IDCAMS DEFINE statements. If
| you are going to use an existing data set that was defined without these
| attributes, redefine the data set with them specified.
| 6. Specify RLSACCESS(YES) on the file resource definition.
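Step 5 can be sketched with an IDCAMS job step. The cluster and log stream names, key length, and record sizes are hypothetical; LOG(ALL) marks the data set as recoverable with forward recovery, which requires LOGSTREAMID.

```
//DEFCLUS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (                          -
         NAME(PROD.CUSTOMER.KSDS)          -
         INDEXED                           -
         KEYS(16 0)                        -
         RECORDSIZE(250 250)               -
         LOG(ALL)                          -
         LOGSTREAMID(PROD.CUSTOMER.FWDLOG) )
/*
```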
| This chapter has covered the three different modes that CICS can use to access a
| VSAM file. These are non-shared resources (NSR) mode, local shared resources
| (LSR) mode, and record-level sharing (RLS) mode. (CICS does not support VSAM
| global shared resources (GSR) access mode.) The mode of access is not a property
| of the data set itself—it is a property of the way that the data set is opened. This
| means that a given data set can be opened by a user in NSR mode at one time,
| and RLS mode at another. The term non-RLS mode is used as a generic term to
| refer to the NSR or LSR access modes supported by CICS. Mixed-mode operation
| means a data set that is opened in RLS mode and a non-RLS mode concurrently,
| by different users.
| How monitored
| Using RLS-access mode for VSAM files involves SMSVSAM as well as the CICS
| region issuing the file control requests. This means monitoring the performance of
| both CICS and SMSVSAM to get the full picture, using a combination of CICS
| performance monitoring data and SMF Type 42 records written by SMSVSAM:
| CICS monitoring
| For RLS access, CICS writes performance class records to SMF containing:
| v RLS CPU time on the SMSVSAM SRB
| v RLS wait time.
| SMSVSAM SMF data
| SMSVSAM writes Type 42 records, subtypes 15, 16, 17, 18, and 19,
| providing information about coupling facility cache sets, structures, locking
| statistics, CPU usage, and so on. This information can be analyzed using
| RMF III post processing reports.
| The following is an example of the JCL that you can use to obtain a report of
| SMSVSAM data:
| //RMFCF JOB (accounting_information),MSGCLASS=A,MSGLEVEL=(1,1),CLASS=A
| //STEP1 EXEC PGM=IFASMFDP
| //DUMPIN DD DSN=SYS1.MV2A.MANA,DISP=SHR
| //DUMPOUT DD DSN=&&SMF,UNIT=SYSDA,
| // DISP=(NEW,PASS),SPACE=(CYL,(10,10))
| //SYSPRINT DD SYSOUT=*
| //SYSIN DD *
| INDD(DUMPIN,OPTIONS(DUMP))
| OUTDD(DUMPOUT,TYPE(000:255))
| //POST EXEC PGM=ERBRMFPP,REGION=0M
| //MFPINPUT DD DSN=&&SMF,DISP=(OLD,PASS)
| //SYSUDUMP DD SYSOUT=A
| //SYSOUT DD SYSOUT=A
| //SYSPRINT DD SYSOUT=A
| //MFPMSGDS DD SYSOUT=A
| //SYSIN DD *
| NOSUMMARY
| SYSRPTS(CF)
| SYSOUT(A)
| REPORTS(XCF)
| /*
|
| CICS file control statistics contain the usual information about the numbers of file
| control requests issued in the CICS region. They also identify which files are
| accessed in RLS mode and provide counts of RLS timeouts. They do not contain
| EXCP counts, or any information about the SMSVSAM server, its buffer usage,
| or its accesses to the coupling facility.
|
| Overview
| The high level of abstraction required for Java or any OO language involves
| increased layering and more dynamic runtime binding as a necessary part of the
| language. This incurs extra runtime performance cost.
| The benefits of using Java language support include the ease of use of Object
| Oriented programming, and access to existing CICS applications and data from
| Java program objects. The cost of these benefits is currently runtime CPU and
| storage. Although there is a significant initialization cost, even for a Java program
| object built with ET/390, that cost amounts to only a few milliseconds of CPU time
| on the latest S/390® G5 processors. You should not see a noticeable increase in
| response time for a transaction written in Java unless CPU is constrained, although
| there will be a noticeable increase in CPU utilization. You can, however, take
| advantage of the scalability of the CICSplex architecture, and in particular, its
| parallel sysplex capabilities, to scale transaction rates.
|
| Performance considerations
| The main areas that may affect the CPU costs associated with running Java
| program objects with CICS, are discussed in the following sections:
| v “DLL initialization”
| v “LE runtime options” on page 256
| v “API costs” on page 257
| v “CICS system storage” on page 257
| DLL initialization
| At run time, when a Java program is initialized, all dynamic link libraries (DLLs)
|